Copyright Chris Johnson, 1997.
In order to reduce download times, all of the images and illustrations for this course are included in the Lecture notes.

Interactive Systems Design


Chris Johnson

This course introduces Interactive Systems Design. It is aimed at first year undergraduates and its scope is intentionally very broad. For a more detailed introduction to the problems of commercial interface development, see my course on User Interface Design for the Windows Environment. For an even broader discussion see my course on The Social Aspects of Computing.

On completing this course in Interactive Systems Design, you should understand:

These notes supplement the briefer bullet points that structure the lecture material (see the Course Index ). The following two books are also recommended as background:

The Norman book is a highly entertaining introduction to the real-world problems that are caused by poor interface design. The Preece text is a more detailed handbook that will be useful for subsequent HCI courses.

Introduction and Motivation

Why bother?
This chapter presents some of the motivation for and background to Human Computer interaction (HCI) and Interactive Systems Design, in particular. We will discuss broad trends in the development of computer applications. These developments have created the comparatively recent problems associated with mass-market systems. They have also created many niche markets where the users are specialists in their particular field but have little or no interest in information technology.

Here are a few reasons why we should `bother' about HCI:

Textbooks on human computer interaction are full of jargon. Here are a few of the more general terms that you might come across in the rest of this course.

HCI - Human Computer Interaction is concerned with studying and improving the many factors that influence the effectiveness and efficiency of computer use. It combines techniques from psychology, sociology, physiology, engineering, computer science, linguistics...

Ergonomics is the study of work. The term is most widely used in the United Kingdom and Europe, in contrast to the United States and the Pacific basin where the term `Human Factors' is more popular (see below). Ergonomics has traditionally involved the design of the `total working environment'; this includes the height of the chair, table etc. Health and safety legislation, such as the UK Display Screen Equipment Regulations (1992), is increasingly blurring the distinction between HCI and ergonomics. In order to design effective user interfaces, we must consider wider working practices. For instance, the design of a tele-sales system must consider the interaction between the computer application, the telephone equipment and any additional paper documentation.

Human Factors is used to describe the study of user interfaces in their working context. It addresses the `entire person' and includes:

It has much in common with ergonomics but often is used to refer to HCI in the context of safety-critical applications. Physiological problems etc have a greater potential for disaster in these systems.

`Usability' and `ease of use' are often put in quotation marks. They are too vague to be meaningful. Do we mean that a system is easy to use for novices or for experts? Do we mean that it has low learning times or leads to few errors? As a rule of thumb, if you claim a system is easy to use there will always be at least one client or user who will contradict you. To avoid this, it is useful to back claims with more specific evidence. This will become increasingly important as the market becomes more discriminating.

Historical Context

The Middle Ages

The early history of computing can be traced back to the narrow aims of mathematicians, logicians and astronomers. They had particular calculations that needed to be performed.

The Persian astronomer, Al-Kashi (c. 1380-1429), built a device to calculate the conjunction of the planets. Records of this work survived and were transported to Europe, although the device itself was lost. The German mathematician, Wilhelm Schickard (1592-1635), developed a much less sophisticated tool to perform simple addition and subtraction. The Schickard machine was destroyed during the Thirty Years War. Blaise Pascal (1623-1662) was forced to replicate much of Schickard's work but only succeeded in building an even more simplified version of his machine.

There was no gradual improvement in our knowledge over time. War, famine and plague interrupted the development of mechanical computing devices. This, combined with the primitive nature of the hardware, meant that user interfaces were almost non-existent. The systems were used by the people who built them. There was little or no incentive to improve HCI.

The Eighteenth And Nineteenth Century.

The agricultural and industrial revolutions in Western Europe created the need for external markets and external sources of raw materials. This greatly increased the level of trade that was already conducted for spices, gold, slaves etc. This, in turn, led to a rapid expansion in the merchant navies maintained by many countries. In the past, the captains of these ships relied upon local knowledge and expertise. They always plied the same route. As trade developed, this expertise became less important. Ships were sent to wherever the cargo could be found, rather than vice versa. As a result, there was an increasing concern to produce accurate maps and navigation charts. These involved the calculation of precise distances, longitudes etc.

The demand for navigational aids fuelled the development of computing devices. Babbage's (1791-1871) early attempts were funded by the Navy Board. As in previous centuries, the difference engine was designed to calculate a specific function (6th degree polynomials):

a + bN + cN^{2} + dN^{3} + eN^{4} + fN^{5} + gN^{6}
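The difference engine tabulated such polynomials using the method of finite differences: after an initial set-up, every new value is produced by additions alone, which is exactly what a train of gear wheels can do. A minimal sketch of the method (the coefficients below are invented for illustration, not taken from Babbage's tables):

```python
def difference_table(poly, start, degree):
    """Initial column of differences for poly at start, start+1, ..."""
    values = [poly(start + i) for i in range(degree + 1)]
    diffs = [values[0]]
    while len(values) > 1:
        # Successive differencing: each pass yields the next-order differences.
        values = [b - a for a, b in zip(values, values[1:])]
        diffs.append(values[0])
    return diffs

def tabulate(poly, start, degree, count):
    """Generate count values of poly using only additions, as the engine did."""
    diffs = difference_table(poly, start, degree)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        # Each difference is updated by adding the difference one order below it.
        for i in range(degree):
            diffs[i] += diffs[i + 1]
    return out

# An example 6th degree polynomial, as in the formula above.
p = lambda n: 1 + 2*n + n**2 + 3*n**6
print(tabulate(p, 0, 6, 5))  # equals [p(0), p(1), p(2), p(3), p(4)]
```

Once the top row of the difference table is loaded, no multiplication is ever needed; this is why a purely mechanical device could evaluate the function.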
In contrast, however, Babbage's second machine, the analytical engine, was a more general computer. This created the problem of how to supply the machine with its program. Punched cards were used and became perhaps the first solution to a user interface problem. The idea was so popular that this style of interaction dominated computer use for the next century.

The Early Twentieth Century.

The economic pressures for trade increased with the rise of mass production techniques on the east coast of the United States. This also had the effect of drawing migrants from famines in both Ireland and Scandinavia. The rapid influx of people caused severe problems for the United States government. They wanted to monitor this flow in order to avoid the introduction of epidemics from particular parts of the world. They were also concerned to build a profile of the population for tax reasons. As a result, Herman Hollerith (1860-1929) was recruited by the American census office to develop a computational device to calculate general statistics for the immigrant population.

These early attempts led to the foundation of the Computer-Tabulating-Recording Company (1911). This was possibly the first computer company and certainly one of the biggest. In 1914, Thomas J. Watson (Snr) joined and built it into the International Business Machines Corporation. IBM provide greater detail on their early history.

The important point here is that economic and political factors were intervening to create a greater market for computing devices. The term `computer' was originally used to describe the people who manually performed these calculations in the early twentieth century. In these early machines, the style of interaction was still based around the techniques pioneered in Babbage's analytical engine. Sequences of instructions were produced on punched cards. These were entered in batch mode, the jobs were prepared in advance and `interaction' was minimal.

The Mid Twentieth Century.

The Second World War created another set of `narrow' applications for computing devices. In particular, Alan Turing was employed to break the German encryption techniques. The Colossus (1943) was perhaps the first truly interactive computer. The operator could type input through a keyboard and gain output via a teleprinter.

Many of the Colossus techniques were also introduced in the ENIAC machine produced by J.W. Mauchly and J.P. Eckert in the United States. As with Colossus, the impetus for this work came from the military. In this case they were interested in ballistic calculations. To program the machine, you had to physically manipulate 200 plugs and 100-200 relays. The lecture notes include a picture of the Manchester Mark I from about this period.

By this time, the first machine languages were beginning to appear. These systems were intended to hide the details of the underlying hardware from programmers. In previous approaches, you were required to understand the physical machine. For the first time, this created a new class of novice users: people who wanted to learn how to program but who did not want a detailed understanding of the underlying mechanisms.

Turning Points


Before this point, personal computers were used by enthusiasts. They were sold in kits and were distributed through magazines and electronics shops. This meant that their user population consisted almost entirely of experts. They understood the underlying hardware and software mechanisms because they had built most of it. Many people thought that these machines were `toys'. In the late seventies, this attitude began to change as the demand for these low-end systems began to increase.

In 1981, IBM introduced their first PC together with DOS (Disk Operating System). Little has changed in the underlying architecture of this system since its introduction. The relatively low cost and the ease with which small-scale `clusters' could be built (even if they weren't networked) vastly expanded the user population. A cycle set in, where more people were introduced to computers. Increasing amounts of work were transferred to these systems and this forced yet more people to use the applications. As a result, `casual users' began to appear for the first time. These are people whose work occasionally requires the use of a computer but who spend most of their working life away from a terminal. This user group found, and still finds, PCs hard to use. In particular, the textual language required to operate DOS is perceived to be complex and obscure.


In 1982, XEROX introduced their STAR user interface. This marks what many people regard as the beginning of HCI as a conscious design activity by software companies. As a response to the increasing use of PCs by casual users and in office environments, Xerox began to explore more `intuitive' means of presenting the files, directories and devices that were represented by obscure pieces of text in DOS. Files were represented by icons and were deleted by dragging them over a wastebasket.

Initial attempts to support the `desktop metaphor' pushed graphical facilities and processor speeds to their limit. The Apple company had been founded by Steve Jobs and Steve Wozniak in 1976. Initially, they produced a series of kit machines similar to those that led to the IBM PC. They hit upon the idea of pushing the code needed to represent the desktop into hardware. Graphics and device handling were burned into ROM (read only memory). This led to a higher degree of consistency because it became harder to change the look and feel of the interface. Apple provide greater detail on their early history.

The Future

The history of computation has seen a number of major themes. All of these trends indicate that user interface design will be of critical importance in an increasingly competitive marketplace.

A number of future directions can be predicted for the development of user interfaces:

Without design skills we will lose the opportunities that are being created by this technology. Many multimedia `titles' have already been abandoned because they do not meet the user requirements. The remainder of this course will provide you with the user interface design techniques that are needed to make the most of these emerging technologies.

Never Trust Designers' Intuition...

This section isn't intended as an attack on commercial software designers. Instead, it argues that you should always question designers' intuition about the usability of interactive systems. There are very few systems whose designers are typical of the user population. One of the reasons for this is the sheer diversity of many user populations:

One of the things that emerges from this analysis is that interface designers must understand their user population. They can only do this if they talk to their marketing department. Further constraints may be imposed by regulatory authorities and by government directives.

Designing For Novices?

The original idea behind the Apple Macintosh desktop was that novice and casual users should find it relatively easy to learn how to operate the system. This, as we have seen, was largely a reaction to the problems that people experienced when learning how to use the DOS command language. Apple chose to exploit a WIMP style of interaction (Windows, Icons, Mice and Pointing devices). The justification for this is that users do not have the cognitive load of remembering obscure command names. Instead frequent operations, such as deleting a file, can be done graphically. Commands are easy to find because they are accessible through menus. You don't have to remember their names because you can find them by exploiting the options under a menu heading. Apple also chose to 'grey out' items that were unavailable. This helps novice users to learn when to use particular commands.
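The `greying out' idea can be made concrete: each command carries a predicate saying when it is available, and the menu consults that predicate against the current application state before enabling the item. A minimal sketch (the class and field names below are invented for illustration, not Apple's API):

```python
class MenuItem:
    def __init__(self, label, enabled_when):
        self.label = label
        self.enabled_when = enabled_when  # predicate over the application state

def render_menu(items, state):
    """Return (label, enabled?) pairs, as a menu toolkit might before drawing."""
    return [(item.label, item.enabled_when(state)) for item in items]

# Nothing is selected yet, but there is text on the clipboard.
state = {"selection": None, "clipboard": "some text"}
edit_menu = [
    MenuItem("Cut",   lambda s: s["selection"] is not None),
    MenuItem("Copy",  lambda s: s["selection"] is not None),
    MenuItem("Paste", lambda s: s["clipboard"] is not None),
]

print(render_menu(edit_menu, state))
# Cut and Copy are greyed out (False) until something is selected.
```

The benefit for novices is that the menu itself teaches them when a command applies; they can explore the options without ever issuing a command in an invalid state.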

So far, so good. An inevitable consequence of having a large novice population is, however, that some of them will carry on to become expert users. This creates problems because WIMPs can be frustrating during frequent use. The user has to move their hand away from the keyboard. They have to find the mouse, typically hidden under 20 or 30 paper documents, and drag the pointer over to the appropriate menu. In the Macintosh this has led to the use of keyboard accelerators. By hitting chords (this involves pressing several keys at the same time), users can by-pass the menu structure and directly issue commands from the keyboard. This saves a lot of time but some of the chords are far more obscure than in DOS (shift, command, 3, to print the image of my desktop!). The success of the Macintosh can partly be explained by the support it provides at both ends of the expertise spectrum.

Novices quickly turn into experts if your interface is well-designed.

Designing For Experts?

The UNIX operating system was essentially developed by Thompson and Ritchie for expert users. Its command line style of interaction supports high-speed interaction without the need for menu exploration. By making the command language configurable, users can create their own specialised commands. Such facilities create problems for novice users who must continually re-learn the local dialects created by generations of user-defined scripts.

In the opposite manner to the Apple Macintosh, the success of UNIX as a platform for expert users meant that increasing numbers of people were being expected to learn the system. Some of these were `novices' in the classic sense. They were intermittent users who only wanted to exploit programs or results that were generated on UNIX machines. At the same time, expert users became familiar with some of the advantages that WIMP interfaces provide for particular tasks, such as reading news or composing letters. In consequence, window managers were developed to provide the graphical environment of the Macintosh and Windows together with the flexibility of the UNIX command language.

The moral of UNIX and the Macintosh is that if your system is successful you will need to support both novices and experts. When first designing your system, however, you must be aware of the existing level of computer literacy in the target population. UNIX would not have been appropriate for many of the users who were first introduced to the Macintosh. Conversely, a WIMP style of interaction would not have been appropriate for the scientific and high-volume data entry tasks that were first supported by UNIX.

Nobody becomes an expert if your interface is poorly designed, they just stop using it.

A Model Of Interaction
Donald Norman is an Emeritus Professor of the University of California, San Diego, who has also worked at Apple's research laboratories. One of his most important ideas is that human-computer interaction is based around two gulfs that separate the user from their system.

The Gulf Of Execution

Users approach a system with a set of goals: `print the letter', `send mail to my boss' etc. At a more detailed level they develop intentions: `I'll send the mail now'. These intentions have to be broken down into a series of action specifications. By this we mean the steps that the user has to go through to satisfy their intentions: first I'll have to open the mail program, then I'll have to edit a new message... These steps must be performed using the interface facilities provided by the system.

The model would be of little benefit if it didn't provide designers with a framework for understanding why things occasionally `go wrong' in user interfaces. For example, problems might arise if users have inappropriate goals and intentions: `I'll print out an executable file' or `I'll remove my operating system'. Other problems can arise through inappropriate action specifications: `First, I'll delete this old file, then I'll see if I can find my really important collection of e-mail addresses'. Finally, there may be problems with the interface mechanisms themselves. The bottom line from this analysis is that in order to understand good and effective interface design we must also understand the goals and intentions of our users. Mistakes, errors and frustration can occur even if we have high-quality interaction mechanisms.

The Gulf Of Evaluation

The second component of the model is the gulf of evaluation. Once the user has issued a command they must determine whether they have achieved the desired result. They must do this by observing some change in the state of the display. For instance, an icon may appear, a dialogue box may be presented or the prompt may return. Interface designers must not only implement such changes, they must also carefully consider whether users will be able to interpret them correctly. It's no good presenting an icon if nobody knows what it means. Even if the user can interpret the display correctly, they must then be able to evaluate whether their command has been successful. For example, when I print a document from my PC I occasionally get a message stating `Memory violation during printing'. I can interpret this as a message about a problem with my print job. I do not have sufficient information, however, to evaluate whether this is a serious problem or not without referring to manuals and on-line documentation.

As with the gulf of execution, the gulf of evaluation illustrates the point that usability problems can occur even in systems with well designed displays. If users cannot interpret and evaluate the information on their screen then issues of presentation and layout are irrelevant.

Different Models
One of the biggest dangers when designing a user interface is to lapse into `introspection'. Users' tasks are, typically, very different from those anticipated by most designers. For example, I frequently use the picture drawing tool in my word processing package to produce graphs and tables. The designer of this system probably never anticipated that someone would use their tool for this purpose. These different patterns of usage lead to different goals and intentions. I want to be able to measure angles in my drawing tool so that I can integrate complex graphics in my pie charts.

Why do such irritations occur? The package includes both a spreadsheet and a graphics application. I want to be able to use the drawing tools with the graphs from the spreadsheet and vice versa. My model or idea of the system is as a tool that will help me to complete my task: I want to combine graphs and pictures. The designers' model of the system was different. Their view was one in which a distinction existed between drawing a picture in the graphics tool and drawing a chart in the spreadsheet. It might have been possible to avoid this problem if they had been more aware of my model for an ideal system.

The Designer's System Model

There might seem to be a trivial distinction between the designer's view of a system and the users' model of their interface. Common sense and previous experience should help development teams to bridge this divide. Unfortunately, designers are often the last to spot usability problems. They may be so bound up in the details of implementation that they miss critical details. Many of the techniques in HCI are intended to avoid such problems. Questionnaires, prototyping, evaluations are all intended to help designers find out about the user's model of their system.

The User's Mental Model

It is important to emphasise that the users' model of a system will be very different from that of a system designer. Their view of an application is heavily influenced by their tasks, by their goals and intentions. For instance, users may be concerned with letters, documents and printers. They are, typically, less concerned about the disk scheduling algorithms and device drivers that support their system. Clearly, if a designer continues to think in terms of engineering abstractions rather than the objects and operations in the users' task then they are unlikely to produce successful interfaces.

The biggest danger in user interface design is to pretend that you are a `typical' user.


The previous model of interaction included the `gulf of execution'. Users' aims and intentions are satisfied by operating the user interface. These aims and intentions are derived from their tasks.

A task is a high-level activity that motivates the user to operate their computer system in the first place. Identifying these tasks is a critical design activity. Traditionally, research in HCI has been heavily dominated by techniques for task analysis. Most of these approaches exploit some form of hierarchical structuring:

1. Arrange a meeting:
	1.1. Suggest a date.
	1.2. Book participants:
		1.2.1. Check participants free: if yes then go to 1.3, if no then go to 1.1.
	1.3. Book room:
		1.3.1. Check room free: if yes then exit, if no then go to 1.1.
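A task hierarchy like the one above can also be held as a simple data structure, so that designers can traverse it, count leaf actions, or check that every plan condition leads somewhere. A minimal sketch (the class names are invented for illustration; the plan text is kept as plain strings, as in conventional hierarchical task analysis notations):

```python
class Task:
    def __init__(self, number, name, plan=None, subtasks=()):
        self.number, self.name, self.plan = number, name, plan
        self.subtasks = list(subtasks)

# The meeting-arrangement hierarchy from the text.
meeting = Task("1", "Arrange a meeting", subtasks=[
    Task("1.1", "Suggest a date"),
    Task("1.2", "Book participants", subtasks=[
        Task("1.2.1", "Check participants free",
             plan="if yes then go to 1.3, if no then go to 1.1"),
    ]),
    Task("1.3", "Book room", subtasks=[
        Task("1.3.1", "Check room free",
             plan="if yes then exit, if no then go to 1.1"),
    ]),
])

def outline(task, depth=0):
    """Flatten the hierarchy into indented lines, one per task."""
    lines = ["\t" * depth + f"{task.number} {task.name}"]
    for sub in task.subtasks:
        lines += outline(sub, depth + 1)
    return lines

print("\n".join(outline(meeting)))
```

Representing the analysis this way makes it easy to keep the task model alongside the design documents and to revise it as the tasks themselves change.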
These structuring techniques do not answer the question of where user tasks come from in the first place. How can designers actually find out about users' goals and objectives when operating a system? In the subsequent sections we will review some techniques for requirements elicitation. These can be used to gain information about tasks prior to the detailed development of an interface.

It is important to note that task analysis is like many other design activities. It is essentially a cyclic process. The introduction of a computer system will often change the nature of a user's task completely. At its best, information technology re-distributes tasks between the system and the user. Therefore, do not rely upon previous questionnaires etc to provide evidence about operator tasks for subsequent generations of user interfaces.

The Marketing Department's Model

There are some fundamental principles in human-computer interaction. The most important of these is know your user. Their characteristics help to determine the most appropriate style of user interfaces. Their tasks must be supported by your system. It must be possible to map their goals and intentions through your user interfaces. They must be able to interpret and evaluate the displays that you present.

Having said all that, the second principle of human-computer interaction is know your marketing department. It takes time to understand your users. If the marketing department impose tight deadlines then this may be impossible. The best that you can do is exploit some basic guidelines (we'll discuss these in later sections). These are rules of thumb or heuristics that have guided the development of successful interfaces in previous systems. The worst thing that you can do is design the interface as if you were the intended user. Get other people to try your system as you develop it from initial design to final implementation.

A second reason for communicating with the marketing department is that a successful user interface can drive the commercial exploitation of your products. The recommendations of a satisfied user population provide important propaganda for continued investment in HCI. This implies that designers must maintain some contact with their users after software has been delivered. If these links are not maintained then companies sacrifice valuable opportunities for gathering evidence about future interfaces. An additional benefit of such after-delivery contacts is that they increase participation in systems development. Even horrendous displays can be well received providing the operators feel as though they have a `stake' in the system.

User Centred Design
In the previous talks we identified some important differences between designers and users. It was argued that the designer's model of an interface may be very different from that of their users. Engineering concepts, such as bytes or records, must be mapped into user abstractions, files and folders. It was also argued that designers must appreciate the differences between different groups of users. In particular, the way in which we design a user interface can be profoundly affected by the distinctions between novices and experts.

In this section we will explain why differences exist between novices and experts. We will also explore some of the other differences in a user population that must be considered when developing and installing computer systems. In particular, we will identify the effects that perception, cognition and physiology can have upon human performance. The discussion will be pitched at a general level to provide a complete overview of the main problems that these differences can cause. It is important to emphasise that not all of this information will be relevant to all commercial problems. For instance, the developers of a mass market database system may have little or no control over the workstation layout of their users. In other contexts, particularly if you are asked to install equipment within your own organisation, these factors are under your personal control.

Once we have identified the key characteristics that distinguish different users we will be well prepared to discuss some of the more concrete ways in which they can be recruited to support the development lifecycle. The next section will present practical techniques for eliciting the perceptual, cognitive and physiological requirements that constrain many user interfaces.

Perception, Cognition And Physiology
We can characterise user resources into three categories. Perception: the way that they detect information in their environment. Cognition: the way that they process that information. Physiology: the way in which they move and interact with physical objects in their environment.


Perception involves the use of our senses to detect information. We have to make sure that people can see or hear displays if they are to use them. In some environments this causes huge problems. For instance, most aircraft produce over 15 audible warnings. It is relatively easy to confuse them under stress. Background noise can be over 100dB. Although such observations may be worrying for the business traveller, what significance do they have for more general HCI design? We must ensure that signals are redundant. If we display critical information by small changes to the screen then many people will not detect the change. If you rely upon audio signals to inform users about critical events then you exclude deaf consumers. You may irritate users in large offices and baffle users who have the sound turned down.

It's no use displaying it if people can't see it or hear it in their normal working environment.


The study of cognition focuses upon two different phenomena: short and long term memory.

Short term memory has a relatively low capacity. We'll find out exactly how much in a moment. It is fast: if we have something on our mind then we can talk about it almost instantly (what do the letters HCI stand for?). If we have to trawl something up from our long term memory it may involve several moments' thought (name the seven dwarfs). Short term memory also has a relatively short retention period. This is because we actually have to work to keep items in it.

Long term memory, in contrast, has a relatively high capacity. As its name suggests it can store information over a much longer period of time. Access is much slower.

Clearly from the previous observations, we should like to design interfaces that make efficient use of short term memory. Users should only be required to remember a few items of information and they should not be forced to trawl back through dim and distant memories of training programmes in order to operate the system. An increasingly common trick in user interface design is to support short term memory by representing additional information on the display or on index cards. This is effectively what a menu does: it provides fast access to a list of commands that do not have to be memorised. In contrast, help facilities are more like long term memory. We have to load them and trawl through them to find the information that we need.

Do not rely on help facilities to substitute for poor interface design. They can be very irritating, slow to access and rely upon good indexing.

Seven is often regarded as the `magic number' in HCI. We can see this all around us. Important information is kept within the seven item boundary. For instance, postcodes have up to seven components (G12 8QT etc). In some cases, it is necessary to break this rule. In these circumstances, the information is broken up into groups of fewer than seven items. In the United Kingdom, phone numbers are usually divided in this way: (0141) 339 8855. As a rule of thumb or guideline, never expect the user to hold more than seven items of information at any one time. It follows that users will have difficulty in remembering the contents of menus with over ten items. Command languages with many different options will need additional visual cues if operators are to learn them.

Why is seven the magic number? It is as easy for users to hold seven words in short term memory as it is for them to hold seven unrelated items. Additional information can be held, but only if users employ techniques such as chunking. This involves the grouping of information into meaningful sections. It can also involve the use of mnemonics and acronyms to prompt the user to recall additional detail. All of this involves work on the user's part. This can jeopardise the success of a user interface.
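Chunking can be sketched very simply: the same digit string is far easier to hold as three groups than as eleven separate items, which is why designers format identifiers this way on displays. A minimal sketch (the group sizes are just the UK phone number convention mentioned above):

```python
def chunk(digits, sizes):
    """Split a digit string into consecutive groups of the given sizes."""
    groups, pos = [], 0
    for size in sizes:
        groups.append(digits[pos:pos + size])
        pos += size
    return groups

raw = "01413398855"           # eleven separate items: beyond the 7-item limit
print(chunk(raw, [4, 3, 4]))  # three chunks: ['0141', '339', '8855']
```

The interface design lesson is to do this grouping for the user, on the display, rather than relying on each user to invent their own chunking strategy.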

As mentioned, it takes effort to hold things in short term memory. We all experience a sense of relief when it is freed up. For example, you may have felt this when you finished reciting the remembered items in the previous exercise. As a result of the strain of maintaining short term memory, users often hurry to finish some tasks. They want to experience the sense of release that comes when they achieve their objective; this is called closure. This haste can lead to error. The early cash dispensers suffered from this problem. Users experienced a sense of closure when they satisfied their objective of withdrawing money. They then walked away and left their cards in the machine. As a result, cash will now not be dispensed until you take your card.

An important aim for user interface design is to reduce the load on short term memory. We can do this by recording information `in world' not `in the head'. This involves the use of prompts on the display and the provision of paper documentation. Beware, however, it is very easy for users to lose these vital pieces of paper...

People make mistakes if they can't wait to finish using your system.

Knowledge, Rules and Skills

Some people may only have partial information about how to complete a task. In other words, they may only have formed part of the hierarchy shown on the previous page. This, typically, is the situation of a novice user. They will need procedural information about what to do next. Experts will have well formed task models and may not need this guidance. It follows, therefore, that for novel tasks designers may have greater flexibility in the way that they implement their interface. In more established applications, expert users will have well developed task structures and may not adapt so quickly to any changes that you might make.

Up to this point we have been rather vague about the differences that exist between different elements in a user population. A number of models, such as the one shown opposite, have been developed to provide a more precise explanation. It shows the differences between the different degrees of information that people might have about an interactive system. In the worst case, they may only be able to use general knowledge to help them understand the system. Designers can exploit this to support novice users. For example, in the Apple desktop inexperienced users can apply their general knowledge in several ways: `To undelete a file I'll look in the wastebin...'. This is a dangerous approach, however; if this knowledge fails then users are reduced to making guesses.

The second level of interaction introduces the idea that users exploit rules to guide their use of a system. This approach is slightly more informed than the use of general knowledge. For example, users will make inferences based on previous experience. This implies that, whenever possible, designers should develop systems that are consistent: similar operations should be performed in a similar manner. If this approach is adopted then users can apply the rules learned with one system to help them operate another: `To print this page, I go through the File menu and select the option labelled Print...'. There are two forms of consistency:

Operating a user interface by referring to rules learned in other systems can be hard work. Users have to `work out' when they can apply their expertise. It also demands a high level of experience with computer applications. Over time users will acquire the expertise that is required to operate a system. They will no longer need to think about previous experience with other systems and will become skilled in the use of the system. This typifies expert use of an application.


When we operate a system, we gradually move from general knowledge to rules and then to skills. Users with greater expertise will be able to enter the process at a higher level. Ideally, we all want to work at the skill level. We don't want to spend time thinking about use of previous systems or trawling our general knowledge. The more we work at the knowledge and rule level the more uncertain we are about things. Users don't want to be forced to make guesses. They introduce inefficiency and can consume lots of time in `repair' tasks when things go wrong, for instance if we delete a file by accident.

The more we have to think about using the interface the less cognitive and perceptual resources we will have available for our main task.


Physiology is the study of the workings of the human body. It might seem strange to include this in a course on user interface design but it can have a critical impact upon the design of a successful system.

As a minimum requirement users must be able to view the display, reach the input devices and so on. A number of factors may intervene to restrict or prevent operators from achieving this:

It is important to note that interfaces often reflect the assumptions that their designers make about the physiological characteristics of their users. Buttons are designed so that an `average' user can easily select them with a mouse or a tracker-ball. Unfortunately, there is no such thing as an average user. Some users have the physiological capacity to make fine-grained selections but others do not. Increasingly, there is international legislation to improve access to these systems. Even if systems are unaffected by these issues, it is worth remembering that workplace pressures of time and concentration may reduce the physiological ability of users.

Don't make interface objects so small that they cannot be selected by a user in a hurry, carrying a stack of books. Don't make disastrous options so easy to select that they can be started by accident.

Designers often have relatively little influence on the working environments of their users. If you are lucky enough to have some power, here are a few guidelines:

It also pays to consider the possible sources of distraction in the working environment:

A number of regulatory initiatives are affecting the way in which large firms organise the use of computer equipment:

Relatively little seems to have been done to enforce these regulations, but it may be only a matter of time before compensation is claimed for industrial injury in this area. There are also a number of urban myths about the impact of computer systems on human physiology.

Previous talks have discussed the differences between designers and users. We have identified the characteristics that distinguish novices from experts. We have also discussed the finite perceptual, cognitive and physiological resources that users must deploy during interaction. Armed with this information, we can now discuss specific techniques for improving the design of human-computer interfaces.

HCI and Software Development
The previous sections have introduced the differences between designers and users. They have also identified the cognitive, perceptual and physiological limitations that affect interaction with complex systems. In this talk we will discuss the role of HCI within the software engineering lifecycle. We will introduce a model of interface development and we will link that to the `classic' stages in project management.

We will then go on to discuss the difference between guidelines and principles. Finally, we will look at more process based models of HCI design. In other words, rather than forcing designers to look at a series of rules when designing an interface, as in the guideline approach, you get them to go through a series of activities or processes.

The Software Engineering Life Cycle

The software engineering lifecycle helps to divide development into a number of different stages. Its usefulness is as a reference model, not as a detailed framework for project management. In many contexts, it may not be possible to follow all of the stages in exactly the order presented. Having said that, it does accurately describe many development projects and can be used to identify different activities during interface design.

The `classic' software engineering lifecycle does not represent user interface design as a core activity; it has some role in requirements elicitation and testing. It is not clear whether the developers of such a model intended HCI to go on alongside systems engineering, as in Boeing's concurrent engineering techniques, or whether they saw it as a specialist activity that might occur after the implementation has been developed. Such unresolved questions have led a number of analysts to develop different models for HCI projects.

It is critically important to decide where `HCI' activities lie in the project structure. If they are relegated to the end of the development process then they are first in the firing line when budgets are cut and deadlines shortened.

The HCI Lifecycle.

One of the problems with the traditional model for software development is that it does not clearly identify a role for HCI at any point in development. User interface concerns are `mixed in' with wider development activities. This may result in one of two problems: either HCI is ignored or it is relegated to an afterthought during the later stages of design. In either case, the consequences can be disastrous. If HCI is ignored then there is a good chance that problems will occur in the testing and maintenance stages. If HCI is relegated until late in the development cycle then it may prove very expensive to `massage' application functionality into a form that can be readily accessed by the user. Either way, the cost of introducing `usability' issues rises the later they are addressed in the development cycle.

Williges and Williges have produced an alternative model of development to rectify the problems in the `classic' model of software engineering. Here, interface design drives the whole process. This is the current model being adopted by Apple. Software engineers are being laid off in favour of more interface consultants.

The argument is that by spotting user requirements early in the development cycle there will be less of a demand for code generation and modification. Only time will tell if this is a cost-effective strategy. In the meantime, it is important to understand the various techniques that can be used to support interface development early in the lifecycle of a project. The following pages describe techniques that can be used to gain requirements for an interactive system `straight from the horse's mouth'.

The development of good user interface designs may boil down to an unattractive choice between sacking programmers to hire HCI specialists or investing in existing personnel to train them in interface design techniques.

Mechanisms for HCI in Software Development
It is important to identify the mechanisms or techniques that can be used to introduce HCI into the software development lifecycle. Since the early 1980s, most commercial organisations have introduced HCI through the use of guidelines. These are lists of rules about when and where to do things, or not to do things, in an interface. For instance, a guideline might be not to have more than ten items in a menu. Another guideline might be to avoid clutter on a graphical user interface.
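As a rough sketch of how such rules can be mechanised, the hypothetical checker below applies the menu-length guideline just mentioned. The function name, the limit parameter and the warning format are my own inventions, not part of any published guideline set:

```python
def check_menu(items, max_items=10):
    """Apply the example guideline: no more than ten items in a menu.
    Returns a list of warning strings (empty if the menu passes)."""
    warnings = []
    if len(items) > max_items:
        warnings.append(
            f"Menu has {len(items)} items; guideline suggests at most {max_items}.")
    return warnings

print(check_menu(["Open", "Save", "Print"]))          # passes: []
print(check_menu([f"Item {i}" for i in range(12)]))   # produces one warning
```

Of course, as the next paragraphs argue, a rule like this says nothing about what to do when the guideline must be broken.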

The problem with guidelines is that you need a large number of rules in order to cover all of the possible interface problems that might crop up. It is also difficult to know what to do when you have to break a guideline. For instance, what do you do if you have a menu of eleven items? An alternative approach is to develop generic principles. These provide a more abstract approach than guidelines. For example, the principle of predictability states that the user should always be able to work out the probable effects of their commands from the information on the screen. This is more abstract because it doesn't explicitly mention what the system is or what the screen looks like.

The problem with principles is that people often find them difficult to apply. How do I help the user to predict the effects of their commands? More recently, companies have been looking at the process of introducing HCI into software development. In particular, they are concerned to document the steps that they take to elicit the users' requirements and to test the system. This has largely been brought about by the movement to conform with the International Standards Organisation's ISO9000 standard, which sets out approved procedures for software development. Many software purchasers now expect their suppliers to be `ISO9000 conformant'.


The most famous set of guidelines was developed by Smith and Mosier on behalf of the Mitre Corporation. Unsurprisingly, these are known as the Smith and Mosier guidelines. They now include several thousand rules and you really need a hypertext tool to use them. They have been adapted for use by the US military and by NASA. An example of one of Smith and Mosier's guidelines is:
1.6.2 DATA ENTRY: Graphics - Drawing
When users must create symmetric graphic elements, provide a means for specifying a reflection (mirror image) of existing elements.
Several companies have also developed their own style guides. These are similar to the Smith and Mosier guidelines because they simply list dos and don'ts for interface design. They differ from Smith and Mosier in that there are commercial motivations behind them; they are not simply intended to enhance the usability of the interface. Apple's guidelines help you to produce a system that looks and feels like other Apple products. Microsoft's Windows guidelines help you to produce a system that looks and feels like other Windows products. The point here is that once your workforce has become accustomed to one style of interface, you will be encouraged to buy other systems that are consistent with the first one. In other words, you will buy more Microsoft products, more Apple products and so on.

Guidelines and style guides help you to identify good and bad options for your interface. They also restrict the range of techniques that you can use and still `conform' to a particular style.

The Limitations of Guidelines.

Guidelines can be very difficult to apply. In many ways, they are only really as good as the person who is using them. This is a critical point because many companies view guidelines as a panacea. The way to improve an interface is not just to draft a set of rules about how many menu items to use or what colours make good backgrounds. This course emphasises the point that users' tasks and basic psychological characteristics MUST be taken into account. Unless you understand these factors, guidelines have no meaning. For example, the Apple guidelines state that:
``People rely on the standard Macintosh user interface for consistency.   
Don't copy other platforms' user interface elements or behaviours in the Macintosh because they may confuse users who aren't familiar with them.''
This simple guideline glosses over all of the important points about the differences between novices and experts. Using inconsistent features removes an expert's skills in using the previous system. Unless the programmer/designer understands such additional justifications, the true importance of the guideline may be lost.

Apple recognise some of the problems in using guidelines when they state that:

``There are times when the standard user interface doesn't cover the needs of your application.   
This is true in the following situations:
you are creating a new feature for which no element or behaviour exists.   In this case you can extend the Macintosh user interface in a prescribed way;
An existing element does almost everything you need it to, but a little modification that improves its function makes the difference to your application...''
The Apple Guidelines go on to present a number of more generic guidelines, or principles, that can then be used to guide these novel interfaces.


Principles provide an alternative approach to guidelines. The basic idea is that you cannot possibly predict all of the problems that arise during interface design; if you try, you end up with thousands and thousands of guidelines. Instead, principles focus upon problems that are common to many different systems. For example, the principle of observability states that the user must be able to observe an effect on the display for all of the input that they enter. The principle of predictability states that a user should be able to predict the effects of their commands from the information displayed and a minimal knowledge of previous input.

Principles can help to design many different interfaces. They establish goals for the development team. In other words, companies can specify that designers should implement predictable and observable systems without specifying the exact means of achieving these objectives. Principles, therefore, impose fewer constraints than guidelines.

The problem is that, although principles provide design objectives, they don't help with the details of interface development. Also, how can one test whether or not an interface is predictable and observable?


For the last decade or so, there has been a move to introduce standards into interface design. Initially, these focused upon when and where to use particular pieces of hardware. For example, Systems Concepts reviewed the British Standards Institution's standards in this area as follows:
BS EN 29241-1:1993 (ISO 9241) Part 1 General Introduction
The purpose of this standard is to introduce the multi-part standard for the ergonomic requirements for the use of visual display terminals for office tasks and explain some of the basic underlying principles. It describes the basis of the user performance approach and gives an overview of all parts currently published and of the anticipated content of those in preparation. It then provides some guidance on how to use the standard and describes how conformance to parts of BS EN 29241 should be reported.
Not exactly gripping stuff. The problem is that it is difficult for companies to extend this approach to interface design. As with principles and guidelines, it is impossible to describe exact 'usability' criteria for every interface and all classes of user. As we have seen, what is good for a novice may not be good for an expert.

As a result, a new set of standards is being produced by the International Standards Organisation. The main thrust of this work is that companies must follow a set of procedures in order to be accredited. Requirements elicitation, which we will cover in the next session, is a necessary part of any interface development. So too is user testing, to be covered at the end of the course.

HCI and Requirements Elicitation

Gathering Information

Requirements elicitation refers to a group of techniques that can be used to establish the objectives for a user interface during the early stages of development. In particular, it is important to find out who the potential users actually are. These may be different from the people who are actually paying for the system. The objectives for these two different groups may also be different. One may have concerns over their job security. The other may have concerns over the cost-productivity consequences of the product.

Above all, it is important not to lose the support of your users during the early stages of interface development. For example, if you start off by asking questions that are simplistic or ill-informed then users may become antagonistic or irritated by someone from outside the organisation, with little knowledge of their tasks, being asked to design a system for them.

Many requirements elicitation techniques are, therefore, intended to gain maximum information about the context of the system without forcing the designer to ask stupid questions. For example, focus groups and questionnaires can be used to gather initial evidence. Once the designer has become more familiar with the general application domain, they may then use interviews and more direct techniques to gather detailed requirements.

User interface requirements form part of the more general, software engineering problem of requirements capture. Here is Steve Easterbrook's NASA course on the wider aspects of requirements engineering - thanks Steve.

Requirements Elicitation Techniques


The most obvious and widely practised technique of requirements elicitation for HCI involves the use of interviews, and there are a number of reasons why this approach is so popular. In order to prepare yourself for an interview you must consider the following issues:

Interviewing techniques are only effective if you understand the subject's viewpoint. You must understand the pressures in their existing working environment and the forms of prejudice or bias that they might exhibit. DO NOT GO INTO AN INTERVIEW UNPREPARED FOR THESE INFLUENCES.

Rankine Charts.

Group meetings provide a means of avoiding the face-to-face confrontations that can arise in personal interviews. The idea is that you get teams of users to discuss the limitations of their existing systems while you simply observe their discussion.

Unfortunately, any group of people will contain some individuals who are more vocal than others. This means that you can come out of a requirements elicitation exercise with a very biased opinion of what the members of the group thought. In other words, you only hear the opinions of the most `dominant' people in the group, and these might not be the most significant people for the success of the whole project.

Rankine charts address this problem. You ask one of the team to draw up a seating plan that shows the names of the participants in the meeting. Each time someone in the meeting says something, the observer draws a line from the person who made the comment to their intended audience. At the end of the meeting, the number of lines from each person will give you some idea of their contribution to the discussion. After the session, you can then use one-to-one interviews to make sure that everyone's views are represented.
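The bookkeeping behind a Rankine chart can be sketched in a few lines. The names and the (speaker, audience) record format below are illustrative assumptions:

```python
from collections import Counter

def contribution_counts(exchanges):
    """Given (speaker, audience) pairs recorded during a meeting,
    count how many remarks each participant made. The totals give a
    rough measure of who dominated the discussion."""
    return Counter(speaker for speaker, audience in exchanges)

# Each pair is one line drawn on the chart: who spoke, and to whom.
meeting = [("Anne", "Bob"), ("Anne", "Carol"), ("Bob", "Anne"),
           ("Anne", "Bob"), ("Carol", "Anne")]
print(contribution_counts(meeting))  # Anne: 3, Bob: 1, Carol: 1
```

A tally like this would immediately show that Anne dominated the meeting, flagging Bob and Carol as candidates for follow-up interviews.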

An added benefit of this approach is that you don't have to open your mouth until you have some idea of the feelings of the users. The meeting gives you some introduction to their views. The draw-back is that it can be difficult to control large meetings and it can also be difficult to get people to talk freely about their attitudes towards a system if an outsider is present.


As noted, it is often more cost effective to use questionnaires if you need focused answers to specific questions. It is also possible to access the views of a geographically distributed user population in this way. Questionnaires have an important weakness in that it can be difficult for the designer to judge the amount of effort that users put into the questions. You should also anticipate a high failure rate: many forms will go unreturned, even by key individuals in an organisation. Don't be afraid to invest small sums in tricks and incentives; `free draws' can generate publicity and interest in design projects.

The major cost and investment with questionnaires occurs after they have been sent out. You cannot recall them to make minor adjustments or to ask additional questions. It, therefore, pays to spend a significant amount of time in preparing the forms before they are issued. It is especially useful to have a trial run. Here are some issues to consider when putting together a questionnaire:

You can't recall a questionnaire once it has been sent out.

Analysing Data For Questionnaires And Interviews

Be aware that the results of questionnaires and interviews have to be collated. This always takes much longer than you anticipate, especially if you perform large surveys or use video recording techniques. Delays can occur if you are kept waiting to speak to key personnel or if there are any managerial problems in collecting complete questionnaires.

It is important to remember that the techniques described above are likely to provide `qualitative' rather than `quantitative' results. By this we mean that they will provide an impression of the potential requirements for a user interface.

Finally, it is useful to communicate the results of surveys and interviews back to the people you consulted. This again helps to keep people informed and builds their `stake' in the final interface. A further benefit is that they can often help to interpret surprising results.

At the end of the elicitation stage you should have a good idea about user requirements. The next stage in the development process is to use the findings as a means of informing the more detailed design of a user interface. Rank Xerox's QOC notation provides one technique for doing this...

Specification And Initial Design
Interviews and questionnaires provide a mixed set of results about user requirements. They do not tell you how to build and design a system. In order to do this, designers must select the most appropriate options from a range of different alternatives. Should we use a graphical style or a textual format? Should we use colour or greyscale displays? Design rationale notations, such as Rank Xerox's QOC, help to structure these development decisions.

Questions, Options And Criteria (QOC)

QOC diagrams are built by identifying the key questions that must be addressed during the development of an interactive system. The options that answer a particular question can then be linked to it. Finally, the criteria that support those options are linked by either solid or dotted lines: solid lines indicate supporting criteria, while broken lines indicate criteria that count against a particular option.
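A QOC fragment can be captured in a simple data structure. The sketch below is one possible encoding; the question, options and field names are invented for illustration and are not part of the notation itself:

```python
# A minimal encoding of a QOC fragment: one question maps to its options,
# and each option records criteria for it (solid lines in the diagram)
# and against it (dotted lines).
qoc = {
    "question": "How should commands be issued?",
    "options": {
        "Menus": {"for": ["Low memory load"],
                  "against": ["Slower for experts"]},
        "Typed commands": {"for": ["Fast for experts"],
                           "against": ["High memory load"]},
    },
}

def summarise(qoc):
    """Print each option with its supporting and opposing criteria."""
    print(qoc["question"])
    for name, links in qoc["options"].items():
        print(f"  {name}: for={links['for']} against={links['against']}")

summarise(qoc)
```

Even this toy fragment shows the value of the notation: the trade-off between memory load and speed is recorded explicitly, where a design meeting could discuss it.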

QOC offers a number of advantages for interface development. It can be used to represent the many different forms of decision that have to be made during the development life-cycle. A further advantage is that designers are not forced to learn complex notations, as in SSADM or JSD. Boeing have successfully used this approach to communicate design decisions amongst the many different members of large development teams.

If you use design notations make sure they can be translated back into a language that the user can understand...

The design of any user interface involves the selection of preferred options from a number of competing alternatives. The benefit of using QOC is that the diagrams help to explicitly represent these options. They provide a focus for discussion and can be used in conjunction with interviewing techniques. Users can be asked to provide additional options or to assess the criteria that act for and against different alternatives. This helps to increase involvement in the development process. Perhaps more importantly, it helps to make users aware of the various constraints that affect development. Favoured options may not be selected because of additional expense, regulatory constraints, installation problems and so on.

To summarise, the questions options and criteria notation provides designers with a tool for mapping out the `design space' of a user interface. The process of identifying key questions and of eliciting the criteria for competing options is essential during the initial stages of development. The products of this process, the diagrams themselves, are equally valuable for communicating the justifications behind design decisions.

In order to fully explore the `design space' of a user interface, we need to understand the strengths and weaknesses of different dialogue styles. We need to be able to make informed judgements about graphical, tabular and textual user interfaces. The following sections will address this issue.

Text Based Interfaces

Dialogue Styles.
The term `dialogue style' refers to the way in which users provide input and systems present output over time. Any particular interface is liable to exploit a number of dialogue styles. For example, menu-based interfaces use textual interaction to supply filename or device options. Conversely, many textual interfaces exploit graphical interaction techniques in order to support multi-tasking in different windows. Designers must determine which interaction style is most appropriate in any particular context. In order to do this, they must assess the users' task requirements (introduced in the second and third talks) using elicitation techniques (described in the fourth talk).

In many cases, designers are constrained in their choice of dialogue style. You may be specifically commissioned to implement a graphical user interface. Alternatively, the hardware and presentation facilities provided by a customer may only be capable of supporting text-based interaction. Within such constraints, however, there is still considerable scope for the use of different interaction techniques. For example, the predominantly graphical interaction style provided by the Windows program manager still supports command language short-cuts, similar to the Macintosh accelerators mentioned earlier. Given limited resources, it is important to target such additional support to the most `critical' areas within the interface. Otherwise, we might end up implementing both textual and graphical styles of interaction for each interface.

How To Select A Dialogue Style.

What characteristics must be considered when selecting an appropriate dialogue style?

Textual Input?
In spite of the advancing application of menus and icons, text-based dialogues continue to be the main way in which people interact with computer systems. Typing speeds are, on average, much faster than the times that can be achieved for similar operations using click & point techniques, although direct comparisons are difficult to make. Proficient typists on QWERTY keyboards can manage approximately eighty to one hundred words per minute, while the time to pick up a mouse and move it to a specific location depends upon the position of the cursor and the size of the target object. The main differences in time do not, however, seem to arise simply from these different physiological demands. Instead, they stem from the different ways in which users remember textual command languages and menu-based interfaces. Expert users of graphical systems rely upon the prompts provided by the system; expert command language users will have more of this information available in long term memory. This enables them to use techniques such as `typeahead', i.e., not waiting for the prompt before issuing additional commands.

Design Aims For Command Languages

The aims of a well-designed text-based command language are:

Programming Textual Interfaces

One of the biggest advantages of text-based command languages is their scripting facilities. Numerous attempts have been made to develop graphical, iconic interfaces to programming languages and most have failed dismally. It is relatively easy to represent objects, such as files, printers or folders, in a graphical form. It is less easy to devise graphical representations for processes, such as sorting, filtering or deleting.

If you get involved in the development of such applications it is worthwhile supporting scripting facilities whenever possible. These techniques provide a flexible way for users to progress from novice to expert levels of skill. This transition will only take place, however, if the individual components of your language are easily understood. For example, the following command at one installation was intended to get a printout on unlined paper from an IBM 3800 laser printer: CP TAG DEV E VTSO.LOCAL 2 OPTCD=J F=3871 X=CG12

Forming Command Languages

The names of individual commands should be combined according to some reasonable set of rules or grammar. There are a number of set conventions for this:


It is absolutely vital that command languages are consistent. Users will not be able to transfer their skills from previous systems to operate yours if it is inconsistent with other applications. For this reason, it is a good idea to find out what systems your users have been exposed to in the past. Consistency applies at many different levels in a command language. The next paragraph will discuss labelling rules. There is also a need for consistency in the way that a command line is terminated; a number of systems have had problems with internal inconsistency: Press RETURN to continue, Press ENTER to exit(!) and so on. The formation of the language should also be consistent. For instance, in one convention command names should always be followed by the objects that they apply to: Display file. This typifies most command languages. In another convention, commands must always follow objects: File Display. This typifies most graphical user interfaces. The point here is that both approaches are acceptable providing users are not forced to switch between them at arbitrary moments during interaction.

Naming Rules

As mentioned in earlier sections, it is important that command languages should be easy to read and to write. In order for a command to be easy to read, it must bear some relationship to the action that is to be performed: users must be able to decode its meaning. In order for a command to be easy to write, users must be able to perform the reverse encoding, from intended action to command name. In a pathological case, it would not be a good idea to rename the delete command as copy. A further aim is that command languages should be precise and compact. This involves the use of conventions to truncate or abbreviate command names; a number of such conventions have been proposed and used to shorten command names. Finally, in order for a command language to be successful, there must be some means of gaining feedback during the development process. The final section will look at evaluation techniques in more detail, but it is entirely possible to use interviews and questionnaires to gain users' opinions about potential input languages.
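Two of the common abbreviation conventions, fixed truncation and vowel drop, can be sketched in a few lines. This is only an illustration; the command set and the three-letter truncation length are invented for the example:

```python
def fixed_truncation(command, length=3):
    """Abbreviate a command by keeping its first `length` characters."""
    return command[:length]

def vowel_drop(command):
    """Abbreviate a command by keeping the first letter and dropping
    the vowels from the remainder."""
    first, rest = command[0], command[1:]
    return first + "".join(c for c in rest if c.lower() not in "aeiou")

for cmd in ["delete", "display", "print"]:
    print(cmd, fixed_truncation(cmd), vowel_drop(cmd))
```

Whichever convention is chosen, it should be applied uniformly; mixing del with dsply forces users to remember which rule applies to which command.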

Textual Output?
All interface designers must be aware of how to develop textual interfaces. Even if they are primarily involved in the development of graphical systems, they will still have to present some textual information. Many of the observations about the development of command languages still stand for the presentation of textual information. It is clearly important that the interface uses terms that the user will recognise. My Macintosh continues to present error messages of the form Virtual memory error: -23322111f. This may be useful to me as an engineer but it is extremely irritating as a regular user of the system.

Design Aims For Textual Displays

The aims of a well-designed text-based display are:

Font Selection

Typography is the term used to describe the style and appearance of printed matter. More recently, the term has been extended to describe the appearance of text in computer displays. The style of text is determined by its font. Fonts describe the shape and formation of each character. This document is printed in Times Roman. Other families include Geneva, Helvetica, Schoolbook etc. Most people are familiar with them from direct manipulation text editors. Great care is needed when making an appropriate font selection and there are a number of guidelines to aid this process. For example, whenever possible use a serif font in running text; Times Roman, for instance, is a serif font. In simple terms, serif fonts include curls and additional marks at the ends of each letter. These are intended to lead the reader's eye along the line. In contrast, sans-serif fonts (e.g., Geneva) appear plain and stark. They can be used effectively for headings and bulleted points. Beware, however, that many users find these plain styles slightly patronising. Serif fonts, on the other hand, often give the impression of `official documents' or newspapers, which is where they originated.


Fonts are stored in one of two ways. They can be stored as bitmap images. The characters are designed either on paper or by using a font design tool and are then scanned or downloaded into a different bitmap for each character. This has the advantage that relatively little computation is involved in translating a document into the corresponding sequence of bitmaps. It has the disadvantage that quality is lost as the bitmaps are enlarged to different point sizes (see below). This occurs because bitmaps are formed from discrete pixel elements; any curves will, therefore, appear jagged as their size is increased, showing the discontinuity between pixels. Alternatively, fonts can be stored as a number of mathematical functions. This has the advantage that quality is preserved as the scale of the character is increased: the continuous nature of the mathematical functions avoids the 'stair-case' effect that you get when scaling bitmap images. The disadvantage is that there is a computational overhead associated with fonts that are transformed in this way. What this amounts to is that if you use an obscure font on your system, the user may not have the functions in place to provide a high-quality image. In these circumstances, the display device will typically default to scaling a bitmap, with the resulting loss of quality. The moral is: just because it looks good on your display does not mean that it will on the user's...
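The 'stair-case' effect can be demonstrated in miniature. The sketch below enlarges a bitmap by pixel replication, which is the cheap operation a display falls back on; the tiny two-by-two 'glyph' is invented purely for illustration:

```python
def scale_bitmap(bitmap, factor):
    """Enlarge a bitmap by replicating each pixel `factor` times
    horizontally and vertically. Diagonals and curves become visibly
    stepped, because the discrete pixels are simply made larger."""
    scaled = []
    for row in bitmap:
        wide_row = [pixel for pixel in row for _ in range(factor)]
        scaled.extend([wide_row[:] for _ in range(factor)])
    return scaled

# A tiny diagonal stroke: smooth at design size, blocky when doubled.
glyph = [[1, 0],
         [0, 1]]
for row in scale_bitmap(glyph, 2):
    print("".join("#" if p else "." for p in row))
```

A font stored as mathematical functions would instead be re-rendered at the larger size, keeping the diagonal smooth.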

Point Size

The size of a character is determined by its point size. A point is defined to be 1/72 of an inch. Clearly, some sizes are more legible than others. It is important to remember that by altering the point size `in line' you may reduce the benefits provided by the use of serif fonts.
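Since a point is 1/72 of an inch, the pixel height of a character follows directly from the display resolution. A quick sanity check (the resolutions are illustrative):

```python
def points_to_pixels(point_size, dpi):
    """Convert a point size to pixels: 1 point = 1/72 inch."""
    return point_size * dpi / 72

print(points_to_pixels(12, 72))   # 12.0 - on a 72 dpi display, points equal pixels
print(points_to_pixels(12, 300))  # 50.0 - the same text needs far more pixels in print
```

This is one reason why text that is comfortable on screen can look quite different on a printed page.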

Tasteful Design...

It is usually not a good idea to mix fonts and point sizes in the same line unless you are trying to achieve a very particular effect. Typically, changes between these styles are used to indicate some additional information, such as emphasis or a heading. In order for users to understand the meanings associated with different style changes, you should avoid using more than four different point sizes and more than three fonts in the same document. This should be regarded as a rule of thumb; there may be instances where more styles are necessary. In such cases you should conduct some form of evaluation to determine whether your design is having the desired effect.


Many of the guidelines that have been presented for textual interfaces apply equally to printed and to computer-generated information. Here are some of the issues that you must consider:

Forms provide a structure for text-based interfaces. Labels are used to prompt users for appropriate input. Command line interfaces lack this structure; users are often forced to make guesses about possible input sequences. A further advantage of forms is that they are less constraining than menus. Rather than selecting an item from a limited range of options, users are free to choose the text that they want to enter in each field. It would be difficult to provide a menu that lists all of the possible surnames that could be entered into a database. It is also possible to check for erroneous data in form-based interfaces. In most cases, users will be warned if numeric data is entered in a name field.
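A minimal sketch of this kind of per-field check follows. The field names and validation rules are invented for the example; a real form would draw them from the application's data definitions:

```python
def validate_field(label, value, rule):
    """Return an error message if `value` breaks the field's rule,
    or None if the value is acceptable."""
    if rule == "alphabetic" and any(ch.isdigit() for ch in value):
        return f"{label}: numeric data is not valid in this field"
    if rule == "numeric" and not value.isdigit():
        return f"{label}: this field accepts digits only"
    return None

print(validate_field("Surname", "Sm1th", "alphabetic"))
print(validate_field("Account number", "12345", "numeric"))
```

The point of such checks is to warn the user at entry time, before an invalid record reaches the underlying data set.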

It is important to note that forms dominate data entry applications. These are high-volume interfaces where the speed of data entry is critical. It is, therefore, extremely important that designers provide an optimal layout. Tabbing between different entries in a form can be very frustrating. As with any other dialogue style, optimal designs can only be achieved by closely considering the users' task and not simply by analysing application functionality.

Many companies offer tools that support the creation of form-based interfaces. As a consequence, forms are often used in situations where command languages or menus might be more appropriate. In order to make informed decisions about when and where to use forms, it is important that we understand their strengths and weaknesses. Here are some advantages of form-based interaction:

In contrast, here are some weaknesses of this style of interaction:

Design Aims For Form Filling

The design aims for form filling are as follows:

Appearance And Composition.

The appearance of a form can be divided into three components. The first of these is the field panel. The field panel prompts the user and provides some indication of appropriate input formats. In order to enter information, the user must typically select the field that they currently want to edit. This can be done using tab keys, arrows, mouse selection etc. The control panel, see below, provides information about this procedure. Once the field has been selected, the label associated with each entry should provide unambiguous cues about the information that is required. The naming techniques for each field should follow the guidelines on command naming. Remember that users will have to decode this information.

Space is likely to be limited on the display and so it may be necessary to provide additional sources of reference about the data to be entered in each field. If necessary, help facilities and paper documentation should be provided to reiterate the on-screen prompts. The form should provide information about the maximum size of input for each field. It is absolutely vital that you provide feedback if input has been truncated; searching for user names on partial strings can be a risky business. It can be extremely expensive to rebuild systems once a data set is corrupted and invalid entries have been stored. Such errors can be reduced if the same editing functions are provided throughout all of the forms in a system.

Data fields should be grouped either according to:

Within each field, the convention seems to be to use left justification. This helps to ensure that each input starts in the same column on the screen. Finally, it is important to leave sufficient space between groups so that users can easily identify the boundaries between them.

The context panel provides users with information about the purpose of a particular form. It should enable them to identify what they were doing if they have to break off their data entry task, for instance in order to answer a 'phone call. The most common way of doing this is to give each screen a name; reference numbers may be harder to remember unless they have some special significance. The title should always be placed in the same position. As mentioned in previous talks, consistent layout speeds searching tasks and may also help users to distinguish the context information from an ordinary field. This is an important point because some areas of a form will not be editable. If a form has been reached as part of a more elaborate dialogue, users must always be able to find out how they reached this point. One means of doing this is to concatenate the names of the screens that have been visited to reach this point: Account_Details:User_Record:User_Address.
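The concatenated-name scheme can be sketched in a couple of lines. The screen names follow the example above; the separator character is an arbitrary choice:

```python
def context_trail(screens, separator=":"):
    """Build a context-panel title from the names of the screens
    visited so far, so users can see how they reached this form."""
    return separator.join(screens)

visited = ["Account_Details", "User_Record", "User_Address"]
print(context_trail(visited))  # Account_Details:User_Record:User_Address
```

Each time the user moves to a sub-form, its name is appended to the list; each time they back out, the last name is removed.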

The final component of a form based interface is known as the control panel. This includes information about how to navigate between and within a form. Between forms, users must be able to identify means of leaving the current screen. It is important that there should be some means for the user to exit without saving the information. This must be sufficiently different from the normal quit procedure that they will never get the two commands confused. Finally, it is important that users can correct the information that they have entered within particular fields.


The behaviour of forms can be classified as either advanced or simple. As mentioned in the introduction, forms are typically used in high volume data entry tasks. The users of these systems have a difficult and often tedious job. Staff turnover is high and interface designers can have a significant impact upon job satisfaction. This is a classic area where you need user feedback in order to determine whether the layout and behaviour of your interface is sufficient. In closing the discussion of forms, remember that users should typically get some feedback on the results of their commands within approximately two seconds. If this does not occur then they may try corrective action: aborting the command, hitting return several times etc.
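The two-second rule of thumb can be sketched as follows. The deadline constant and the dummy operation are invented for the example; the point is simply that an interim message is shown whenever the system cannot respond within the deadline:

```python
import time

FEEDBACK_DEADLINE = 2.0  # seconds - the rule of thumb from the text

def run_with_feedback(operation, estimated_duration):
    """Run an operation; if it is not expected to finish within the
    feedback deadline, show an interim message so the user does not
    start aborting or re-issuing the command."""
    if estimated_duration > FEEDBACK_DEADLINE:
        print("Working - please wait...")
    start = time.monotonic()
    result = operation()
    elapsed = time.monotonic() - start
    return result, elapsed

result, elapsed = run_with_feedback(lambda: sum(range(100)), estimated_duration=5.0)
print(result)
```

Even a static "please wait" message reassures users that their command has been accepted, although a progress indicator is better where the duration can be estimated.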

The basic difference between menus and forms is that menus offer the user a more constrained choice from a limited number of options. You cannot go into McDonald's and order a Salade Niçoise; you have to pick something from the menu. This approach offers a number of advantages over command languages. The use of lists reminds users of their options, which can help them to direct their activities. Menus provide prompts that support short and long term memory. Users do not have to search through recollections of previous interaction to recall possible commands. Once they have remembered a command, they do not have to continually remind themselves of it to ensure that it will not be forgotten again. Rather than trying to remember the command to print a file, users simply go to the file menu and browse its entries.

Menus have been widely used in a vast range of applications. The Prestel and Oracle systems have supported well over 300,000 screens. This approach is increasingly being used in embedded applications, ranging from televisions to data-line analysers. They provide good support for novice or intermittent users. The on-line representation of available commands helps users to view and explore their options during interaction. They can also be useful for experts if the command set is so large that additional support is required. In such circumstances, it may not be possible for any individual to remember all of the commands or objects that are available through their interface.

As with form based interfaces, menus have been widely applied and often in areas where text based interaction might have been more appropriate. For instance, I have frequently been forced to navigate through menus that offer between fifty and one hundred different options. In order to identify appropriate uses of this dialogue style it is important to review the strengths of menu based interaction:

The following list introduces some of the weaknesses that affect menu-driven interaction:

Design Aims For Menus

The design aims for menus are as follows:

Appearance And Composition.

The rules that govern the layout and appearance of menus are very similar to those that apply for forms. We can divide their structure into a control panel, some context and the options. The guidelines for the options are similar to those that apply to command naming in textual interfaces. It is important that users can decode the names that are used in order to identify the commands and objects that they represent. Given finite screen space this may involve the abbreviation mechanisms that have been described earlier: vowel drop; fixed truncation; variable truncation etc. There is a tendency for designers to abandon the rules that guide the naming of commands elsewhere in the system. It is much more common to see `Select 1 for insert' in a menu than it is in a textual interface. This can be justified on the grounds that the prompt is there to remind users of the meaning associated with the number. However, this leads to a number of problems if multiple menus are used. The number 1 will have to be re-assigned in each successive structure. This destroys the user's ability to memorise and exploit the encodings used to label the items in a menu.

The individual items have to be grouped under labels. If there is only one menu then this is a trivial task. In more complex systems, it must be possible for users to infer that a particular item is available from a particular menu. Typically, this is done by placing the operations on an object under a menu that is labelled by the name of that object. For example, most interfaces have options such as Print, Copy, Open under the FILE menu. Alternatively, items may be collected under labels that reflect common operations. Various forms of SEARCH may be found under the same menu.

Just as items may be grouped under common labels, they are also grouped within a menu. For instance, creation operations such as Open and New are placed together in the File menu. Again, this supports users' navigation tasks within the menu structure. Finally, any destructive options, such as Delete should not be placed at the top of a menu. In mouse driven systems it is relatively simple to `drop-off' the label and select this option by mistake.

As with form-based interaction, menus require some context information if novice users are to understand their location in an interface. Typically, this is done by providing each menu with a name. In simple examples, this may remain visible throughout the interaction. In more complex examples, the label that is used to access a sub-menu may remain visible until the user makes their selection. This approach can be repeated for sub-sub-menus: Format/Page/Size or Edit/Accounts/Marketing/1996. As in form-based interfaces, a consistent layout should be exploited. Users must know where to find the title of a menu in case any confusion occurs. This is likely if similar menu options appear in several different places within an interface. For instance, Open may refer to a File menu, an Application listing or a Directory.

The control panel provides guidance on operating the menu. This may simply consist of a prompt (e.g., > or Enter number:). Prompts must be unambiguous: it must be obvious to the user that they do not form part of the menu structure. Alternatively, control information may be omitted if users have sufficient expertise to understand the selection process.


Menus can behave in a number of different ways. For instance, they may simply support single selections. Alternatively, they may provide multiple check boxes where users can pick several of the available options. For instance, they might decide to check boxes that make a character italic, bold and underlined through the same menu.

If menus have many potential entries then they may be provided with scrolling mechanisms. For example, my Macintosh contains many different fonts. It is impossible to display the list on a single screen, but I can use a scroll bar to move through them. This creates significant navigational problems; it is an extremely broad structure. It is, however, the only option when there is no meaningful way to group the items into sub-menus. If the designers were sufficiently concerned about access times, they might have grouped the fonts under alphabetic labels: Names A-D, Names E-G...
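The alphabetic-grouping idea can be sketched as follows. The font names and band boundaries are invented for the example:

```python
def group_alphabetically(names, bands):
    """Group names into sub-menus labelled by letter ranges,
    e.g. bands = [("Names A-D", "ABCD"), ("Names E-G", "EFG")]."""
    menus = {label: [] for label, _ in bands}
    for name in sorted(names):
        for label, letters in bands:
            if name[0].upper() in letters:
                menus[label].append(name)
                break
    return menus

fonts = ["Geneva", "Courier", "Avant Garde", "Bookman", "Futura"]
bands = [("Names A-D", "ABCD"), ("Names E-G", "EFG")]
print(group_alphabetically(fonts, bands))
```

This trades one long scroll for two shorter menus; whether the extra level of navigation is worth it is exactly the kind of question that user evaluation should answer.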

Finally, it is vital that designers consider the ways in which users can exit from menu structures. In mouse-driven interfaces this may be done by simply releasing the button when it is outside the menu region. In text-based interfaces it is usual to provide a Cancel option. Designers also need to consider how to recover previous menu accesses when users finish their current selection. Should they be returned to the previous menu? Should they be returned to a `home' menu?

Graphical & Direct Manipulation Interfaces
This area could be the subject of its own course. The last few years have seen the effective development of thousands of different styles of graphical user interfaces. There are also a number of innovative new styles being developed in Japan and the United States. We will only have time to review some of the major issues in this section but it should provide you with an overview of your design options.

The term graphical user interface refers to any interactive system that uses pictures or images to communicate information. This is an extremely wide definition. It includes keyboard-based systems that only use graphics to present data. It also includes walk-up-and-use systems where users only interact by selecting portions of a graphical image. In contrast, the term direct manipulation refers to interfaces with the following characteristics:

This list was drawn up by Ben Shneiderman, the originator of the term. It provides a narrower definition than that for graphical user interfaces as a whole. Many Computer Aided Design and image generation systems use command languages to alter the graphics that are displayed. It can be argued that these are graphical systems but are not direct manipulation according to Shneiderman's definition.

The distinction between graphical and direct manipulation systems becomes important when we consider their uses. It is often claimed that graphical displays support novice users. However, if users are forced to operate the systems through complex command languages, as with CAD applications, then this need not be the case. In contrast, Shneiderman's definition of direct manipulation illustrates the benefits that this style of use is intended to offer novice users. Actions are visible and hence may be easily accessed. They are rapid, so feedback is quickly obtained about the success or failure of a command. They are reversible, and so it is possible to undo previous mistakes. Finally, it is important to point out that some systems may be classed as direct manipulation without supporting the general use of graphical images. For instance, many spreadsheets enable users to directly interact with the objects of interest. Mice and other pointing devices can be used to select cells. Operations are visible and incremental. Values can be dragged and then dropped into their destination.

The following list reviews the strengths of graphical interaction:

The following list summarises the main weaknesses of graphical user interfaces:

Design Aims For Graphics & Direct Manipulation

The design aims for graphical user interfaces are as follows:

Appearance And Composition.

As mentioned in previous sections, there must be a natural mapping between the images on the display and the objects and operations of the user's task. This is one of the reasons why the desktop metaphor has become so prevalent. There are dangers, however. In the Macintosh system, the designers could think of no graphical representation for ejecting the disk. As a compromise, this is done by dragging the disk icon to the waste-paper basket: the same operation that is used to delete a file. Many novice users find this slightly worrying at first; this occurs because the metaphor has had to be broken. Throwing a disk into a waste-paper basket has a completely different meaning on the desktop than it does in the real world. This is an instance where, although it is possible to support the visual metaphor (the image of the disk looks like a disk), it is not possible to support the behavioural metaphor.

It is worth noting that there are four different types of commonly used icons in interactive systems:

The more arbitrary and abstract the symbol you use, the more training and explanation must be provided to support the user. It may be necessary to provide additional textual labels to support novices. There is some evidence to suggest, however, that once users have learned abstract icons they can search for them rapidly and can remember their meaning for longer than the more pictorial alternatives.

One of the most important guidelines for the composition of graphical displays is that designers should always avoid clutter. There are a number of reasons for this:

The increasingly bizarre range of icons in the tool palettes provided by direct manipulation graphics tools provides an illustration of these problems.

Graphical user interfaces perhaps present the greatest challenge to interface designers. The range of possible options, even for simple objects, is absolutely vast. Previous pages have illustrated different approaches to a switch. Scroll bars present a similar range of options: there is the standard Macintosh version, and other types are used in the X window managers etc.

This variety makes graphical interface design a key area for user involvement in the development process. In many cases, the technical problems in implementing the various techniques are exactly the same. However, if users are accustomed to one style of interface, there must be sound reasons if an alternative technique is to be implemented. Similarly, with direct manipulation systems it is vital that designers validate their choice of symbols. Interviewing techniques and questionnaires can be used to find out what the symbols mean to the end-users.

We have emphasised, however, that context plays an important role in determining the meaning of icons. Symbols in the context of a shirt collar may appear to be straightforward and obvious; in the context of a user interface they may seem bizarre and confusing. The problem here is that, in order to provide the necessary context, you may have to go ahead and develop an entire graphical interface. Pencil and paper prototypes provide a low-cost alternative to this approach. As mentioned in previous exercises, this technique uses sketches of potential screen layouts. These are shown to users who can then provide feedback on the icon design and layout. If sufficient screens are drawn then designers can use `walk-throughs' by passing potential displays to the user as they select the various commands that are supported by the system. These techniques can be used in addition to structured interviews to provide a valuable focus for discussion. Always present alternative options so that users do not simply agree upon the first version that they see. Beware: there is a danger that users will become so focused upon the prototype that they prefer it to the implementation. Also beware that you may not be able to implement the system that you prototype. Keep the software team `in the loop'.

Finally, it is important to remember that by implementing direct manipulation, graphical user interfaces you may restrict your user group. Blind users can exploit screen-readers to access textual interfaces; these systems do not currently work for full direct manipulation systems. If you use colour, you may make it difficult for users who are colour blind. As a rule of thumb, colour should be used sparingly and should provide redundant information: it should always be possible for users to work out the information it represents from another source.

Behaviour Of Graphical User Interfaces

There are many different ways of interacting with graphical systems. Often you are restricted to a small subset of these by existing environments. If you are ever asked to buy a development environment then it's worth using the following checklist to see how many are supported as primitive operations on graphical objects. It can be costly and time-consuming to implement these by hand:


Why bother to evaluate?
There are a number of reasons that justify the use of evaluation techniques during the development of interactive systems. They provide benefits for many of the parties involved in development:

The evaluation of user interfaces is closely linked to requirements elicitation. Like the techniques introduced earlier in the course it is vital that designers have a clear set of objectives in mind before they start to evaluate an interactive system. For example, evaluation techniques might be used to find out about:

The critical point about evaluation is that, like software testing, the longer you leave it, the worse it gets. If you avoid user contact during the design phase, then a large number of usability problems are liable to emerge when you do eventually deliver the system to users. The design cycle shown in the previous slide uses interface evaluation to drive the development cycle. This may be a little extreme, but it does emphasise the need for sustained contact with target users. It also illustrates the point that there is little good in evaluating an interface if we are unwilling to change the system as a result of the findings. By the `system', we do not necessarily mean the interface itself: the problems that are uncovered during evaluation can also be corrected through training and documentation. Neither of these options is `cost free'.

When To Evaluate

Formative Evaluation

It is possible to identify two stages in the evaluation of user interfaces. The first is formative because it helps to guide, or form, the decisions that must be made during the development of an interactive system. In a sense, the requirements elicitation techniques of previous sections were providing early formative evaluation.

If formative evaluation is to guide development then it must be conducted at regular intervals during the design cycle. This implies that low cost techniques should be used whenever possible. Pencil and paper prototypes provide a useful means of achieving this. Alternatively, there are a range of prototyping tools that can be used to provide feedback on potential screen layouts and dialogue structures.

Formative evaluation can be used to identify the difficulties that arise when users start to operate new systems. As mentioned, the introduction of new tools can change user tasks. This means that interface design is essentially an iterative task as designers get closer and closer to the final delivery of the full system.

Summative Evaluation

In contrast to formative evaluation, summative evaluation takes place at the end of the design cycle. It helps developers and clients to make the final judgements about the finished system. Whereas formative evaluation tends to be rather exploratory, summative evaluation is often focussed upon one or two major issues. In this sense, it is like the comparison between general software testing and more specific conformance testing. In the case of user interfaces, designers will be anxious to demonstrate that their systems meet company and international standards as well as the full contractual requirements.

The bottom line for summative evaluation should be to demonstrate that people can actually use the system in their working setting. This necessarily involves acceptance testing. If sufficient formative evaluation has been performed then this may be a trivial task; if not, then this becomes a critical stage in development. A friend of mine had to re-design an automated production system where the night-staff kept reverting to manual control. As a stop-gap, the production manager had to move a camp-bed into the supervisor's area to check that the system had not been switched off. Clearly, such problems indicate wider failings in the development process if they only emerge at the acceptance testing stage.

How To Evaluate
The following pages introduce the main approaches to evaluation.

Scenario-Based Evaluation

One of the biggest issues to be decided upon before using any evaluation techniques is `what do we evaluate'? Recent interest has focused upon the use of scenarios or sample traces of interaction to drive both the design and evaluation of interactive systems. This approach forces designers to identify key tasks in the requirements elicitation stage. As design progresses, these tasks are used to form a case book against which any potential interface is assessed. Evaluation continues by showing the user what it would be like to complete these standard tests using each of the interfaces. Typically, they are asked to comment on the proposed design in an informal way. This can be done by presenting them with sketches or simple mock-ups of the final system.

The benefit of scenarios is that different design options can be evaluated against a common test suite. Users are then in a good position to provide focussed feedback about the use of the system to perform critical tasks. Direct comparisons can be made between the alternative designs. Scenarios also have the advantage that they help to identify and test hypotheses early in the development cycle. This technique can be used effectively with pencil and paper prototypes.

The problems with this approach are that it can focus designers' attention upon a small selection of tasks. Some application functionality may remain untested while users become all too familiar with a small set of examples. A further limitation is that it is difficult to derive hard empirical data from the use of scenario-based techniques. In order to do this, they must be used in conjunction with other approaches, such as the more rigorous and formal experimental techniques.

Experimental Techniques

The main difference between the various approaches to interface evaluation is the degree to which designers must constrain the subject's working environment. In experimental techniques, there is an attempt to introduce the empirical methods of scientific disciplines. It is, therefore, important to identify a hypothesis or argument to be tested. The next step in this approach is to devise an appropriate experimental method. Typically, this will involve focusing upon some small portion of the final interface. Subjects will be asked to perform simple tasks that can be observed over time. In order to avoid any outside influences, tests will typically be conducted under laboratory conditions, away from telephones, faxes, other operators etc. The experimenter must not directly interact with the user in case they bias the results. The intention is to derive some measurable observations that can be analysed using statistical techniques. In order for this approach to be successful, it usually requires specialist skills in HCI development or experimental psychology.

There are some notable examples that have demonstrated the success of this approach. For instance, the cockpit instrumentation on Boeing 727s was blamed for numerous crashes. One of Boeing's employees, Conrad Kraft, conducted a series of laboratory simulations to determine the causes of these problems. He could not run tests on real aircraft and so he used a mixture of low-fidelity cardboard rigs and higher quality prototypes. In the laboratory he was able to demonstrate that pilots over-estimated their altitude in particular attitudes when flying over dark terrain. This led to widespread changes in the way that all commercial aircraft support ground proximity warning systems. Similar approaches have been used to demonstrate that thumb-wheel devices reduce error rates in patient monitoring systems when compared to standard input devices such as mice and keyboards.

There are a number of limitations with the experimental approach to evaluation. For instance, by excluding distractions it is extremely likely that designers will create a false environment. This means that the results obtained in a laboratory setting may not carry over to `real-world interaction'. A related point is that by testing limited hypotheses, it may not be cost effective to perform this `classic' form of interface evaluation. Designers may miss more important problems that fall outside the narrow issues which they do examine. Finally, these techniques are not useful if designers only require formative evaluation of half-formed hypotheses. It is of little use attempting to gain measurable results if you are uncertain what it is that you are looking for.

Cooperative evaluation techniques.

Laboratory based evaluation techniques are useful in the final stages of summative evaluation. They can be used to demonstrate, for instance, that measurably fewer errors are made with the new system than with the old. In contrast, cooperative evaluation techniques (sometimes referred to as `think-aloud' evaluation) are particularly useful during the formative stages of design. They are less clearly hypothesis driven and are an extremely good means of eliciting user feedback on partial implementations.

The approach is extremely simple. The experimenter sits with the user while they work their way through a series of tasks. This can occur in the working context or in a quiet room away from the `shop-floor'. Designers can either use pencil and paper prototyping techniques or may use partial implementations of the final interface. The experimenter is free to talk to the user as they work on the tasks but it is obviously important that they should not be too much of a distraction. If the user requires help, the designer should offer it and note down the context in which the problem arose, for further reference. The main point about this exercise is that the subject should vocalise their thoughts as they work with the system. This can seem strange at first but users quickly adapt. It is important that records are kept of these observations, either by keeping notes or by recording the sessions for later analysis.
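The record-keeping itself need not be elaborate. The following sketch shows one hypothetical way of logging a session so that vocalisations and requests for help can be analysed afterwards; the class, tasks and notes are all invented for illustration.

```python
# A minimal sketch of session record-keeping for cooperative evaluation:
# each vocalisation or help request is logged against the current task,
# so problem contexts can be reviewed rigorously after the session.
# All names and entries are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class SessionLog:
    entries: list = field(default_factory=list)

    def note(self, task, kind, text):
        # kind is e.g. "vocalisation" or "help-request"
        self.entries.append({"task": task, "kind": kind, "text": text})

    def help_requests(self):
        """Contexts in which the user needed help, for further reference."""
        return [e for e in self.entries if e["kind"] == "help-request"]

log = SessionLog()
log.note("set up double-sided printing", "vocalisation",
         "I expected this option to be under the File menu")
log.note("set up double-sided printing", "help-request",
         "cannot find the duplex setting")
```

The point of structuring the log by task is that, after the session, the designer can recover exactly which tasks generated confusion rather than relying on memory.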

This low cost technique is exceptionally good for providing rough and ready feedback. Users feel directly involved in the development process. This often contrasts with the more experimental approaches, where users feel constrained by the rules of testing. Most designers will already be using elements of this approach in their working practices. It is important, however, that vocalisations are encouraged, recorded and analysed in a rigorous manner. Cooperative evaluation should not simply be an ad hoc walk-through.

The limitations of cooperative evaluation are that it provides qualitative feedback and not the measurable results of empirical science. In other words, the process produces opinions and not numbers. Cooperative evaluation is extremely bad if designers are unaware of the political and other pressures that might bias a user's responses. This is why so much time has been spent discussing different attitudes towards development.

Observational techniques.

There has been a sudden increase in interest in this area over the past three or four years. This has largely been in response to the growing realisation that the laboratory techniques of experimental psychology cannot easily be used to investigate unconstrained use of real-world systems. In its purest form the observational techniques of ethnomethodology suffer from exactly the opposite problems. They are so obsessed with the tasks of daily life that it is difficult to establish any hypothesis at all.

Briefly, ethnomethodology requires that a neutral observer should enter the users' working lives in an unobtrusive manner. They should `go in' without any hypotheses and simply record what they see, although the recording process may itself bias results. The situation is similar to that of sociologists and ethnologists visiting remote tribes in order to observe their customs before they make contact with modern technology. The benefit of this approach is that it provides lots of useful feedback during an initial requirements analysis. In complex situations, it may be difficult to form hypotheses about users' tasks until designers have a clear understanding of the working problems that face their users. This technique avoids the problems of alienation and irritation that can be created by the unthinking use of interviews and questionnaires.

The problems with this approach are that it requires a considerable amount of skill. To enter a working context, observe working practices and yet not affect users' tasks seems to be an impossible aim. At present, no more pragmatic variant of this approach has emerged in the way that cooperative evaluation developed from experimental evaluation. There have, however, been some well-documented successes for this approach. Lucy Suchman was able to produce important evidence about the design of photocopiers by simply recording the many problems that users had with them.

The Kitchen Sink Approach.

The final evaluation technique can be termed the `kitchen sink approach'. Here you explicitly recognise that interface design is a major priority for product development. Resources must be allocated in proportion to this commitment. Scenarios may be obtained from questionnaires and interviews. Informal cooperative evaluation techniques might be used for formative analysis, while more structured laboratory experiments might be used to perform summative evaluation.