Copyright Chris Johnson, 1998. This course must NOT be used for any commercial purposes without the express permission of the author.

Human Computer Interface Design
Using Java (Part 1)


Chris Johnson

This module introduces the design, implementation and evaluation of effective user interfaces. The practical component will emphasise, but not exclusively address, facilities available on the World Wide Web, and much of the programming work will involve the use of Java and the Java libraries for windowing systems. The lectured material will be divided approximately 60:40 between design and implementation topics, with thorough coverage of both the underlying principles and the practical application of the techniques presented. It is assumed that you are familiar with the basics of Java programming (i.e., you know how to compile and run a Java program).

Prerequisites: It is strongly recommended that you have taken a good introductory course in Java. It is also recommended that you have some knowledge of object oriented design. For our MScIT, this corresponds to the X stream in programming and the object oriented design module.


This module will:


By the end of the module the student should be able to:

Module Structure

The module will consist of 20 lectures together with associated tutorials and laboratory sessions. The laboratory work will include exposure to the basic facilities for building web-based interactive systems, including (if not covered in the Core) HTML, forms and simple use of CGI for interaction with a server. The main programming emphasis will be on the use of Java libraries for windowing systems.


By examination (70%) and coursework (30%). The assessed coursework will be a single exercise involving: the design of an interactive system; implementation of a prototype of that design; and evaluation of the prototype.


These notes supplement the briefer bullet points that structure the lecture material (see the Course Index). The following book is recommended as a very general introduction to the problems of designing artefacts that support people's tasks: A more complete introduction to user interface design is provided by: More details about the Abstract Window Toolkit are provided by: The Geary book is a thorough introduction to AWT; however, much of it is devoted to a graphics package that extends AWT. This package is not discussed in this course. Geary's material provides a valuable next step in applying AWT after completing this course. The Culwin book is a more gentle introduction to AWT. Details about the JFC/Swing classes are provided by:

Introduction and Motivation

Human computer interaction is arguably the most important topic to be studied as part of any computing science course. Here are some of the reasons why it is important to study this topic:

This course will address many of the issues introduced in the previous paragraphs. However, the focus will be upon designing and constructing user interfaces rather than on the social issues surrounding those systems. More information about these issues can be found in the social aspects of computing course.


This area is full of jargon and acronyms. Many of these terms differ between Europe and the United States.

The previous paragraphs provide a number of pointers to further information about the varying "traditions" that have combined to support user interface design. Although our focus in this course is upon the design and implementation of interactive systems it is important not to lose sight of the wider issues about working practices and working environments that are being addressed by human factors experts and ergonomists. More details are provided about these issues in the Interactive Systems Design course.

The HCI Lifecycle
So what is HCI? One way of thinking about the subject is that it provides a series of techniques that are intended to focus upon the user's needs at each stage of development. This contrasts with traditional approaches to software engineering, where the user may only be considered at the beginning of a project, to establish initial requirements, and at the end, to perform final product testing. The "user centred" approach advocated by HCI would, instead, encourage user involvement at all stages of development. For instance, prototypes or partial implementations might be shown to potential users to gather their feedback as a system is built. This helps to ensure that designers find mistakes early in development. Otherwise, they might not be discovered until the system is built and delivered, when all of the development resources might have been used up.

The following paragraphs briefly describe the main stages of the HCI development lifecycle. They are intended to indicate the sorts of activities that interface designers might conduct to ensure a user centred approach to systems development:

It is important to understand that the stages described in previous paragraphs do NOT provide a straightforward route from requirements through to installation and maintenance. Each of the stages may force revisions to previous activities. For example, we have already argued that users find it difficult to explain what a system ought to do. As a result, many requirements only emerge after an initial prototype has been shown to the user. This implies that designers should make the time between requirements analysis and design as short as possible so that they can quickly obtain user feedback about their initial ideas. This is a central concept behind what has become known as RAD (Rapid Application Development).

Guidelines and Standards

It is important to identify the mechanisms or techniques that can be used to introduce HCI into the software development lifecycle. The pragmatics of the software industry mean that many companies cannot afford to employ full time usability consultants. As a result, most commercial organisations have introduced HCI through the use of guidelines. These are lists of rules about when and where to do things, or not to do things, in an interface. For instance, a guideline might be not to have more than ten items in a menu. Another guideline might be to avoid clutter on a graphical user interface. This approach is declining as more and more organisations employ teams of human factors specialists. It is, however, important to have some understanding of what these guidelines are like.
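To make this concrete, a guideline of this kind can even be checked mechanically. The sketch below is purely illustrative: the class name and the limit of ten are assumptions for the sake of the example, not part of any published guideline set.

```java
// Hypothetical example: mechanically checking the "no more than
// ten items in a menu" guideline mentioned above.
public class MenuGuideline {

    static final int MAX_MENU_ITEMS = 10;   // assumed limit from the guideline

    // Returns true if the proposed menu conforms to the guideline.
    public static boolean conforms(String[] menuItems) {
        return menuItems.length <= MAX_MENU_ITEMS;
    }

    public static void main(String[] args) {
        String[] fileMenu = {"New", "Open", "Close", "Save",
                             "Save As...", "Print", "Quit"};
        System.out.println("File menu conforms: " + conforms(fileMenu));
    }
}
```

Of course, such a check says nothing about whether the items are sensibly named or grouped, which is precisely why guidelines are only as good as the person applying them.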

The most famous set of guidelines were developed by Smith and Mosier on behalf of the Mitre Corporation. Unsurprisingly, these are known as the Smith and Mosier guidelines. They now include several thousand rules and you really need a hypertext tool to use them. They have been adapted for use by the US military and by NASA. An example of one of Smith and Mosier's guidelines is:

1.6.2 DATA ENTRY: Graphics - Drawing
When users must create symmetric graphic elements, provide a means for specifying 
a reflection (mirror image) of existing elements.
Several companies have also developed their own style guides. These are similar to the Smith and Mosier guidelines because they simply list dos and don'ts for interface design. They are slightly different from Smith and Mosier because there are commercial motivations behind them; they are not simply intended to enhance the usability of the interface. Apple's guidelines help you to produce a system that looks and feels like other Apple products. Microsoft's Windows guidelines help you to produce a system that looks and feels like other Windows products. The point here is that once your workforce has become accustomed to one style of interface, you will be encouraged to buy other systems that are consistent with the first one. In other words, you will buy more Microsoft products, more Apple products and so on.

This proprietorial approach is less evident in the guidelines that have been produced specifically for the web. The philosophy of the web that stresses the importance of platform independence implies that designers must produce pages that support their users irrespective of whether they are being downloaded onto a Mac, PC or Unix system:

There are, however, further problems. Guidelines and style guides help you to identify good and bad options for your interface. They also restrict the range of techniques that you can use and still 'conform' to a particular style. Problems also arise because guidelines can be very difficult to apply. In many ways, they are only really as good as the person who is using them. This is a critical point because many companies view guidelines as a panacea. The way to improve an interface is not just to draft a set of rules about how many menu items to use, what colours make good backgrounds etc. Users' tasks and basic psychological characteristics MUST be taken into account. Unless you understand these factors, guidelines have no meaning. For example, the Apple guidelines state that:
``People rely on the standard Macintosh user interface for consistency.   
Don't copy other platforms' user interface elements or behaviours 
in the Macintosh because they may confuse users who aren't familiar with them.''
This simple guideline glosses over all of the important points about the differences between novices and experts. Inconsistent features prevent experts from transferring the skills that they built up with previous systems. Unless the programmer/designer understands such additional justifications, the true importance of the guideline may be lost.

Apple recognise some of the problems in using guidelines when they state that:

``There are times when the standard user interface doesn't cover the needs of your 
application.   This is true in the following situations: you are creating a 
new feature for which no element or behaviour exists.   In this case you can
extend the Macintosh user interface in a prescribed way;  An existing element
does almost everything you need it to, but a little modification that improves
its function makes the difference to your application...''
The Apple Guidelines go on to present a number of more generic guidelines, or principles, that can then be used to guide these novel interfaces.

The problem with guidelines is that you need a large number of rules in order to cover all of the possible interface problems that might crop up. Also, it's difficult to know what to do when you have to break a guideline. For instance, what do you do if you have a menu of eleven items? More recently, companies have been concerned to document the steps that they take to elicit users' requirements and to test the system. This has been largely brought about by the movement to conform with the International Standards Organisation's ISO 9000 standard. This sets out approved procedures for software development. Many software purchasers now expect their suppliers to be 'ISO 9000 conformant'.

For the last decade or so, there has been a move to introduce standards into interface design. Initially, these focussed upon when and where to use particular pieces of hardware. For example, Systems Concepts reviewed the British Standards Institution's standards in this area as follows:

BS EN 29241-1:1993 (ISO 9241) Part 1 General Introduction
The purpose of this standard is to introduce the multi-part standard for the 
ergonomic requirements for the use of visual display terminals for office
tasks and explain some of the basic underlying principles. It describes the 
basis of the user performance approach and gives an overview of all parts 
currently published and of the anticipated content of those in preparation. 
It then provides some guidance on how to use the standard and describes how 
conformance to parts of BS EN 29241 should be reported.
Not exactly gripping stuff, but if you are interested in recent work in this area then take a look at Systems Concepts' review of usability standards.

Norman's Models of Interaction

Donald Norman is one of the leading researchers within the field of human computer interaction. One of his most important ideas is that human-computer interaction is based around two gulfs that separate the user from their system. Norman's model is illustrated in this diagram.

The importance of Norman's model is that it focusses the designer's attention upon the user's perspective during interaction. Users have to map their goals and intentions into the language supported by the system. In this view, the Java implementation techniques that we are about to discuss are of secondary importance to the design skills that designers must exploit when constructing an interface. It doesn't matter what sophisticated programming techniques are used; if people cannot work out what input to provide, or if they cannot understand the displays provided by a system, then the interface has failed.

Interface Development in Java

Douglas Kramer's Java White Paper describes Java in the following terms:

The computer world currently has many platforms, among them Microsoft Windows, Macintosh, OS/2, UNIX® and NetWare®; software must be compiled separately to run on each platform. The binary file for an application that runs on one platform cannot run on another platform, because the binary file is machine-specific.

The Java Platform is a new software platform for delivering and running highly interactive, dynamic, and secure applets and applications on networked computer systems. But what sets the Java Platform apart is that it sits on top of these other platforms, and compiles to bytecodes, which are not specific to any physical machine, but are machine instructions for a virtual machine. A program written in the Java Language compiles to a bytecode file that can run wherever the Java Platform is present, on any underlying operating system. In other words, the same exact file can run on any operating system that is running the Java Platform. This portability is possible because at the core of the Java Platform is the Java Virtual Machine.

While each underlying platform has its own implementation of the Java Virtual Machine, there is only one virtual machine specification. Because of this, the Java Platform can provide a standard, uniform programming interface to applets and applications on any hardware. The Java Platform is therefore ideal for the Internet, where one program should be capable of running on any computer in the world. The Java Platform is designed to provide this "Write Once, Run Anywhere"SM capability.

This idea that you should be able to write a Java program and run it on any number of architectures poses particular problems for interactive systems because the look and feel of these systems can be very different. For example, here is the interface to Word running under Windows NT and here is the interface to the same application running on a Macintosh. If you look at them side-by-side in two different browser windows you can spot a large number of differences. For example, the NT version uses the Control key (CTRL) to access keyboard accelerators; these are shown on the right of menu options and allow users to select items by pressing keys rather than forcing them to move their hand from the keyboard to the mouse. In contrast, the Macintosh interface uses the Apple key - shown by a clover shape of interlocking circles around a square. There are further differences in the presentation of the windows that enclose the applications. These differences do not occur by chance; the Apple and Microsoft operating systems were designed by different people and they were not intended to look and feel the same. Therefore, if Java is to provide a write-once, run-anywhere approach to user interface implementation then its run-time system must translate particular interface components into the particular look and feel of the platform that the program is running on. This is illustrated by three buttons that were generated by the same Java code on a Unix machine, a PC running NT and a Macintosh. Notice that the PC and Unix/Motif versions look almost identical but that they are both very different from that of the Macintosh.
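AWT itself reflects this translation. For example, the MenuShortcut class (introduced in AWT 1.1) lets a program describe a keyboard accelerator abstractly; the run-time system then decides whether it appears as CTRL-O or Apple/Command-O. A minimal sketch (the class name ShortcutDemo is purely for illustration):

```java
import java.awt.MenuShortcut;
import java.awt.event.KeyEvent;

public class ShortcutDemo {

    // A platform-neutral accelerator for an "Open" menu item. AWT's
    // run-time system renders it as CTRL-O under Windows or Motif
    // and as Command-O on the Macintosh.
    public static MenuShortcut openShortcut() {
        return new MenuShortcut(KeyEvent.VK_O);
    }

    public static void main(String[] args) {
        System.out.println("Shortcut key code: " + openShortcut().getKey());
    }
}
```

In a real program the shortcut would be attached to a menu item with `new MenuItem("Open", openShortcut())`; the code above only builds the shortcut itself, leaving the platform-specific presentation to the toolkit.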

Java's promise of adapting a user interface to the platform that it is running on is, in principle, a very good thing. When a Java program is run on a Macintosh it will provide its users with an interface that looks and feels like a Macintosh interface; this is important if users are to transfer the skills they have built up using other Mac applications to the new interface that you have written. In a similar way, you would not expect to be faced with an interface designed for a Macintosh when working on a PC.

The Abstract Window Toolkit (AWT)
The Abstract Window Toolkit (AWT) forms part of the Java Development Kit (JDK); there is an AWT home page. It is probably the most widely used means of constructing graphical user interfaces in Java, although significant numbers of people are using Swing (see later). The AWT environment will provide the focus for the rest of this course. An important benefit of AWT is that it is part of the standard Java distribution and so programmers can assume that browsers and Java Virtual Machines will support its components. However, there are now several versions of the AWT environment (1.0, 1.1 and, more recently, enhancements in the Java 2 SDK v1.3). Browsers that support AWT programs up to version 1.0 will not support all of the features of 1.1. This course will focus on version 1.1 but will provide examples of 1.0.

This diagram gives you some idea of the way in which AWT relates to particular architectures. One of the key points about this diagram is that AWT uses the existing window managers that have been written for particular platforms. Window managers are programs that are responsible for updating the screen. They translate calls from application programs into the low-level instructions necessary to draw icons, buttons etc onto the screen. Window managers also pass on user input to application programs as it is received from the operating system. Window managers are platform specific because they must deal with relatively low level operating systems features; the facilities provided by MacOS will be different from those provided by UNIX and so on.

AWT can, therefore, be seen as a buffer between your code and the particular facilities provided by the window managers on a number of different platforms. This is important because you do not need to learn how to translate your user interface code into the calls provided by many different window managers. However, it is possible in Java to directly access the functions of a particular windowing system without going through AWT or similar interfaces. If you do this then there is no guarantee that your program will run on other platforms that do not share the same features as your original window manager.

Just to give you some idea of what we are talking about, here is a very simple Java applet that makes use of AWT.

/*
 * Copyright (c) 1995-1997 Sun Microsystems, Inc. All Rights Reserved.
 * Permission to use, copy, modify, and distribute this software
 * and its documentation for NON-COMMERCIAL purposes and without
 * fee is hereby granted provided that this copyright notice
 * appears in all copies. Please refer to the file "copyright.html"
 * for further important copyright and licensing information.
 * 1.1 version.
 */

import java.awt.*;		/* Notice - this links to AWT classes */
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;
import java.applet.Applet;

public class ButtonDemo extends Applet
                        implements ActionListener {

    Button b1, b2, b3;		/* AWT provides a Button class */
    static final String DISABLE = "disable";
    static final String ENABLE = "enable";

    public void init() {
        b1 = new Button();
        b1.setLabel("Disable middle button");
        b1.setActionCommand(DISABLE);

        b2 = new Button("Middle button");

        b3 = new Button("Enable middle button");
        b3.setEnabled(false);
        b3.setActionCommand(ENABLE);

        //Listen for actions on buttons 1 and 3.
        b1.addActionListener(this);
        b3.addActionListener(this);

        //Add Components to the Applet, using the default FlowLayout.
        add(b1);
        add(b2);
        add(b3);
    }

    public void actionPerformed(ActionEvent e) {
        String command = e.getActionCommand();
        if (command.equals(DISABLE)) { //They clicked "Disable middle button"
            b2.setEnabled(false);
            b1.setEnabled(false);
            b3.setEnabled(true);
        } else { //They clicked "Enable middle button"
            b2.setEnabled(true);
            b1.setEnabled(true);
            b3.setEnabled(false);
        }
    }
}
Here is the AWT 1.0 code.

Although we will be focussing on AWT, this is not the end of the story. This is an extremely dynamic area. Many commercial and academic groups are developing systems that reduce the complexity of constructing graphical user interfaces. Most of these systems are built on top of environments that look very similar to AWT and so it is relatively easy to transfer skills gained in AWT to these new systems. The following section provides a brief overview of one such extension to AWT.

Swing provides a set of classes that extend those provided by AWT. It is NOT intended to replace AWT. Both provide object-oriented classes to help programmers write graphical user interfaces for their Java programs. AWT applications will still run if you later decide to introduce elements from the Swing component set. More details about the relationship between AWT and Swing are provided in this article.

One of the most important differences between Swing and AWT is that Swing components don't borrow any native code from the platforms on which they run. In order to understand this point, it is important to explain something that was missing from the previous diagram. In this idealised architecture, AWT calls are mapped directly to the native window managers. Swing does not use this intermediate stage. Instead, Swing provides its own components that are written from scratch. One intention behind this is to develop a cross-platform style of user interface. This new `look and feel' is described in the following document. If this catches on then it will be more important for programmers to be consistent with the Java look and feel than it will be for them to be consistent with the Macintosh or Windows style guides, mentioned above. Having said this, Swing also provides platform specific facilities if programmers want to retain the look and feel of an existing interface.
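The choice between the cross-platform `Java' look and feel and the native one is made explicit in Swing's API through the UIManager class. The sketch below illustrates the two options; the class name LookAndFeelDemo is assumed for illustration:

```java
import javax.swing.UIManager;

public class LookAndFeelDemo {

    // The platform-independent "Java" look and feel, drawn entirely
    // by Swing itself with no native window manager code.
    public static String crossPlatform() {
        return UIManager.getCrossPlatformLookAndFeelClassName();
    }

    // The look and feel that imitates the host platform (Windows,
    // Macintosh, Motif and so on).
    public static String system() {
        return UIManager.getSystemLookAndFeelClassName();
    }

    public static void main(String[] args) throws Exception {
        // Opt in to the cross-platform style...
        UIManager.setLookAndFeel(crossPlatform());
        // ...or, alternatively, retain the native style:
        // UIManager.setLookAndFeel(system());
        System.out.println("Cross-platform class: " + crossPlatform());
    }
}
```

Any Swing components created after the call to setLookAndFeel will be drawn in the selected style; the program's own logic is unchanged.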

This course focusses on AWT partly because AWT provides the foundations for Swing. A further justification is that the platform independent look and feel is less widely used than the platform-specific approach initially adopted by AWT. Finally, if you have mastered the AWT classes then it is relatively easy to pick up the Swing libraries.


In the beginning there was OLE (Object linking and embedding). OLE allows one application to make use of services provided by another. For instance, a desk-top publishing system might send some text to a word processor or a picture to a bitmap editor using OLE. This was generalised to an architecture for distributed programming, Microsoft's Component Object Model (COM). COM can be seen as Microsoft's answer to Java.

There are some important differences between COM and Java. For instance, COM relies upon a radically different client-server architecture, described here. The Microsoft virtual machine automatically maps any Java object into a COM object and vice versa so these differences may not be the disadvantage that they might at first appear. More information about the general approach can be found at the Microsoft web site.

From an HCI perspective there are important differences between Java and ActiveX. Because Java depends on the Java virtual machine to run on particular platforms, performance is not spectacular. In contrast, ActiveX is really based on the Win32 (i.e. Windows) architecture. This means that it doesn't have the extra level of processing implied by the Java virtual machine and so will, typically, run faster. A further difference is that because ActiveX relies upon native features of the Windows environment, it is also possible to access native features of that platform - including file input and output. This is much harder to achieve in existing implementations of the Java security model. Finally, many Windows tools and applications can make use of ActiveX controls, so they aren't confined to your browser.

ActiveX forms part of this more general COM architecture. In the Internet implementation, ActiveX includes controls for incremental rendering (ie slowly piecing together an image) and for code signing (ie, including security features). It remains to be seen how the battle between COM and Java will be resolved. This course focusses on Java and so our emphasis is on AWT rather than ActiveX controls.

There is an excellent introduction to ActiveX and COM on the Byte website.

VRML, Java3D and beyond...

This course focuses on "conventional" user interface development techniques. These are employed in the vast majority of interactive systems. However, there is a growing number of systems that exploit desktop virtual reality techniques. These interfaces provide their users with the impression of interacting in three dimensions without the use of additional hardware such as gloves, helmets etc.

The Virtual Reality Modeling Language (VRML) is the file format standard for 3D multimedia on the Internet. Its developers see it as a natural progression from the two dimensional formats of HTML. VRML is a platform independent language for composing 3D models from primitives such as cones, spheres and cubes. These primitives are combined to create more complex scenes such as those shown in this image of Glasgow University's Hunterian Museum. With the advent of VRML 2.0 it is possible to generate and animate scenes that contain links to a wide variety of other information sources including videos, databases and other web pages.

In order to view VRML models you need to have access to a browser such as Silicon Graphics' COSMO player. There is a wide variety of tools to help you generate VRML worlds. It is also possible to construct VRML worlds by hand. VRML files consist of collections of objects, called nodes:

For example, the following VRML code describes a tree:
#VRML V1.0 ascii
# (the line above is the header, required as the first line of every VRML file)

Separator {              # start of grouping node
  Texture2 {             # property node within the group
    filename "bark.jpg"  # the image file to be used as a texture
  }
  Cylinder {             # shape node, which will be
    parts   ALL          # modified by the Texture2 node
    radius  0.5
    height  4
  }
}

One of the problems with VRML is that it provides limited facilities for animating the three dimensional worlds that you can create (using its own scripting language). As a result, programmers are often forced to use a link between Java and a particular browser (usually Cosmo) in order to update the information presented to the user. Java3D takes an alternative approach. Instead of starting with a modelling tool and linking to a programming language, this approach starts with Java and then extends it with facilities for rendering three dimensional objects on a screen.

Java3D is implemented on top of JDK 1.2 and the lower level, platform independent graphics calls supported by OpenGL. It is designed for fast/parallel execution. This latter point is important because the appearance of hundreds of individual objects may have to be updated as users move through complex scenes. As far as implementation is concerned, development progresses by constructing a scene graph that describes all of the objects that are to be represented in the user interface. Java 3D provides an assortment of classes for making different types of 3D content:

Components of a scene graph are derived from base classes: Here is an excellent introduction to using Java3D.
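To give a flavour of scene graph construction, the fragment below sketches the classic `HelloUniverse' structure: a content branch containing a single coloured cube, attached to a SimpleUniverse that supplies a default view branch. It assumes that the Java3D packages (javax.media.j3d and the com.sun.j3d.utils utilities) are installed, and is offered as a sketch rather than a tested program:

```java
import javax.media.j3d.BranchGroup;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class HelloUniverse {
    public static void main(String[] args) {
        // The content branch of the scene graph: one coloured cube.
        BranchGroup scene = new BranchGroup();
        scene.addChild(new ColorCube(0.4));
        scene.compile();    // allow Java3D to optimise the graph

        // SimpleUniverse supplies a standard view branch (Locale,
        // ViewingPlatform and so on), so only the content need be
        // described by hand.
        SimpleUniverse universe = new SimpleUniverse();
        universe.getViewingPlatform().setNominalViewingTransform();
        universe.addBranchGraph(scene);
    }
}
```

Notice that, unlike the AWT examples, none of this code mentions a window manager; the scene graph is a description of the content, and the renderer decides how to draw it.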

As with AWT, however, the technical details of interface development in VRML and Java3D are less important than DESIGNING a system that satisfies user requirements. Recent user interfaces that exploit desktop VR have had very mixed success. Many are simply gratuitous applications of flashy technology and are quickly discarded for more conventional approaches. Here is a paper describing some of the design and evaluation problems. Until some of these problems can be resolved, the more conventional interfaces described in this course will continue to dominate the home and business markets.

An Introduction to Evaluation

Why bother to evaluate?
There are a number of reasons that justify the use of evaluation techniques during the development of interactive systems. They provide benefits for many of the parties involved in development:

The evaluation of user interfaces is closely linked to requirements elicitation. Like the techniques introduced earlier in the course it is vital that designers have a clear set of objectives in mind before they start to evaluate an interactive system. For example, evaluation techniques might be used to find out about:

The critical point about evaluation is that, like software testing, the longer you leave it, the worse it gets. If you avoid user contact during the design phase, then a large number of usability problems are liable to emerge when you do eventually deliver the system to its users. The design cycle shown in the previous slide uses interface evaluation to drive the development cycle. This may be a little extreme but it does emphasise the need for sustained contact with target users. It also illustrates the point that there is little point in evaluating an interface if we are unwilling to change the system as a result of the findings. By the `system', we do not necessarily mean the interface itself. The problems that are uncovered during evaluation can also be corrected through training and documentation. Neither of these options is `cost free'.

When To Evaluate

Formative Evaluation

It is possible to identify two stages in the evaluation of user interfaces. The first is formative because it helps to guide or form the decisions that must be made during the development of an interactive system. In a sense, the requirements elicitation techniques of previous sections were providing early formative evaluation.

If formative evaluation is to guide development then it must be conducted at regular intervals during the design cycle. This implies that low cost techniques should be used whenever possible. Pencil and paper prototypes provide a useful means of achieving this. Alternatively, there are a range of prototyping tools that can be used to provide feedback on potential screen layouts and dialogue structures.

Formative evaluation can be used to identify the difficulties that arise when users start to operate new systems. As mentioned, the introduction of new tools can change user tasks. This means that interface design is essentially an iterative task as designers get closer and closer to the final delivery of the full system.

Summative Evaluation

In contrast to formative evaluation, summative evaluation takes place at the end of the design cycle. It helps developers and clients to make the final judgements about the finished system. Whereas formative evaluation tends to be rather exploratory, summative evaluation is often focussed upon one or two major issues. In this sense, it is like the comparison between general software testing and more specific conformance testing. In the case of user interfaces, designers will be anxious to demonstrate that their systems meet company and international standards as well as the full contractual requirements.

The bottom line for summative evaluation should be to demonstrate that people can actually use the system in their working setting. This necessarily involves acceptance testing. If sufficient formative evaluation has been performed then this may be a trivial task. If not, then this becomes a critical stage in development. A friend of mine had to re-design an automated production system where the night-staff kept reverting to manual control. As a stop-gap, the production manager had to move a camp-bed into the supervisor's area to check that the system had not been switched off. Clearly, such problems indicate wider failings in the development process if they only emerge at the acceptance testing stage.

How To Evaluate
The following pages introduce the main approaches to evaluation.

Scenario-Based Evaluation

One of the biggest issues to be decided before using any evaluation technique is `what do we evaluate?'. Recent interest has focused upon the use of scenarios, or sample traces of interaction, to drive both the design and evaluation of interactive systems. This approach forces designers to identify key tasks during requirements elicitation. As design progresses, these tasks form a case book against which any potential interface is assessed. Evaluation continues by showing users what it would be like to complete these standard tasks using each of the interfaces. Typically, users are asked to comment on the proposed design in an informal way. This can be done by presenting them with sketches or simple mock-ups of the final system.

The benefit of scenarios is that different design options can be evaluated against a common test suite. Users are then in a good position to provide focussed feedback about the use of the system to perform critical tasks. Direct comparisons can be made between the alternative designs. Scenarios also have the advantage that they help to identify and test hypotheses early in the development cycle. This technique can be used effectively with pencil and paper prototypes.

The problems with this approach are that it can focus designers' attention upon a small selection of tasks. Some application functionality may remain untested while users become all too familiar with a small set of examples. A further limitation is that it is difficult to derive hard empirical data from the use of scenario-based techniques. In order to derive such data, scenarios must be used in conjunction with other approaches, such as the more rigorous and formal experimental techniques.

Experimental Techniques

The main difference between the various approaches to interface evaluation is the degree to which designers must constrain the subject's working environment. In experimental techniques, there is an attempt to introduce the empirical techniques of scientific disciplines. It is, therefore, important to identify a hypothesis or argument to be tested. The next step in this approach is to devise an appropriate experimental method. Typically, this will involve focusing upon some small portion of the final interface. Subjects will be asked to perform simple tasks that can be observed over time. In order to avoid any outside influences, tests will typically be conducted under laboratory conditions, away from telephones, faxes, other operators and so on. The experimenter must not directly interact with the user in case they bias the results. The intention is to derive some measurable observations that can be analysed using statistical techniques. In order for this approach to be successful, it usually requires specialist skills in HCI development or experimental psychology.
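Since the practical work on this course uses Java, the measurable observations mentioned above can be gathered by instrumenting the interface itself. The sketch below is a hypothetical `EvaluationLogger' (the class name and design are my own invention, not part of any standard library) that timestamps each AWT action event so that task completion times can be analysed after the session. In a real study it would be registered on a component with addActionListener; here two events are simulated so the example runs without a display.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.ArrayList;
import java.util.List;

// Hypothetical logger: timestamps each action event so that task
// completion times can be analysed statistically after the session.
public class EvaluationLogger implements ActionListener {

    private final List<Long> timestamps = new ArrayList<Long>();

    // Called by the AWT whenever the instrumented component fires.
    public void actionPerformed(ActionEvent e) {
        timestamps.add(Long.valueOf(System.currentTimeMillis()));
    }

    // Number of events recorded so far.
    public int eventCount() {
        return timestamps.size();
    }

    // Milliseconds between the first and last recorded events.
    public long elapsed() {
        if (timestamps.size() < 2) {
            return 0;
        }
        return timestamps.get(timestamps.size() - 1) - timestamps.get(0);
    }

    public static void main(String[] args) {
        EvaluationLogger logger = new EvaluationLogger();
        // In a real experiment the logger would be attached to a widget:
        //     button.addActionListener(logger);
        // Here two events are simulated for illustration.
        logger.actionPerformed(new ActionEvent("start", ActionEvent.ACTION_PERFORMED, "start"));
        logger.actionPerformed(new ActionEvent("done", ActionEvent.ACTION_PERFORMED, "done"));
        System.out.println("events recorded: " + logger.eventCount());
    }
}
```

The same listener can be attached to several components at once, which is often enough to reconstruct the order and timing of a subject's actions during a task.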

There are some notable examples that have demonstrated the success of this approach. For instance, the cockpit instrumentation on Boeing 727s was blamed for numerous crashes. One of Boeing's employees, Conrad Kraft, conducted a series of laboratory simulations to determine the causes of these problems. He could not do tests on real aircraft and so he used a mixture of low-fidelity cardboard rigs and higher quality prototypes. In the laboratory he was able to demonstrate that pilots over-estimated their altitude in particular attitudes when flying over dark terrain. This led to widespread changes in the way that all commercial aircraft support ground proximity warning systems. Similar approaches have been used to demonstrate that thumb-wheel devices reduce error rates in patient monitoring systems when compared to standard input devices such as mice and keyboards.
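Before any statistical test is applied, the raw error counts from studies such as the thumb-wheel comparison are usually reduced to simple summary statistics. The sketch below compares the mean number of errors per subject under two conditions; the figures are purely illustrative and are not data from any real study.

```java
import java.util.Arrays;

// Summarises error counts gathered under two experimental conditions.
// All figures are illustrative, invented for this example.
public class ErrorSummary {

    // Mean of an array of per-subject error counts.
    public static double mean(int[] errors) {
        return Arrays.stream(errors).average().orElse(0.0);
    }

    public static void main(String[] args) {
        int[] oldDevice = {5, 7, 6, 8, 5};  // errors per subject, conventional device
        int[] newDevice = {2, 3, 1, 4, 2};  // errors per subject, thumb-wheel
        System.out.println("old device mean errors: " + mean(oldDevice));
        System.out.println("new device mean errors: " + mean(newDevice));
    }
}
```

A real study would follow such a comparison of means with a significance test before claiming that one device is better than the other; a difference in means alone proves nothing.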

There are a number of limitations with the experimental approach to evaluation. For instance, by excluding distractions it is extremely likely that designers will create a false environment. This means that the results obtained in a lab setting may not be useful during `real-world interaction'. A related point is that by testing limited hypotheses, it may not be cost effective to perform this `classic' form of interface evaluation. Designers may miss many more important problems that are not affected by the more constrained issues which they do examine. Finally, these techniques are not useful if designers only require formative evaluation for half-formed hypotheses. It is little use attempting to gain measurable results if you are uncertain what it is that you are looking for.

Cooperative Evaluation Techniques

Laboratory based evaluation techniques are useful in the final stages of summative evaluation. They can be used to demonstrate, for instance, that measurably fewer errors are made with the new system than with the old. In contrast, cooperative evaluation techniques (sometimes referred to as `think-aloud' evaluation) are particularly useful during the formative stages of design. They are less clearly hypothesis driven and are an extremely good means of eliciting user feedback on partial implementations.

The approach is extremely simple. The experimenter sits with the user while they work their way through a series of tasks. This can occur in the working context or in a quiet room away from the `shop-floor'. Designers can either use pencil and paper prototyping techniques or may use partial implementations of the final interface. The experimenter is free to talk to the user as they work on the tasks but it is obviously important that they should not be too much of a distraction. If the user requires help then the designer should offer it and note down the context in which the problem arose for future reference. The main point about this exercise is that the subject should vocalise their thoughts as they work with the system. This can seem strange at first but users quickly adapt. It is important that records are kept of these observations, either by keeping notes or by recording the sessions for later analysis.

This low cost technique is exceptionally good for providing rough and ready feedback. Users feel directly involved in the development process. This often contrasts with the more experimental approaches where users feel constrained by the rules of testing. Most designers will already be using elements of this approach in their working practices. It is important, however, that vocalisations are encouraged, recorded and analysed in a rigorous manner. Cooperative evaluation should not simply be an ad hoc walk-through.

The limitations of cooperative evaluation are that it provides qualitative feedback and not the measurable results of empirical science. In other words, the process produces opinions and not numbers. Cooperative evaluation is also extremely vulnerable if designers are unaware of the political and other pressures that might bias a user's responses. This is why so much time has been spent discussing different attitudes towards development.

Observational Techniques

There has been a sudden increase in interest in this area over the past three or four years. This has largely been in response to the growing realisation that the laboratory techniques of experimental psychology cannot easily be used to investigate unconstrained use of real-world systems. In its purest form the observational techniques of ethnomethodology suffer from exactly the opposite problems. They are so obsessed with the tasks of daily life that it is difficult to establish any hypothesis at all.

Briefly, ethnomethodology requires that a neutral observer should enter the users' working lives in an unobtrusive manner. They should `go in' without any hypotheses and simply record what they see, although the recording process may itself bias results. The situation is similar to that of sociologists and ethnologists visiting remote tribes in order to observe their customs before they make contact with modern technology. The benefit of this approach is that it provides lots of useful feedback during an initial requirements analysis. In complex situations, it may be difficult to form hypotheses about users' tasks until designers have a clear understanding of the working problems that face their users. This technique avoids the problems of alienation and irritation that can be created by the unthinking use of interviews and questionnaires.

The problems with this approach are that it requires a considerable amount of skill. To enter a working context, observe working practices and yet not affect users' tasks seems to be an impossible aim. At present, no more pragmatic approach has grown out of this work in the way that cooperative evaluation developed from experimental evaluation. There have, however, been some well-documented successes for this approach. Lucy Suchman was able to produce important evidence about the design of photocopiers by simply recording the many problems that users had with them.

The Kitchen Sink Approach

The final evaluation technique can be termed the `kitchen sink approach'. Here you explicitly recognise that interface design is a major priority for product development. Resources must be allocated in proportion to this commitment. Scenarios may be obtained from questionnaires and interviews. Informal cooperative evaluation techniques might be used for formative analysis, while more structured laboratory experiments might be used to perform summative evaluation.