This draft editorial refers to a number of papers that will appear in a special edition of Elsevier's Interacting with Computers. It is deliberately intended to be provocative: I have included arguments that I do not necessarily agree with, but which I believe the human factors community must urgently address. (My own work is also open to most of these criticisms.)

Why Human Error Analysis Fails to Help Systems Development

Chris Johnson

Department of Computing Science,

University of Glasgow,

Glasgow, G12 8QQ, UK.

Tel: +44 (0141) 330 6053

Fax: +44 (0141) 330 4913

http://www.dcs.gla.ac.uk/~johnson

EMail: johnson@dcs.gla.ac.uk

Until the 1980s, human reliability analysis focused upon individual erroneous actions. More recently, attention has shifted to the managerial and organizational contexts that create the latent conditions for such failures. Unfortunately, these developments have had little impact upon many industries. The problems of technology transfer are less due to commercial neglect than to the failure of human factors research to consider seriously the problems of systems development. For example, most error-modeling techniques are poorly documented. In consequence, errors are likely to be made when designers apply those techniques. There are further ironies. Many of these techniques depend entirely upon the skill and intuition of human factors experts. The relatively poor uptake of professional accreditation schemes prevents companies from assessing the quality both of those experts and of their advice. Until these practical problems are addressed, increasingly esoteric models of human and organizational failure will be of little practical benefit.

Over the last five years, a number of researchers have become increasingly concerned with supporting technology transfer between human error modeling and constructive systems development (Johnson and Leveson, 1997). As a result, workshops were staged in Glasgow (1997), Seattle (1998) and Liège (1999). This special edition presents a collection of papers from those meetings.

Keywords: human error; system failure; management weakness.

1. INTRODUCTION

The 1970s and 1980s focused public attention upon the human contribution to system failure. Flixborough (1974), Seveso (1976), Three Mile Island (1979), Bhopal (1984) and Chernobyl (1986) all increased awareness that human intervention could cause or exacerbate major accidents. In consequence, research centered on the cognitive, perceptual and physiological demands that new generations of automated systems placed upon their operators. Studies were conducted into the changing patterns of workload as active intervention was replaced by supervisory modes of control (Wickens, 1984). Human error models and error taxonomies were developed to categorize and explain operator failure during major accidents (Reason, 1990).

Recent years have changed our understanding of human error. Investigations into the Challenger (1986), Piper Alpha (1988), Hillsborough Stadium (1989) and Narita (1994) accidents all focused upon managerial factors rather than the individual's contribution through erroneous actions. This change in emphasis has been mirrored by the rise of research into ecological influences on human error (Vicente and Rasmussen, 1990). The focus has moved from individual performance to the organizational environment that creates latent opportunities for failure (Reason, 1997).

Unfortunately, changes in our understanding of human error have had relatively little impact upon commercial practice (van Vuuren, 1998). Regulatory bodies, such as the US Federal Aviation Administration and the UK Health and Safety Executive, continue to launch initiatives that are intended to increase industrial awareness of the organizational factors that lead to major failures. Yet recent accidents have shown that many industries must still learn the more fundamental lessons of human cognition, physiology and perception (AAIB, 1996).

1.1 Three Myths of Human Error

A number of factors explain why academic research and changing regulatory attitudes have had such a marginal impact. In particular, it is possible to identify three myths that are often cited as barriers to the practical application of human error analysis:

    1. human error is inevitable. In this view, users will eventually defeat whatever safeguards and measures are put in place to protect them and their environment. Of course, recent work on the managerial and organizational causes of accidents suggests that this attitude itself contributes to latent failures;
    2. human error cannot be predicted. In particular, it is difficult to anticipate the many ways in which inattention and fatigue jeopardize safety. However, recent work has shown that it is possible to predict and remove many of the local conditions that create the opportunity for inattention and fatigue to have disastrous consequences (Reason, 1997);
    3. human error is too costly to guard against. In this view, market forces prevent companies from employing the analysis and prevention techniques that reduce the human contribution to major accidents. This argument is typically countered by pointing to the costs of major failures; the Exxon Valdez accident, for example, incurred losses of some $3.5 billion.

There are more cogent reasons for this neglect than the myths cited above. In particular, it can be argued that improvements in our understanding of human error have not been accompanied by the means of applying these new insights. Rasmussen (1986), de Keyser (1990) and Reason (1997) have developed detailed taxonomies and frameworks that provide valuable insights into the causes of operator failure. However, there are no well-established techniques for transferring the products of their research into the design and operation of safety-critical, interactive systems.

2. BARRIERS TO THE USE OF HUMAN ERROR ANALYSIS

The greatest advances in the understanding of human error have been in the development of frameworks and models that describe the cognitive processes which lead to failure. For instance, Rasmussen's (1986) Skills, Rules and Knowledge framework has been continually refined over recent years. This development has continued to a point where it not only describes levels of human performance but has also been used to characterize operator interaction with different modes of control. It forms the basis of Reason's (1990) GEMS taxonomy of human error. It has even been used to analyze operator responses to different training and learning regimes. However, there is relatively little published advice about how to apply human error analysis within complex working environments.

2.1 Lack of Agreed Standards and Methods

The lack of practical advice about the application of human error analysis has a number of important consequences. The first is that different human-factors experts can apply the same techniques to the same situations and end up with radically different conclusions (Busse and Johnson, 1998). As a result, systems engineers can quickly become disillusioned about the quality of advice that they receive. This disillusionment is fed by the lack of professional development schemes within the human factors community. Given that there are few manuals to inform the application of these techniques, engineers are forced to rely upon the skills of the analyst. Professional bodies, such as the Ergonomics Society, focus almost exclusively on degree courses. There are, therefore, few commonly agreed standards for assessing the continuing professional competence of human factors analysts.

2.2 A Reliance on Subjective Interpretation

The papers in this special edition reflect the concerns mentioned above. For example, Marie-Odile Bes shows how error analysis can be used to inform the evaluation of complex interactive systems. Reason's GEMS taxonomy is used to analyze planning activities in air traffic control simulations. At several points in her analysis she argues that there are great problems in mapping between high-level frameworks and the complex patterns of interaction that are observed in the simulation. She goes on to argue that users' anticipation at different levels of task management can reduce the likelihood of error. From my perspective, this detailed finding is less interesting than her insight into the problems that even expert analysts face when they use high-level error models to interpret complex patterns of situated interaction.

2.3 Poor Support for "Run-time" Predictions

A number of further problems limit the utility of error analysis. Previous research has developed convincing explanations of human failure in past accidents. It has not, however, provided predictive techniques that help to avoid human failure in future accidents. Many books in this area relegate error prediction to a footnote or appendix; it ought to be a central issue. Maria Virvou's paper reflects this concern. She shows how models of user error can be integrated into intelligent help systems. Although this work focuses on interaction with an operating system, it is highly relevant for safety-critical user interfaces. The focus on user intentions and beliefs indicates at least one means of exploiting human error models not only to explain but also to predict future errors. Previous work in this area has been criticized for the poverty of the error models that are used to generate predictions. However, without research into error prediction there is little prospect that recent work in the field of human error modeling will ever make a significant contribution to systems development.
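
To make the distinction between explanation and run-time prediction more concrete, the sketch below shows, in deliberately minimal terms, how a model of a user's inferred intention might be used to flag a probable slip before it takes effect. It is emphatically not a description of Virvou's system: the plan library, command names and warning strategy are all invented for this editorial.

    # Illustrative sketch only: a toy run-time error predictor. The plan
    # library and commands below are hypothetical and are not drawn from
    # Maria Virvou's paper.
    from dataclasses import dataclass, field

    # A hypothetical plan library: each candidate intention maps to the
    # commands that are consistent with it.
    PLAN_LIBRARY = {
        "archive_logs": ["tar", "gzip", "mv"],
        "free_disk_space": ["du", "rm", "gzip"],
    }

    @dataclass
    class RunTimeErrorPredictor:
        """Infers the user's most likely intention from recent commands and
        warns when the next command appears inconsistent with it."""
        history: list = field(default_factory=list)

        def infer_intention(self):
            # Score each candidate intention by how many observed commands
            # it explains; the highest-scoring intention wins.
            scores = {
                goal: sum(cmd in allowed for cmd in self.history)
                for goal, allowed in PLAN_LIBRARY.items()
            }
            return max(scores, key=scores.get) if self.history else None

        def observe(self, command):
            intention = self.infer_intention()
            if intention and command not in PLAN_LIBRARY[intention]:
                print(f"Warning: '{command}' looks inconsistent with the "
                      f"inferred goal '{intention}'. Possible slip?")
            self.history.append(command)

    if __name__ == "__main__":
        predictor = RunTimeErrorPredictor()
        for cmd in ["du", "gzip", "rm"]:
            predictor.observe(cmd)
        # A command outside the inferred plan triggers a warning before it
        # is executed.
        predictor.observe("mkfs")

Even this toy example stands or falls on the quality of its underlying intention model, which is precisely the criticism that has been levelled at earlier predictive work in this area.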

2.4 Poor Support for "Design-time" Predictions

Maria Virvou's paper provides an example of run-time error prediction: her system anticipates user failure during interaction. Off-line error prediction performs a similar function during the analysis and design of interactive systems. Frédéric Vanderhaegen's paper explores this application of error models. His contribution is to place the prediction of human failure within the wider context of systems development. Three guiding principles are proposed to help designers anticipate operator errors. His qualitative approach contrasts strongly with the probabilistic approaches advocated within human reliability analysis. He addresses the central concerns of this special edition by proposing a methodology for the application of his guidelines. From my perspective, however, much remains to be done before the usefulness of this approach has been demonstrated for the day-to-day design tasks that face practicing engineers.
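
For readers unfamiliar with the probabilistic tradition against which Vanderhaegen reacts, the sketch below illustrates the general, textbook style of human reliability analysis, in which nominal human error probabilities are adjusted by performance shaping factors and then combined across the steps of a task. The step names, probabilities and factors are invented for illustration; they are not taken from Vanderhaegen's paper or from any published data set.

    # Illustrative sketch only: a generic, textbook-style human reliability
    # calculation. All figures below are invented for illustration.

    # Each task step: (nominal human error probability, performance shaping
    # factor for adverse conditions such as time pressure).
    task_steps = {
        "read alarm value": (0.003, 2.0),
        "select correct valve": (0.001, 5.0),
        "confirm shutdown": (0.0005, 1.0),
    }

    # Adjusted error probability for each step: nominal value scaled by its
    # performance shaping factor.
    adjusted = {step: hep * psf for step, (hep, psf) in task_steps.items()}

    # Assuming the steps fail independently, the task succeeds only if every
    # step succeeds.
    p_success = 1.0
    for hep in adjusted.values():
        p_success *= (1.0 - hep)

    p_task_failure = 1.0 - p_success
    print(f"Estimated probability of task failure: {p_task_failure:.4f}")

The apparent precision of such figures rests entirely upon subjective estimates of the underlying probabilities and factors, which returns us to the concerns of section 2.2 and helps to explain the appeal of qualitative guidelines.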

2.5 Focus on Accidents and Not Incidents

Most previous research has applied human error models to well-known accidents, such as those listed in the introduction. This rhetorical device supports the exposition of the underlying ideas: readers will be familiar with the broad details of the accident from previous research or from the media. It also helps to convince readers that human error models capture significant properties of 'real-world' accidents. However, it introduces a number of problems. Most operator errors have minor consequences. They result in production problems but rarely lead to loss of life on the scale of the accidents that are analyzed in many research papers and textbooks. The bias towards major accidents provides a further justification for the commercial neglect of 'leading edge' techniques from human factors research. Companies seem more concerned about the production costs of high-frequency, low-impact incidents than they are about very low-frequency catastrophic errors.

Chambers, Croll and Bowell's paper provides a glimpse into the nature of these high-frequency failures through a survey of incidents involving programmable devices in small manufacturing enterprises. Their study is valuable because it moves us away from abstract modeling concepts, such as slips and lapses, and towards the problems that arise when users have to cope with faulty machine guards and hardwired trips. This work raises further questions. Their focus is on incidents that the UK Health and Safety Laboratory defines as an "occasion when the safety or health of an individual is adversely affected, or might have been adversely affected, by a failure of an industrial component or process". Human error is considered only to the extent that it affects the more general production and design processes that are intended to protect the user. A further concern is that the incidents are derived from an industry reporting system. Although these may be more typical of everyday failures than those examined in accident case studies, such reporting mechanisms are known to provide a very partial view of the human errors that affect many operators.

2.6 Focus on Single Users and Single Systems

There are further problems. Research into human error is dominated by studies of individual operators interacting with single systems. There is a reluctance to consider the additional levels of complexity that arise from team-based interaction with concurrent systems. This focus was justifiable given the limited understanding of human error mechanisms prior to the 1980s. However, it is increasingly difficult to defend when most production processes are tightly integrated through just-in-time supply chains. The ways in which many modern industries organize their factors of production create problems for the commercial application of single-user error models. Trepess and Stockman's paper describes some of the problems that arise when taking group interaction into account. The issue of feedback, which is important in single-user interaction, becomes critical as more operators seek to coordinate their activities. An emphasis on detailed planning is replaced by a need to support situation awareness and adaptation to changing contexts. Again, however, the authors provide only limited advice about how other analysts might extend and apply their error models to group interaction.

2.7 Focus on Operation and Not Regulation

This section has argued that research into human error has focused upon rare, high-profile accidents rather than less publicized, high-frequency incidents. Previous research has also focused upon single users rather than teams and groups of operators. There are, of course, exceptions to these criticisms. There are also further areas of neglect. For instance, there has been relatively little work on the effectiveness of surveillance activities by regulatory organizations, such as the UK Health and Safety Executive. These activities provide a first line of defense against the managerial and organizational factors that are increasingly believed to be the primary cause of operator failure. Allen and Abate's paper provides a useful insight into surveillance activities within the Federal Aviation Administration. They use task analysis to suggest changes in existing surveillance practices. This highly practical work illustrates the main concern of this editorial: Allen and Abate find only limited means of applying human error modeling to inform FAA certification and monitoring activities.

2.8 Lack of Integration between Contextual Analysis and Requirements Analysis

Recent research has focused upon the latent causes of human error. This work has examined the managerial and organizational context of interaction as well as the physical environment. Previous sections have argued, however, that this change of emphasis has had little or no impact upon commercial attitudes towards human error, where there is a continuing preoccupation with individual erroneous actions by system operators. The relatively poor uptake of ideas about the contextual causes of human error is explained by the difficulty of applying these ideas during the design and operation of complex interactive systems. Maiden, Minocha, Sutcliffe, Manuel and Ryan provide some idea of the problems that arise when applying situated approaches to the requirements engineering process. Human errors can be a symptom of deeper problems in human-machine interaction, human-human communication, systems integration, work domain characteristics, organizational mismanagement and so on.

Design teams can affect many of the contextual factors that lead to human error. For instance, problems in human-machine interaction can be addressed through improved interface design. Many other factors, however, lie beyond their control: systems engineers seldom have the authority to change the managerial and organizational structures that govern interaction with complex systems. The novel point here is not to emphasize the importance of the contextual factors leading to operator error; rather, it is to emphasize how little has actually been done to help practicing engineers take these factors into account. Maiden, Minocha, Sutcliffe, Manuel and Ryan propose scenarios for requirements capture as one solution to this problem. Case studies can be shown to management as a means of communicating the probable impact of current organizational practices on the operation of a potential application. An unresolved question in this work is "where do the scenarios come from?". Industry reporting schemes, such as that exploited by Chambers, Croll and Bowell, provide one solution: scenarios can be derived directly from accounts of previous failures. Unfortunately, most reporting schemes do not capture the sorts of managerial and organizational failures that have been identified by contextual approaches to interface development. Existing databases tend to be populated by accounts of individual erroneous actions.
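
As an illustration of how such a derivation might work in the very simplest case, the sketch below turns a single, hypothetical incident record into a draft scenario for discussion with stakeholders. The record structure, field names and wording are invented for this editorial; they are not taken from the Chambers, Croll and Bowell survey or from the Maiden, Minocha, Sutcliffe, Manuel and Ryan paper.

    # Illustrative sketch only: deriving a requirements scenario from a
    # hypothetical incident record. Real reporting schemes differ widely in
    # what they capture.
    incident = {
        "id": "INC-042",
        "operator_action": "acknowledged the alarm without checking the trend display",
        "context": "the end of a twelve-hour night shift",
        "consequence": "the pump ran dry for several minutes",
    }

    def incident_to_scenario(record):
        """Render an incident record as a question-framed scenario."""
        return (
            f"Scenario {record['id']}: During {record['context']}, an operator "
            f"{record['operator_action']}, with the result that "
            f"{record['consequence']}. How should the new system detect or "
            f"mitigate this situation?"
        )

    print(incident_to_scenario(incident))

Notice that the record, like the databases criticized above, describes an individual erroneous action but says nothing about the managerial or organizational conditions that lay behind it.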

2.9 Little Regard for Human Error during Requirements Analysis

It is ironic that there has been so much research into human error analysis and yet so little attention has been paid to those who must apply the techniques. In particular, few safeguards prevent designers from making erroneous assumptions during the application of human error models. This concern is apparent in the paper by Viller, Bowers and Rodden. They argue that the usefulness of particular design techniques is determined by the social context in which requirements are gathered. For instance, poorly defined processes are likely to produce low-quality results. They will also consume greater resources in the coordination and supervision of production. Although Viller, Bowers and Rodden take a relatively broad perspective on requirements gathering, it is sobering to think that they could be talking about many of the error modeling techniques that have been advocated by human factors research.

Maiden, Minocha, Sutcliffe, Manuel and Ryan hint at the framing problems that affect contextual studies of human error. The commercial development and maintenance of safety-critical systems are constrained by both budgets and schedules. It is, therefore, important that designers have a clear idea of the factors that they must take into account if they are to minimize the potential for human error during interaction. Unfortunately, there appears to be little agreement about what does and does not need to be accounted for when analyzing the contextual factors that lead to human failure. Beynon-Davies' review of the London Ambulance Service dispatch system illustrates this problem. In particular, he argues that the different parties involved in the development and operation of an interactive system will each have a different view of the context of failure. The implications of this are extremely worrying. Viller, Bowers and Rodden have shown that, without consensus, requirements analysis is liable to be a costly and error-prone activity. This applies to human error analysis just as it applies to systems development.

3. CONCLUSION

Human error analysis is useful. However, it is not as useful, nor is it as widely used, as many of its proponents would claim. There is still widespread ignorance of the fundamental characteristics of human perception, physiology and cognition. It is, therefore, hardly surprising that few industries actively exploit contextual approaches to human error. This neglect can be explained in a number of ways:

- there is little methodological support for human error analysis;

- human error modeling techniques depend upon the subjective interpretation of experts;

- many techniques explain the causes of human error but do not support "run-time" predictions;

- many techniques explain human error but do not support "design-time" predictions;

- there has been a focus on human error in major accidents rather than lower impact incidents;

- the focus has been on individual failures rather than team-based errors involving concurrent systems;

- the focus has been on operational errors rather than regulatory failures;

- it is hard to consider the organizational sources of error in conventional requirements analysis;

- few techniques help designers to reach consensus on the contextual sources of latent failures;

- too little has been done to reduce the scope for error during error analysis itself.

Most human factors research is concerned with improving our understanding of human error. Very little of it can be directly applied to reduce the impact or frequency of those errors. If the practical problems listed above are not addressed, then there seems little likelihood that future models of human and organizational failure will be of any practical benefit.

ACKNOWLEDGEMENTS

Thanks are due to Glasgow Accident Analysis Group and Glasgow Interactive Systems Group. This work is supported by EPSRC grants GR/L27800 and GR/K55042.

REFERENCES

Air Accidents Investigation Branch, Report on the Incident to Boeing 737-400 G-OBMM, Near Daventry on 23 February 1995, Report 3/96, Department of Transport, HMSO, London, 1996.

D. Busse and C.W. Johnson, Modeling Human Error within a Cognitive Theoretical Framework. In F. Ritter and R. Young (eds.), Second European Conference on Cognitive Modeling, Nottingham University Press, Nottingham, 1998.

C.W. Johnson and N. Leveson, editors, Proceedings of the First Workshop on Human Error and Systems Development, Glasgow Accident Analysis Group, Technical Report G97/2, University of Glasgow, Scotland, 1997.

V. De Keyser, Temporal Decision Making in Complex Environments, Philosophical Transactions of the Royal Society of London, B327, 569-576, 1990.

J. Rasmussen, Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, Elsevier Science, Amsterdam, 1986.

J. Reason, Human Error, Cambridge University Press, Cambridge, 1990.

J. Reason, Managing the Risks of Organizational Accidents, Ashgate Press, Aldershot, 1997.

K. Vicente and J. Rasmussen, The Ecology of Human-Machine Systems II: Mediating Direct Perceptions in Complex Work Domains, Department of Mechanical and Industrial Engineering, University of Illinois at Urbana-Champaign, 1990.

W. van Vuuren, Organizational Failure, PhD thesis, Faculty of Technology Management, Eindhoven Technical University, 1998.

C. Wickens, Engineering Psychology and Human Performance. C. E. Merrill Publishing, London, 1984.