The Impact of Rhetoric on Accounts of

Human 'Error' in Accident Reports

 

 

Peter Snowdon and Chris Johnson.

 

 

Glasgow Accident Analysis Group, Department Of Computing Science,

University Of Glasgow, Glasgow, United Kingdom, G12 8QQ.

 

Tel: +44 141 339 8855 ext 2917

Fax: +44 141 330 4913

EMail: snowdonp@dcs.gla.ac.uk

 

 

Accident reports provide interface designers with an important source of information about requirements that must be satisfied by future systems. However, the selective presentation of evidence and argument can adversely affect the readers of these documents. This paper argues that an appreciation of rhetoric can highlight these problems. A case study is used to show how rhetorical features can draw attention away from critical factors in human 'error' and system 'failure'.

 

Keywords: Accident reports; human 'error'; safety-critical interaction; rhetoric.

 

 

1. Introduction

Rhetoric is defined as: "1 the art of effective or persuasive speaking or writing. 2 language designed to persuade or impress (often with an implication of insincerity or exaggeration etc.)" (Oxford English Dictionary). More modern interpretations, such as that offered by Billig (1996), suggest that rhetoric plays a larger role in the way we act, think and interact. Rather than seeing rhetoric as a 'special case' of language employed in order to persuade, it is seen as a fundamental component of cognition in general.

 

This paper argues that rhetorical techniques can bias the reading of accident reports. Rhetoric suggests that texts are structured by arguments and that this, in turn, constrains the content. The argumentative form of accident reports is intended to lead the reader towards particular conclusions. This can draw attention away from other hypotheses. This is particularly significant where claims about operator error can obscure the underlying causes of system failure. As a consequence, it can be difficult for interface designers to fully address the causes of major accidents (Reason, 1996).

 

1.1. Safety-Critical Human-Computer Interaction

The impact of automation on safety-critical interfaces has been widely discussed (Norman 1989; Wiener and Curry 1980). In order for designers to properly understand system requirements, it has been argued that the model of human-computer interaction has to include a wide range of elements (Wahlstrom 1991). The focus has shifted from the individual to the social, managerial and environmental context of interaction. A central argument in this paper is that by using rhetorical techniques to focus upon claims about operator error, accident reports draw attention away from these wider factors. They may also obscure important failures in the systems and safety procedures that are intended to preserve the safety of an application. Rhetorical techniques, therefore, not only bias the reader towards particular conclusions but can also prevent interface designers from fully understanding the context of failure. Accidents need to be understood in terms of the overall interactions between people, machines and the environment. They are, typically, not the result of any single failure by one component. In particular, they cannot be understood simply in terms of operator 'error' (Rasmussen 1986).

 

1.2. Case Study

The report used here is the National Transportation Safety Board report Controlled Collision With Terrain: Transportes Aereos Ejecutivos, S.A. (TAESA), Learjet 25D, XA-BBA, Dulles International Airport, Chantilly, Virginia, June 18, 1994 (NTSB/AAR-95/02, PB95-910402). The choice is appropriate because the accident typifies the complex system failures and human errors that lead to many major accidents. It is also a relatively minor accident, with fewer political and public associations. This is important because the report's arguments are less likely to be shaped by the need to appease interested parties, or by the casting or avoidance of blame, pressures that larger accidents find harder to escape.

 

The accident occurred on June 18, 1994 at Dulles International Airport. A Learjet 25D owned by Transportes Aereos Ejecutivos, S.A. (TAESA) crashed 0.8 nautical miles south of the runway threshold. The plane, XA-BBA, crashed during an attempted Instrument Landing System (ILS) approach in difficult meteorological conditions, colliding with trees on the outer limits of the airport grounds. All 10 passengers and both pilots were killed. Thick fog had necessitated an instrument approach, and the flight had made one missed approach before the second and final approach.

 

1.3. Outline Of the Paper

This paper argues that rhetorical devices can affect or bias the interpretation of human 'error' and system 'failure' in accident reports. Argumentative structures focus the reader's attention on particular aspects of an accident. As a result, other hypotheses, which are not necessarily contradictory or incompatible with the main conclusion, are not given adequate consideration. Section 2 looks at a number of alternative conclusions that might be drawn from our case study. Section 3 uses our case study to develop a taxonomy of rhetorical devices. It is argued that these devices obscure some of the alternative hypotheses identified in Section 2. The final section draws conclusions from the study.

 

2. Alternative Hypotheses

For every accident there are a number of alternative conclusions concerning the cause. Accident reports try to identify a single causal explanation. As we have argued, however, it is important for interface designers to be aware of alternative hypotheses and also of the wider contextual factors that surround major failures. In the accident report used here, while pilot error is cited as the direct cause, there are other factors to be considered.

 

2.1. Other Operators: Air Traffic Control

The argument that the pilot is to blame obscures the role that others, such as Air Traffic Control, had in the incident. This aspect of the accident is not fully documented in the report. It was concluded that:

 

    1. Air Traffic Control services provided to XA-BBA were in accordance with procedures outlined in FAA Order 7110.65 Air Traffic Control (p10)

 

However, Air Traffic Control did play a key role in the interactions that led to the accident. Any attempt to improve the support that the pilot received, whether through automated systems or improved training procedures, must carefully examine the role that Air Traffic Control played. For example, the D-BRITE radar display provided the controller with a picture of the positions of the aircraft, but it was set to an overhead view rather than a view of the glide path (p26/27). This prevented the controller from accurately observing the problems that the pilot was experiencing. There were further problems with this support system: the approach was erratic but remained within the glideslope limits, so the controller did not receive a warning about the flightpath.
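To illustrate why an approach that is erratic but stays within limits produces no alert, the sketch below shows a simple threshold rule of the kind described above. It is a minimal, hypothetical illustration: the threshold value, the sampled deviations and the function names are assumptions introduced for the example, not details of the D-BRITE equipment or of the report.

```python
# Hypothetical illustration (not the actual D-BRITE or alerting logic):
# a threshold-based alert only fires when a sampled glideslope deviation
# exceeds a fixed limit, so an erratic approach that swings repeatedly but
# stays just inside that limit never triggers a warning.

DEVIATION_LIMIT = 1.0  # assumed alert threshold, in glideslope 'dots'

def alert_required(sampled_deviations):
    """True if any sampled deviation exceeds the fixed limit."""
    return any(abs(d) > DEVIATION_LIMIT for d in sampled_deviations)

# An erratic approach: repeated swings above and below the glideslope,
# yet every sample remains inside the +/- 1.0 dot limit.
erratic_but_within_limits = [0.9, -0.8, 0.7, -0.9, 0.8, -0.7]

print(alert_required(erratic_but_within_limits))  # False: no warning issued
```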

 

Further human factors problems concern the role of Air Traffic Control in this accident. The controller's attention was divided across the flight paths of all the aircraft: "During the course of periodically scanning the radar, his primary concerns would have been the airplane's distance from the airport and separation from the other aircraft" (p25). Thus the individual flight was not given critical attention. The controller's workload is also high; for example, in order to determine an aircraft's rate of descent: "The altitude readout is shown in hundreds of feet. To determine an aircraft's rate of descent, a controller would have to continually monitor both the altitude readout and the aircraft's progress toward the runway". However, the focus on pilot error in the NTSB report leaves little room for a detailed discussion of this critical topic.
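The second quotation implies a non-trivial piece of mental arithmetic, performed while also scanning other traffic. The sketch below spells that calculation out; the function name, update interval and altitude values are assumptions introduced purely for illustration and are not taken from the report.

```python
# Hypothetical illustration of the arithmetic a controller must perform from
# successive data-block altitude readouts (shown in hundreds of feet) to
# estimate an aircraft's rate of descent.

def descent_rate_fpm(first_readout, second_readout, seconds_between_updates):
    """Estimate rate of descent in feet per minute from two altitude
    readouts given in hundreds of feet (e.g. 18 means 1,800 ft)."""
    altitude_lost_ft = (first_readout - second_readout) * 100
    return altitude_lost_ft * 60.0 / seconds_between_updates

# Assumed example: the readout drops from 018 to 013 over roughly 20 seconds.
print(descent_rate_fpm(18, 13, 20))  # 1500.0 ft/min
```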

 

2.2. Technical Failure

Several technical faults were found in the aftermath of the accident. These had an important impact upon the information that was displayed both to the pilot and to the Air Traffic Controllers. For instance, the Instrument Landing System was set incorrectly (p11), and the Low Level Windshear Alert System was configured with another airport's geographical data (see Appendix D of the report). The report argues that these failures did not directly contribute to the accident. However, they are symptomatic of a general management problem that would not have been highlighted had the accident not occurred. Wahlstrom (1991) points to the need to understand the managerial context of error.

 

2.3. Human Factors

The focus upon human 'error' in the NTSB report not only diverts attention away from the problems experienced by other users and from a range of technical failures; it can also obscure the ways in which safety systems failed to provide necessary information during the accident.

 

 

 

The many different areas of attention mean that a bias towards one can limit the attention paid to others. As designers, there is an increasing burden on us to understand the larger domain. As readers, we need to be aware of bias so that we do not focus on too small an area. The next section argues that rhetorical techniques draw the designer's attention away from other aspects of human 'error' and system 'failure'.

 

3. Sources of Bias

This section argues that rhetorical effects can influence both the structure and the language used in an accident report. Rhetorical devices, such as emphasis, inclusion/exclusion, arguments about validity and presentation, all help to bias the designer's interpretation of the events leading to major accidents.

 

3.1. Partial Evidence

The report is made up of three main sections: evidence, analysis, and conclusions/recommendations. The chart below gives the breakdown of the number of pages devoted to each section.

 

While accident reports are often long documents, partial evidence is inevitable. A complete record of events is impossible, and an accident report therefore tries to focus on those areas that the analyst believes to be 'critical'. In an accident such as this one, where there seems to be no evidence of mechanical failure, the onus is on the report to find the cause that best fits the information to hand. The main focus is on providing an argument, and evidence, to support the conclusion that the pilot is to blame.

 

Because of this restriction on the amount of evidence put forward, there are areas that receive less attention. Air Traffic Control, for instance, is ruled out as a cause, and so the facts of its involvement in the accident are not fully included. As readers we are constrained by the line of argument, as we do not have all of the information to hand. In some respects we rely on the report's decisions to leave out certain details in the interests of economy.

 

3.2. Biasing Expectations

The report works at two levels: there is the overall argument that is being put forward, and then there are the smaller rhetorical devices used to define and refine the points made. Within this report the main argument is stated at the start, in the executive summary, which is also the report's conclusion:

 

The National Transportation Safety Board determines that the probable causes of the accident were the poor decisionmaking, poor airmanship, and relative inexperience of the captain in initiating and continuing an unstabilized instrument approach that led to descent below the authorized altitude without visual contact with the runway environment. Contributing to the cause of the accident was the lack of a ground proximity warning system on the airplane. (p3 and p32)

 

By stating the determined cause at the outset, the report gives the reader a biased expectation that the subsequent analysis can then fulfil.

 

3.3. Subjective Interpretation of Evidence

Rhetoric is not a quantifiable concept; the recognition of bias is, therefore, a qualitative assessment. Evidence which on the surface appears neutral can be used to support an argument. For instance, the report makes the case that the landing was possible. This is demonstrated by referring to the aircraft immediately before and after XA-BBA in the landing sequence:

 

UAL102 completed a Category III approach at 0610:22, and reported clearing the runway. AA 74 completed a Category III approach at 0617, and reported clearing the runway. These approaches occurred before and after the first approach by XA-BBA. (p11)

 

The use of this evidence helps to establish a context in which the landing of the aircraft was clearly possible. It ignores the specific circumstances of that landing by focusing on those before and after it. There is little mention of the difficulties faced by other aircraft, such as UAL 186's decision to divert to another airport.

 

3.4. Use of Indirect Evidence

The central argument is that the pilot was not properly qualified to land the plane under abnormal conditions. However, there is little direct evidence, such as cockpit voice recordings, to prove that the pilot was to blame. In consequence, the report uses less direct evidence to support the hypothesis that the pilot was responsible for the accident. For example, confidential comments from the pilot's simulator instructor are included, even though these were not intended for external publication:

 

The confidential evaluation of the other TAESA applicant, who was the accident captain's partner in the upgrade training, stated, in part: During (his) simulator training, he demonstrated satisfactory pilot skills when flying the aircraft under normal conditions. (p8)

 

This citation is ambiguous: it is not clear whether the evaluation refers to the captain of the aircraft or to his training partner.

 

3.5. Refutation

In order to prove a case it is necessary to disprove likely alternatives. Section 2.4 of the report gives a good example of such refutation. It is hypothesised that freak weather conditions, or the turbulence created by another aircraft, might have caused the Learjet to deviate. This possibility is examined briefly and refuted using evidence from the crash and surrounding indicators:

 

The possibility of turbulence causing the erratic flightpath was rejected because of the stable weather and the other approaches flown by airplanes at the time. Wake turbulence from the AA 74 was rejected because of the 9-minute separation between the two aircraft. (p27)

 

Although such hypotheses do broaden the scope of the analysis, there is no justification for the decision to consider these possibilities and yet not to consider in detail the possibility of technical or managerial failure.

 

3.6. Typographical Emphasis

Typography provides a further rhetorical device. The manner in which the text is written has a bearing on the way it is read and interpreted. For example: "This anomaly was in the monitoring phase only, and the problems were corrected." The use of italics for emphasis draws attention to the statement; this rhetorical technique both indicates and emphasises to the reader that the problem has been resolved.

 

One of the problems in using typography to emphasise key points is that it may dissuade readers from questioning those arguments. "The postaccident survey of the equipment revealed that the approach light system monitor was malfunctioning." This creates a potential contradiction: if the malfunction extended back to the time of the accident, then some failures were not corrected. The use of such typographical devices can dissuade designers from asking questions that are necessary for an understanding of the wider context that surrounds human 'error' and system 'failure'.

 

4. Conclusion

This paper argues that unless interface designers consider the rhetorical effects in accident reports, it will be difficult for them to fully appreciate the context of human 'error' and system 'failure'. This, in turn, can hinder the subsequent development of interactive systems. We have, therefore, identified a number of rhetorical devices that are commonly used in accident reports: the presentation of partial evidence; the biasing of expectations; the subjective interpretation of evidence; the use of indirect evidence; refutation; and typographical emphasis.

 

 

We have argued that these rhetorical devices focus the reader's attention towards one major conclusion. This focus is, typically, upon causal aspects and not upon a contextual analysis of the system as a whole. In our case study, the accident was the result of complex interactions between the pilot, the Air Traffic Controller and their systems. The NTSB report, however, concludes that the pilot crashed the plane due to incompetence.

 

Acknowledgements

Thanks are due to the other members of the Glasgow Accident Analysis Group. The work is supported by EPSRC grants GR/L27800 and GR/K55042.

 

References

 

Billig, M (1996), Arguing and Thinking: A Rhetorical Approach to Social Psychology. Cambridge University Press

 

Norman, D A. (1989), The "Problem" Of Automation: Inappropriate Feedback And Interaction, Not "Overautomation", ICS Report 8904

 

Norman, D A (1990), Commentary: Human Error And The Design Of Computer Systems. Communications Of The ACM, January 1990 Vol 33 Number 1

 

Taylor, D H. (1987), The Hermeneutics Of Accidents And Safety. In New Technology And Human Error Edited By Rasmussen, Duncan And Leplat. John Wiley & Sons Ltd.

 

Wahlstrom, B (1991) Influence Of Organisation And Management On Human Errors. Probabilistic Safety Assessment and Management. Vol 1, Elsevier Science Publishing Co., Ltd.

 

Wells, J E and Ryan, T G (1991) Integrating Human Factors Expertise Into The PRA Process. Probabilistic Safety Assessment and Management. Vol 1, Elsevier Science Publishing Co., Ltd.

 

Gilmore, D (1996) The Trouble With Usability: People, Computers And Organisations. Unpublished

 

Suchman, L (1987), The problem of human machine communication, Cambridge University Press

 

Reason, J (1996), Human Error, Cambridge University Press

 

Rasmussen, J (1986) Human Errors. In New Technology and Human Error Edited By Rasmussen, Duncan And Leplat. John Wiley & Sons Ltd.

 

Wiener, E L and Curry, R E (1980). Flight-deck automation: promises and problems. Ergonomics, vol. 23, no. 10, pp 995-1011.

 

Leveson, N G (1995). Safeware: System Safety and Computers. Addison-Wesley Publishing Company, Inc.