This article appeared in the proceedings of SAFECOMP'99. Copyright belongs with Springer Verlag Lecture Notes in Computer Science, who have kindly agreed to allow publication on this web site.



A First Step Towards the Integration of Accident Reports and Constructive Design Documents

Chris Johnson

Glasgow Accident Analysis Group, Department of Computing Science,
University of Glasgow,
Email: johnson@dcs.gla.ac.uk

Abstract

Accident reports are intended to explain the causes of human error and system failure. They are based upon the evidence of many different teams of experts and are, typically, the result of a lengthy investigation process. They are important documents from an HCI perspective because they guide the intervention of regulatory authorities who must reduce the impact and frequency of human 'error' in the workplace. There are, however, a number of problems with current practice. In particular, there are no established techniques for using previous findings about human 'error' and systems 'failure' to inform subsequent design. This paper, therefore, shows how extensions to design rationale and contextual task analysis techniques can be used to avoid the weaknesses of existing accident reports.

Keywords: accident analysis; argument; human error; system failure; design rationale.

1. INTRODUCTION

Given the importance of accident reports for the development of interactive systems, it is surprising that there has been relatively little research into the usability and utility of these documents (Love and Johnson, 1997). The mass of relevant literature about safety-critical interface design (Norman, 1990; Reason, 1990) and even the usability of design documents in general (Moran and Carroll, 1995) is not matched in the field of accident reporting. There are some notable exceptions; for example, Prof. van der Schaaf heads a well-established group at the Technical University of Eindhoven. However, such work remains rare. The bulk of human factors research continues to focus upon techniques that can be used to analyse the causes of human error rather than upon the delivery mechanisms that publicise those findings to practising interface designers and systems engineers. This paper, therefore, presents techniques that use findings about previous failures to inform the subsequent development of interactive systems.

1.1 The Embley Case Study

A collision between the bulk ship River Embley and the Royal Australian Naval patrol boat HMAS Fremantle will be used to illustrate the remainder of this paper (Marine Incident Investigation Unit, 1997). This accident has been chosen because it was the result of complex interactions between several different operators and several different systems. For instance, the crew of the River Embley were equipped with a GPS display, two radars, a gyro compass and bearing repeaters, automatic steering systems and a course recorder plotter. This collision was also the result of complex interactions between the various members of both crews. These interactions were affected by their on-board systems but also by individual levels of experience and training within the crews. Finally, this accident typifies the many 'near-misses' that contribute most to our understanding of human 'error' and system 'failure'. Nobody was seriously hurt and no pollution resulted from the collision.

At 22:00 on 13 March 1997, three patrol boats were approaching Heath Reef, part of the Great Barrier Reef, from the south. The River Embley was a deep draught vessel and so was obliged to keep to the eastern side of a two-way route off the reef. VHF contact was established between the bridge of HMAS Fremantle and the River Embley. A few minutes after 22:00, the lead patrol vessel Fremantle crossed ahead of the Embley, followed in line by the second patrol boat. The third vessel altered course to pass between the Embley and Heath Reef. HMAS Fremantle made a number of small alterations to her course and at about 22:08 the rudder was put 20° to starboard. The patrol boat collided with the River Embley.

2. UNDERSTANDING THE CAUSES OF HUMAN ERROR AND SYSTEMS FAILURE

The purpose of the MIIU investigation 'is to identify the circumstances of an incident and determine its causes. All reports of the investigation are published to make the causes of an accident known within the industry so as to help prevent similar occurrences' (http://www.miiu.gov.au). This section, therefore, explains why it can be difficult for readers to identify the causes of human 'error' and systems 'failure' from conventional accident reports.

2.1 Locating the Evidence to Support an Argument

It can be difficult for readers to identify the causes of an accident because the evidence that supports a particular conclusion may be distributed over many different pages in a conventional accident report. For instance, the MIIU found that:

"The reasons for HMAS Fremantle's actions...involve a complex chain of human factors, which include, but are not limited to:

" (page 30)

The MIIU found that the Fremantle's crew lacked experience of encounters within the Great Barrier Reef. The evidence for this conclusion is presented on pages 8 and 16 of the report. The Fourth Officer was in charge in the run-up to the collision and he was undergoing watch keeping training:

"It (the Fremantle) normally operates with a crew of 23, but on 13 March the crew numbered 24. This included the Commanding Officer, the Executive Officer, the Navigating Officer and the Fourth and Fifth Officers, both under watch keeping training." (page 8).

"The Commanding Officer remained on the bridge monitoring the Fourth Officer until 21:20 when the Patrol Boat was off Hay Island. The Fourth Officer was fixing the ship's position every 6 minutes. Satisfied that the Fourth Officer was in complete control of the situation the Commanding Officer went to his cabin, about three metres from a flight of eight steps that led from the main deck to the bridge." (page 16)

This distribution of analysis and evidence creates significant problems for interface designers who must exploit the recommendations of accident reports to guide the subsequent development of navigation systems and training procedures. It is a non-trivial task to filter out the mass of contextual evidence presented on pages 8, 16 and 30 of the MIIU document. Unless readers can do this, however, it is difficult to trace the connection between the working practices on the Fremantle and the causes of the accident as presented in the conclusions of the report.

2.2 Implicit Arguments

A further problem is that readers must often re-construct the implicit arguments that are embedded within accident reports. For instance, the previous citation from page 30 argued that the Fremantle's crew were unaware of traffic on the Reef. This is supported by evidence on page 18 of the report. The Commanding Officer was unaware of the position of the Embley as he ordered the manoeuvre:

"The Commanding Officer asked what rudder angle had been ordered and the Fourth Officer told him 10š, and the Commanding Officer advised him to increase the angle to 20š. At this time he became aware of the voices on the VHF. Almost immediately the Commanding Officer saw a green light and became aware of a "great black wall". He immediately issued direct orders to the helmsman of "hard to starboard" and full astern" (page 18).

This argument was never explicitly made in the MIIU report. The reader is forced to infer a link between the evidence on page 18 and the conclusions on page 30. In this instance, the inference seems well justified. However, previous work has shown that many readers construct a mass of causal explanations and inferences that were never intended by the producers of an accident report (Love and Johnson, 1997).

2.3 Alternative Lines of Reasoning

It can often be difficult to identify alternative hypotheses about human 'error' and systems 'failure'. For instance, the first of the following quotations presents the MIIU argument that the River Embley's crew might have used an Aldis lamp to alert the Fremantle. The second quote is taken from the Master's submission to the MIIU, in which he argues that the use of this signalling device would not have helped to avoid the collision:

"With hindsight, it might have been better to use an Aldis lamp to attract the attention of the approaching vessel, under Rule 36. The Aldis lamp would also have illuminated the ship side and the expanse of hull between the foremast and the mainmast." (page 26)

"As the risk of, or impending, collision had only been observed by either vessel crew immediately before impact, and the sound signals - whose use was close at hand - not by hurrying some 10 meters to the wing (lighting an Aldis light in the wheelhouse would destroy night vision, and be unacceptable both aboard and during an inquiry), were "completed at or just before the moment of collision", use of the Aldis lamp was inappropriate in those brief moments" (page 33).

The layout of many conventional reports makes it difficult for readers to view both sides of such arguments. In our example, the MIIU analysis appears within the main body of the report while the Master's rebuttal is documented in an appendix after the conclusions of the report. Such a separation makes it difficult for designers to accurately assess the best means of avoiding future failures either through improved training practices or through the development of additional systems support.

3. CONCLUSION, ANALYSIS AND EVIDENCE DIAGRAMS

The previous section has argued that a number of problems prevent designers from accurately assessing the causes of human 'error' and systems 'failure' as they are documented in conventional accident reports. This section goes on to argue that a number of graphical techniques can be used to avoid these limitations.

3.1 Locating the Evidence to Support an Argument

Figure 1 uses a Conclusion, Analysis, Evidence diagram to represent the relationship between the MIIU's conclusions and the evidence that is presented within their report.

Figure 1: Conclusion, Analysis, Evidence (CAE) Diagram for the Fremantle Collision

The MIIU report concluded that the Fremantle's crew made several human 'errors'. These mistakes included their failure to complete adequate contingency and passage planning. This analysis is supported by evidence that the crew failed to identify the waters off Heath Reef as being restricted for deep draught vessels, see page 29 of the report. The human errors also included a lack of awareness about the other traffic on the reef. This is supported by evidence that both the Fourth Officer and the Commander assumed that the River Embley was some 2.5 miles away when she was, in fact, much closer. This evidence is cited on page 18 of the report. The Fremantle's crew also lacked experience of encounters within the Great Barrier Reef. This analysis depends upon two related pieces of evidence: firstly, that the Fourth Officer was on the bridge in the lead-up to the collision and, secondly, that this officer was undergoing training in watch keeping. Finally, human factors problems led to the collision because the decision to apply 20 degrees of starboard helm was based upon incomplete and scanty information. The Commander's surprise at the consequences of his decision, cited on page 18 of the report, provides evidence for this assertion.

Figure 1 explicitly illustrates the way in which pieces of evidence contribute to an investigator's analysis of human 'error' and systems 'failure'. The intention is not that CAE diagrams should replace conventional reporting techniques but that they should provide an overview and structure for the argument that these documents contain. They provide a road map for the analysis that is presented in an accident report.
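To make this structure concrete, the following sketch encodes the argument of Figure 1 as a small data model in Python. The class names, fields and traversal are our own assumptions rather than a published schema; the node contents, however, are taken directly from the MIIU findings discussed above.

from dataclasses import dataclass, field

@dataclass
class Evidence:
    text: str
    page: int  # page of the accident report on which the evidence appears

@dataclass
class Analysis:
    text: str
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Conclusion:
    text: str
    analyses: list[Analysis] = field(default_factory=list)

# Figure 1, encoded: each line of analysis is tied back to the report
# pages that support it, so a reader can trace conclusion to evidence.
fremantle = Conclusion(
    "The Fremantle's crew made several human 'errors'",
    analyses=[
        Analysis("Inadequate contingency and passage planning",
                 [Evidence("The crew failed to identify the waters off "
                           "Heath Reef as restricted for deep draught "
                           "vessels", page=29)]),
        Analysis("Lack of awareness of other traffic on the Reef",
                 [Evidence("The Fourth Officer and the Commander assumed "
                           "the River Embley was some 2.5 miles away",
                           page=18)]),
        Analysis("Lack of experience of encounters within the Great "
                 "Barrier Reef",
                 [Evidence("The Fourth Officer was on the bridge in the "
                           "lead-up to the collision", page=16),
                  Evidence("The Fourth Officer was undergoing watch "
                           "keeping training", page=8)]),
    ],
)

# A 'road map' of the report: each line of analysis with its source pages.
for analysis in fremantle.analyses:
    pages = sorted({e.page for e in analysis.evidence})
    print(f"{analysis.text} (see pages {pages})")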

3.2 Implicit Arguments

CAE diagrams can also help readers to identify the implicit inferential chains that are a common feature of many accident reports (Johnson, 1997). For example, the previous section argued that there is no clear link in the conventional report between the conclusion of human error and the evidence that supports this conclusion. The following CAE diagram, therefore, shows how evidence on page 18 supports the claim on page 30 that the Fremantle's crew were unaware of other traffic on the Reef.

Figure 2: Using CAE Diagrams to Represent Implicit Arguments.

Not only is it important that accident reports explicitly represent the lines of analysis that support a particular conclusion, it is also important to record any evidence that might contradict such an argument. This is illustrated by the dotted line in Figure 2; there is evidence to suggest that the on-board systems did alert the crew of the Fremantle to the presence of the Embley but that the Fremantle's crew were unsure of its exact position. The following section builds on this argument to show how CAE diagrams can represent alternative lines of analysis and not simply contradictory evidence about operator 'error'.
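The distinction between supporting and contradicting evidence can also be made explicit in such an encoding. The sketch below adds a link polarity to the data model; the representation is again an assumption made for the purposes of illustration, while the two items of evidence are those discussed above.

from dataclasses import dataclass

@dataclass
class EvidenceLink:
    text: str
    page: int
    polarity: str  # "supports" or "contradicts" the line of analysis

analysis = "The Fremantle's crew were unaware of other traffic on the Reef"
links = [
    EvidenceLink("The Commanding Officer only became aware of the Embley, "
                 "a 'great black wall', moments before impact",
                 18, "supports"),
    EvidenceLink("On-board systems did alert the crew to the Embley's "
                 "presence, although its exact position was unclear",
                 18, "contradicts"),
]

for link in links:
    # A solid line denotes support; a dotted line, as in Figure 2,
    # denotes contradictory evidence.
    marker = "----" if link.polarity == "supports" else ". . ."
    print(f"{analysis}\n  {marker} {link.text} (page {link.page})")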

3.3 Alternative Lines of Reasoning

CAE diagrams provide an overview of the arguments that both support and weaken particular conclusions. Figure 3 uses a dotted line to show the way in which the Master's argument about the use of the Aldis lamp called into question that of the MIIU investigator. In contrast to the conventional report, the relationship between these two lines of argument is explicitly represented within this diagram.

Figure 3: Using CAE Diagrams to Represent Alternative Arguments

There are strong differences between CAE diagrams and other notations that have been used to support accident analysis, such as Fault Trees (Love and Johnson, 1997). These formalisms are, typically, used to map out the timeline of operator 'error' and system 'failure' that leads to an accident. In contrast, CAE diagrams represent the analytic framework that is constructed from the evidence about those events. In this respect, our approach shares much in common with Ladkin, Gerdsmeier and Loer's WB graphs (1997).

4. LITERATE INVESTIGATIONS

The previous section has shown how diagrammatic techniques can be used to support the presentation of accident reports. This has important consequences for interface development. Firstly, CAE diagrams encourage accident investigators to explicitly document the evidence that supports claims about 'operator error'. Secondly, they help the readers of an accident report to trace and understand the arguments that lead to those claims. Such benefits are of little value, however, if designers cannot exploit the products of accident investigations to support development. This section, therefore, explains how techniques from design rationale (Moran and Carroll, 1995) and contextual task analysis (Cockton, Clarke, Gray and Johnson, 1996) can be used in conjunction with CAE diagrams to provide a link between the analytical techniques of accident investigations and the constructive techniques of interface development.

4.1 Design Rationale

CAE diagrams provide a graphical representation of the arguments that accident reports construct for the causes of human 'error' and systems 'failure'. In contrast, design rationale notations provide a graphical overview of the arguments that support development decisions (Buckingham Shum, 1995). For example, Figure 4 illustrates some of the design options that might improve situation awareness on the Reef. The first option is to force all ships to notify their position to an existing monitoring system. This is supported by the criterion that it would provide an external means of ensuring that crews comply with regulations; the Reefrep system could monitor and log the reporting behaviour of each vessel. The development of such a system is not supported by the effect that it would have upon crew workload. The second design option is to use crew training procedures as a means of ensuring adequate levels of situation awareness. This option is not supported by the criterion of external checks, which are difficult to perform on crew training practices.

Figure 4: QOC diagram showing design options for improved situation awareness.
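For illustration, the QOC structure of Figure 4 might be encoded in the same style as the CAE sketch above. The signed assessments record whether a criterion supports or weakens an option; the encoding itself is an assumption, but the question, options and criteria are those described in the text.

from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    text: str

@dataclass(frozen=True)
class Criterion:
    text: str

question = "How can situation awareness be improved on the Reef?"
reporting = Option("Ships notify their position to the Reefrep system")
training = Option("Rely upon crew training procedures")
external_checks = Criterion("External checks that crews comply with regulations")
workload = Criterion("Impact upon crew workload")

# (option, criterion) -> True if the criterion supports the option.
assessments = {
    (reporting, external_checks): True,   # Reefrep can monitor and log reports
    (reporting, workload): False,         # position reporting adds workload
    (training, external_checks): False,   # compliance is hard to check externally
}

print(question)
for (option, criterion), supported in assessments.items():
    sign = "+" if supported else "-"
    print(f"  [{sign}] {option.text}  <-  {criterion.text}")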

A major limitation with the previous diagram is that it provides little or no indication of the status or source of the criteria that are represented. In other words, we have no means of assessing the evidence that external checks are, indeed, difficult to perform on crew training practices. Such problems can be avoided by integrating design rationale techniques, such as the QOC notation shown in Figure 4, with findings about human 'error' and system 'failure' during previous accidents and incidents.

4.2 Using Accidents to Provide Contextual Support For Development Decisions

Figure 5 integrates CAE and QOC diagrams for the Fremantle collision. The CAE diagram represents the MIIU's finding that the crew were unaware of other ships in their vicinity. A link is then drawn to the QOC diagram to show that this finding justifies designers in considering how to improve situation awareness amongst the crews in the Reef area. It is important not to underestimate the benefits that such links provide. For instance, it is relatively easy to provide well-considered solutions to the problems that are addressed in safety cases. It is less easy to know what problems should be anticipated during the development of safety-critical interfaces (Reason, 1990). By linking development documents directly to the products of accident investigations, it is possible to ensure that designers base their subsequent development decisions at least partly upon those problems that have arisen with previous applications.

Figure 5: Using Previous Operator 'Errors' to Justify Asking the Questions in QOC Diagrams.
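A sketch of this integration follows. A CAE finding is tied, through an assumed 'motivates' link, to the QOC question that it justifies asking; the link representation is our own illustration, while the finding and question are those of Figure 5.

# The CAE finding, as documented in the accident report.
cae_finding = {
    "analysis": "The crew were unaware of other traffic on the Reef",
    "source": "MIIU Report 112, page 30",
}

# The QOC question that the finding justifies asking.
qoc_question = "How can situation awareness be improved on the Reef?"

# A 'motivates' link ties the design question back to the accident report,
# so reviewers can see why the question was raised during development.
motivates = [(cae_finding, qoc_question)]

for finding, question in motivates:
    print(f"Q: {question}")
    print(f"   raised because: {finding['analysis']} ({finding['source']})")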

Further links can be drawn between the analytical products of accident investigations and the constructive use of design rationale. For instance, evidence about previous operator 'errors' can be used to support particular lines of argument in a QOC diagram. Figure 6 illustrates this approach. Relying upon improved training procedures, rather than a Reef reporting system, is not supported by the argument that external checks can be conducted to ensure compliance. This argument is, in turn, supported by the CAE analysis that the Fremantle's training procedures left the crew unprepared for their encounters on the Reef. Again, by integrating these two approaches, interface designers have an explicit means of demonstrating that the 'mistakes' of the past are being used to inform subsequent interface development. In this case, improved training procedures may have to be supported by automated systems.

Figure 6: Using Previous Operator 'Errors' to Justify Criteria in QOC Diagrams.
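The corresponding sketch for Figure 6 cites a CAE analysis as the grounds for a QOC assessment. As before, the layout is an illustrative assumption rather than the published notation.

# The negative assessment of the training option against the external-checks
# criterion, justified by a line of CAE analysis from the accident report.
assessment = {
    "option": "Rely upon crew training procedures",
    "criterion": "External checks that crews comply with regulations",
    "supported": False,
    "grounds": "CAE analysis: the Fremantle's training procedures left the "
               "crew unprepared for encounters on the Reef (MIIU Report 112)",
}

def explain(a: dict) -> str:
    """Render an assessment together with the accident-report argument
    that justifies it."""
    verdict = "supported" if a["supported"] else "not supported"
    return (f"Option '{a['option']}' is {verdict} by criterion "
            f"'{a['criterion']}'\n  because: {a['grounds']}")

print(explain(assessment))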

Much further work remains to be done. Although we have empirical evidence to support the use of design rationale, these findings must be extended to support the integration of CAE diagrams (Johnson, 1996). It is important to emphasise that doubts still remain about the syntax used in Figures 5 and 6. We are aware that the proliferation of hypertext links can lead to a complex tangle that frustrates navigation and interpretation by interface designers and regulatory authorities. Similarly, more work needs to be conducted to determine whether it is appropriate to constrain the semantics of the links between CAE and QOC diagrams. Our initial development of this technique has exploited informal guidelines about the precise nature of these connections. However, these guidelines may have to be codified if the approach is to be used by teams of accident investigators and interface designers. We have gone at least part of the way towards resolving these problems through the development of tool support. Three-dimensional navigation techniques using VRML (the Virtual Reality Modeling Language) support the visualisation and manipulation of the structures in Figures 5 and 6.

5. CONCLUSION AND FURTHER WORK

Accident reports are a primary mechanism by which designers can learn from the mistakes of the past. These documents analyse and explain the causes of human 'error' and systems 'failure'. Unfortunately, a range of recent work has identified limitations and weaknesses in conventional reporting techniques (Johnson, 1997; Ladkin et al., 1997). It can be difficult for readers to locate the many different pieces of evidence that support particular arguments about operator 'error' and system 'failure'. These items of information can be scattered throughout the many different pages of an accident report. A second problem is that readers are often forced to reconstruct complex chains of inference in order to understand the implicit arguments that are embedded within accident reports. Finally, it can be difficult to identify alternative hypotheses about human factors problems and systems failures given existing reporting techniques.

This paper has argued that the graphical structures of Conclusion, Analysis, Evidence (CAE) diagrams can be used to avoid the problems mentioned above. These explicitly represent the relationship between evidence and lines of argument. They also provide a graphical overview of the competing lines of argument that might contradict particular interpretations of human 'error' and systems 'failure'. However, these diagrams do not directly support the subsequent development of interactive systems. We have, therefore, argued that design rationale techniques be integrated with the argumentation structures of CAE diagrams. This offers a number of benefits. In particular, the findings of previous accident investigations can be used to identify critical design questions for the subsequent development of interactive systems. Similarly, the arguments that support or weaken particular design options can be linked to the arguments in accident reports. Previous instances of operator 'error' can be cited to establish the importance of particular design criteria. This helps to ensure that evidence from previous accidents is considered when justifying future development decisions.

ACKNOWLEDGEMENTS

Thanks are due to the Australian Department of Transport and Regional Development and their Marine Incident Investigation Unit. Their openness has greatly helped efforts to improve accident reporting. I would also like to acknowledge the support of the Glasgow Accident Analysis Group and Glasgow Interactive Systems Group. This work is supported by EPSRC grants GR/L27800 and GR/K55042.

REFERENCES

S. Buckingham Shum, Analysing The Usability Of A Design Rationale Notation. In T.P. Moran and J.M. Carroll (eds.), Design Rationale: Concepts, Techniques and Use, Lawrence Erlbaum, Hillsdale, New Jersey, United States of America, 1995.

G. Cockton, S. Clarke, P. Gray and C.W. Johnson, Literate Design. In D.J. Benyon and P. Palanque (eds.), Critical Issues in User System Engineering (CRUISE), 227-248, Springer Verlag, London, 1996.

C.W. Johnson, Literate Specification, The Software Engineering Journal, 11(4):225-237, 1996.

C.W. Johnson, Proof, Politics and Bias in Accident Reports. In C.M. Holloway (ed.), Proceedings of the Fourth NASA Langley Formal Methods Workshop. NASA Technical Report Lfm-97, 1997.

C.W. Johnson, The Epistemics of Accidents, International Journal of Human-Computer Studies, 47:659-688, 1997a.

P. Ladkin, T. Gerdsmeier and K. Loer, Analysing the Cali Accident With Why?...Because Graphs. In C.W. Johnson and N. Leveson (eds.), Proceedings of Human Error and Systems Development, Glasgow Accident Analysis Group, Technical Report GAAG-TR-97-2, Glasgow, 1997.

L. Love and C.W. Johnson, AFTs: Accident Fault Trees. In H. Thimbleby, B. O'Conaill and P. Thomas (eds.), People and Computers XII: Proceedings of HCI'97, 245-262, Springer Verlag, Berlin, 1997.

Marine Incident Investigation Unit, Investigation into the Collision Between the Australian Bulk Ship River Embley and the Royal Australian Navy Patrol Boat HMAS Fremantle off Heath Reef at About 22:09 on 13 March 1997, Report 112, Australian Department of Transport and Regional Development, Canberra, Australia, 1997.

T.P. Moran and J.M. Carroll (eds.), Design Rationale: Concepts, Techniques and Use, Lawrence Erlbaum, Hillsdale, New Jersey, United States of America, 1995.

D. Norman, The 'Problem' With Automation: Inappropriate Feedback And Interaction Not Over-Automation. In D.E. Broadbent, J. Reason and A. Baddeley (eds.), Human Factors In Hazardous Situations, 137-145, Clarendon Press, Oxford, United Kingdom, 1990.

J. Reason, Human Error, Cambridge University Press, Cambridge, United Kingdom, 1990.