Risk Oriented Data Capture:
Staged Modelling and Sampling Solutions for Problems with Data Overload in Clinical Settings;
Parallels and Paradigms

John Arthur and Henry Wynn,
Warwick Risk Initiative, University of Warwick, England



Abstract

In successful incident reporting systems there is a problem in dealing with data volume. This is complicated because the high number of incidents is in a dynamic relationship with incident severity, (re)action feedback, ongoing data capture, data analysis and reporting level changes. Cause identification methodologies have sought to build core models of error. These are distinguished by interactive causalities, taking the exclusive emphasis away from human agents and sharing it with technological and organisational aspects. The problem of clinical incidents is further complicated by the dynamic nature of the clinical environment and its capacity to generate new incidents, e.g. the introduction of a new technology. This paper proposes that risk oriented data capture is a technique which suits the design of an incident capture system and provides a flexible and dynamic alternative approach to the problem of data volume. Parallels are drawn between this and two applied case studies. The operational definition of risk and its application to data capture and/or analysis is discussed. This is seen as a useful sister process to root cause identification, complementing it by offering the possibility of dynamic modelling of error and the streamlining of data analysis tasks.

Introduction

Incident reporting is an intense data-capture activity. Reporting should include near miss identification, dealt with in detail by Reason 1998 (ref. 1) but a universal idea. Also, if possible, some form of remedial strategy identification should be made. These three types of data provide the fullest possible picture. In any large setting such comprehensive activity, as well as being time consuming, creates a huge amount of data. This effort should be compensated by its results, which can be very effective in terms of increased systems knowledge and identification of the causes, and root causes, of errors, again a widely endorsed idea. Results have to be applied to existing systems, e.g. in quality management. Integration with existing systems is preferable to complete innovation outwith them, e.g. see Battles et al 1998 (ref. 2).

Incident reporting systems, if they are a positive experience for the source employees, can become a victim of their own success. By their nature they introduce instability: they generate an expectation both of their own perpetuation and, oftentimes, of a rolling review of practices. This is the necessary ‘teeth’ which promotes their validity to already busy people. The organisational "power-to-weight ratio" of the system is the key issue for their workload, maintenance and aforementioned integration. Efforts to reduce the weight (complexity) would be welcomed, but the problem is that much of the validity of the methodology relies on maintaining, or even expanding, its rigour. This can be seen in the proliferation of systematic models and taxonomies from different disciplines: Failure Modes and Effects Analysis, Influence Diagrams, Fault Trees, Hierarchical Task Analysis, Hazard and Operability Studies, Tripod Delta and the Medical Event Reporting System are a few examples. Thus reducing complexity without a loss of rigour leaves only a few avenues for development.

Why risk: The purpose of this paper is to outline some possibilities for these avenues. A central procedure is proposed, that of clarifying design issues of the data-capture, and then operating a continuous monitoring system. In putting forward such a scheme we will draw on related areas of modern management and statistically based quality improvement and, most importantly, risk-mediated design and data capture.

Risk must lie at the core of any approach because the long term aim is to minimise the risk in some safety-critical area. It is unfortunate but typical that the ‘risk(s)’ itself tends to remain at a high order conceptual level, rather than be distilled into operational definitions. The net result seems to be, in part, that risk management systems are viewed as an eventual goal or emergent property of the incident capture process. We would like to argue that it is at the operational level where risk is of most practical use, and that the control of operational risk should be an explicit precursor to the design of the incident capture. This is because too discrete a process, e.g. continuous single error/near miss identification, does not always lend itself to integrated actions. Too compound a process, e.g. strategic clinical risk management, may lack sufficient operational detail (although it draws its validity from another very important source). A simplified continuum can be proposed as shown in Figure 1:

Figure 1

The focus of immature error capture systems is often cause identification from incident reports which initially are somewhat ‘in a vacuum’. Many theorists concede that this is at least a mix of organisational factors in event reactivity, system proactivity, acceptability etc. As a system matures, its associated data usually becomes cumbersome, particularly if clarification processes are needed. It is at this point that the focus shifts to some form of risk based data handling model, however basic. We suggest that risk based modelling approaches should be used from the outset. Such models should identify the interaction between causes and operational risks, using these interactions, and not causes alone, to direct future data capture and analysis processes. The paradox is simple to sketch out: does one begin with a simple expedient system to capture errors and deal with the complexity problems of the data later, or does one begin with a complex system and refine it through controlled validation?

Data-directed versus action-directed data-capture:

Volume is not the sole data-related problem incident systems have. At this stage we will consider a continuum between data-oriented and action-oriented incident collection.

Data-oriented: Incidents may simply lead to a data exercise. No directed action per se occurs, but strategic level planning is greatly informed. Ideally this approach should be a concrete decision i.e. potential users are clearly identified and designed into the information form. The strategic level data use can sometimes be to compare centres or units under study on raw contours. This kind of broad-brush, multipurpose, data-collection has an important advantage in inclusiveness. This potentially captures the full nature of the process, if there is time to make use of the data. It is essentially a data mining approach: collect large volumes of data for several reasons and use part of it to assess or detect incidents. Examples abound: such as credit analysis by banks, trying to detect telephone fraud from telephone use data, detecting computer hacking from computer-use data or tracing cause of failure in automobiles from warranty data.

Action-oriented: In this case the aim is only to collect data specifically related to the error reduction actions to be taken as part of the intelligent monitoring highlighted above. Feedback is an essential element in this type of system. Typically this model has discrete well-defined user groups.

The data volume problem

Whether a data-oriented technique or action-oriented approach is chosen, they share a common medium term problem of data volume. Either system, if accepted, typically creates a lot of data. Kaplan et al 1998 (ref. 3) note in the case of blood transfusion:

"The goal of error management should be to increase error detection and the reporting rate."

There are however some complexities from an operational risk point of view in such a data hungry goal. It also deviates from one of the canons of quality management: decrease dependence on inspection and instead improve the process. Simplistically it would be more desirable for the number of reports to decrease due to error control. One needs to be able to detect whether such a drop is an actual reduction in incidents and not a loss of confidence in the reporting system. Kaplan et al (op cit) suggest that the relationship between detection and reporting has a severity axis. The "event severity level" is seen as ideally dropping whilst the reporting rates remain fairly constant. This may be a stylised view which does not take into account the operational value of continuous reporting of low level events which have no visible effect on systems. Nor indeed does it overcome data volume problems in large scale situations.
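One way to make that distinction concrete is to monitor reporting rates stratified by severity. The minimal sketch below uses invented monthly counts and a simple Poisson normal approximation (both are assumptions for illustration, not part of any published system cited above); a fall concentrated in the low-severity band, with the other bands unchanged, is more suggestive of reporting fatigue than of genuine error reduction.

```python
import math

def rate_drop_zscore(baseline_count, baseline_months, current_count, current_months):
    """Approximate z-score for a change in a Poisson reporting rate.
    Negative values suggest the current rate is below the baseline rate."""
    baseline_rate = baseline_count / baseline_months
    expected = baseline_rate * current_months
    return (current_count - expected) / math.sqrt(expected)

# Hypothetical report counts by severity band: 12 baseline months vs 3 recent months.
baseline = {"low": 240, "moderate": 60, "high": 12}
recent = {"low": 30, "moderate": 14, "high": 3}

for severity in baseline:
    z = rate_drop_zscore(baseline[severity], 12, recent[severity], 3)
    print(f"{severity:9s} z = {z:+.2f}")

# A sharp fall confined to the low-severity band, with moderate- and high-severity
# rates unchanged, is more consistent with reporting fatigue than with a genuine
# reduction in incidents.
```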

‘Common Sense’ Solutions to Data Volume Problems

If reducing the workload in incident systems becomes a priority, two common sense routes are popular: curtailing data and curtailing analysis. Curtailing the data is often seen as high risk. Later queries, unknown at present, are dependent on the availability of data. The received wisdom is that it is better to have too much data than not enough. This may lead to data stockpiling, which is very common in large companies in many areas, and is one reason for the rapid growth in data-mining technologies.

Data solution: This may be approached by reducing the data processing; we shall return briefly to the notion of cost in the final section. The data level can be controlled, for example, by maximising the use of numeric coding and minimising free text. The data entry may be accelerated through automatic options, e.g. Optical Character Recognition (OCR). However these simplification approaches have a cost in accuracy and meaning which creates problems for the necessary rigour.

Analysis solution: These seek to streamline the analysis activity. Analysis may be stratified. Checklist approaches are often seen as a simple error-free way to quickly process data. The data may then be revisited on a need basis in the future. However, with very large numbers of incidents, even this approach can be onerous.

Mixed solution: These compromise by using salience drivers, e.g. all incidents are given basic processing and some are chosen for more in-depth treatment. The choice may tend to be mediated by factors like novelty and repetition. Ultimately success is dependent on the intelligence of the database. That intelligence may initially be very neutral. Any approach to handling or modelling in incident systems with a pre-existing low level of data structure is a complex trade-off between time, costs and thoroughness.
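As a purely illustrative sketch of such a salience-driven mixed solution (the fields, weights and threshold below are assumptions made for the example, not values proposed here), a triage rule might combine severity, novelty and repetition to decide which incidents receive in-depth analysis:

```python
# Illustrative triage rule; field names, weights and threshold are assumptions.
def salience_score(incident, seen_categories):
    score = incident["severity"]                      # e.g. 1 (minor) .. 5 (catastrophic)
    if incident["category"] not in seen_categories:   # novelty
        score += 2
    score += min(incident["recent_repeats"], 3)       # repetition, capped
    return score

def triage(incidents, threshold=5):
    seen = set()
    routed = []
    for inc in incidents:
        s = salience_score(inc, seen)
        seen.add(inc["category"])
        routed.append((inc["id"], "in-depth" if s >= threshold else "basic", s))
    return routed

incidents = [
    {"id": "A1", "category": "labelling", "severity": 2, "recent_repeats": 0},
    {"id": "A2", "category": "dosing",    "severity": 4, "recent_repeats": 2},
    {"id": "A3", "category": "labelling", "severity": 2, "recent_repeats": 4},
]
for row in triage(incidents):
    print(row)
```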

Risk oriented solution: It is certain that data volume is a complex function of detection sensitivity, detection severity, feedback strength, actions, strategic purpose and a host of other organisational factors. It should be recognised therefore that incident capture already has a highly complex environment. It is crucial, once this is recognised, that the design of a system confronts complexity and does not roll out with an overly simplified aim of collecting and analysing incidents. Even with the more complex dynamics set aside, data volume increases the already high level of associated "intelligent administration". This workload is directly correlated with system scope, e.g. a regional pilot reporting system compared with a nation-wide version. Sheer volume of data not only pressurises efficiency and resources but creates emergent system risk from computational factors and human factors in data handling.

In the final section we will sketch in more detail a number of statistical and decision-theoretic tools which can form the basis of this risk-oriented approach. It should be noted however that any new methodology should be parsimonious with respect to the factors above. A simplification must relate to the entire life cycle of the incident capture system. Any methodology must realise that system success, reputation and continuous acceptance require a well designed maintenance strategy.

Risk Oriented: System or Data?

Solutions are required not only to deal with the data volume problem but to maintain the best possible practice in incident capture within the constraints of what is practical. It is rare that significant new resources will be made available to do this and so the problem is one of directedness and efficiency of processes. This is to say that the data collection ultimately has to be about managing operational risk in some systematic form. Therefore the data capture process should have an a priori risk orientation; this will direct data sampling through the species of modelling possibilities discussed later. It is not sufficient however just to get a different kind of data or even to model with it. Several human–systems paradigms are important to note as a source of fresh ideas. McCleod 1998 (refs. 4-5) gives a complex and fascinating account of an emerging distinction between systems (in his case machines) which give information and those which give advice. The error system one would ideally design for clinical situations would have some interactive notions, i.e. it would advise practice. He notes:

These advances include new methods of data filtering, data fusion, and aids to assist the human operator to interpret and associate the fused data within their operating environment e.g. Knowledge Based Systems (KBS).

In particular he wants to emphasise that

Increasingly, with programmable electronic systems, the determination of system hazards must consider the work of the designer, the system physical build, and the tasks of the human operator. Complex systems design should assist the human operator to avoid serious errors and to achieve recovery from errors. The issue is not just a matter of engineered reliability or human reliability.

Furthermore if the system, however technically sophisticated, is to be used by humans then the risks inherent in that very relationship, and how it is represented, are a key factor. Speaking of the dulling of creativity which levels of automation can bring to health care settings, Satchell 1998 (ref. 6) cautions:

While competencies and knowledge are as much issues in the medical profession as any other, health systems have generally shown little interest in their human resource, and have responded reactively to performance variation with overt behavioural change programmes. Task sharing technology struggles in this type of situation, and can amplify inappropriate behaviour as easily as appropriate behaviour.

The most obvious flaw in any data collection is a lack of orientation or, to put it another way, uncertain or crude design. In the case of blood safety in the UK, James 1997 (ref. 7) reports:

Despite a massive scientific literature which speaks to several distinct aspects of this topic – including disease prevalence and incidence and laboratory test performance – a simple and comprehensive conceptual model has not been articulated.

The collection of undirected data is not confined to clinical errors. Crossland et al 1998 (ref. 8) note in their survey of current practice in managing design risk:

It seems unfortunate that of the many companies who take the trouble to collect data, only very few collect quantitative data which allow for meaningful rolling-up of complex sources of risk

We would argue that McCleod’s advice-giving fused data, Satchell’s concern for the human element, James’ coherent model of safety and Crossland’s meaningful rolling-up are diverse examples of the need for a risk oriented data capture approach: in short, some pre-existing model at work a priori to data collection. Conventional cause identification paradigms use judgements about the frequency, severity or novelty of an incident. Whilst these are essential building blocks in understanding causes, something further is needed. There are two predominant issues, one ‘soft’, the other ‘hard’.

The next sections will deal with these two issues. Operational risk will be considered by briefly examining two real case studies of a ‘risk issue’ in keyhole surgery training. Guidelines for defining operational risk will be suggested. Risk oriented data capture models will then briefly be described and their applicability to incident capture discussed.

Risk

The methodology should consider the ‘live’ risks, coming from the incidents themselves, and the ‘inert’ risks, coming from the process of recording, encoding, analysing and classifying the incidents. Most importantly the risk in reacting to the incidents has to be uncovered. For this purpose it is crucial to be absolutely clear about what constitutes a risk and how it functions. As the following case studies show, vague definitions at an early stage can deeply compromise the design of coping systems and either fail to deal with identified risks or introduce new ones.

Case study one: some experience in the surgical area: One author was a researcher on The Minimal Access Therapy Training Initiative Evaluation carried out for the Dept. of Health 1996 – 1998. (ref. 9)

The use of training on simulations is a relatively recent development in laparoscopic surgery. It follows catastrophic mistakes in the late 1980s and early 1990s from training on live patients. Reviews around that time recommended error directed tuition in a safe laboratory environment, i.e. a reduction in the overall risk to patients from surgeons in training. This was an upstream reaction to root causes (human only), to wider organisational issues, to political issues and to ethics. Such training systems needed to be designed to:

Not introducing new problems was largely ignored, as the main ‘ethos’ of training remained the same. Whilst a list of technical risks (akin here to simple causes) was identified industriously, through ad-hoc analysis of expert knowledge, operational risks were not identified or acted upon. This was for a number of reasons including:

A comprehensive incident capture system in general surgery in the UK has never been attempted. There are enormous restrictions on the possibility in the US; for a good discussion see Liang 1997 (ref. 10). Capturing a list of technical incidents which must be avoided, although an immensely powerful tool for increasing safety, was not sufficient for adequate training system design. This is because it was more akin to curriculum development than to safety critical training. It still left untouched a whole species of errors in the more powerful human dynamics at the interface between current professional judgement and training. These were to prove paradoxical.

Logically it was the population of trainers, i.e. the experienced surgeons at the time of the errors becoming widely noticed, who were "responsible" for the problem. It was however the trainees, i.e. those surgeons not yet trained, upon whom the remedial actions were taken. The actions were mediated by the current practice and views of the experienced surgeons in two important ways:

Attempting risk oriented training design was an important step. In this case the risks (error possibilities) were articulated at too low a level, i.e. mainly practical intra-operative considerations. In consequence the risk (generalised ‘bad’ outcome) to patients from surgeons in training was reduced but by a far smaller increment than could have been expected from the rhetoric of the reform. Without identification, agreement and definition of observable risk sources as live behaviours, the interface between training and practice is still more dangerous than it need be. In consequence the whole profession is still under a great deal of pressure in the UK following recent unfortunate deaths.

Case study two: Experiences from simulation design in Arthroscopic surgery: The authors are working with the Sheffield University Medical Physics Department in the development of their Virtual Reality Knee Arthroscopy Training System.

The applied problem of a safety critical trainer for arthroscopy is the same as that in the previous case, namely, increasing surgeon safety and therefore increasing patient safety. Unlike the previous case this is an example of a technology-based solution to risk. Technology is particularly acceptable to the surgical professions because:

The model needed to create the simulator is far more complex than that needed to create training. However, it should be based on the same articulation of what the risks are and how they operate. The prevalence of compromise and trade off in this kind of application is very marked; for a discussion see Arthur et al 1998 and McCarthy et al 1998 (refs. 11-12). For example prototype testing has indicated a need for haptic feedback mechanisms which are enormously complex. So complex are they that the problem quickly and predictably shifts to a related technical problem of simulator capability. High levels of VR specification may be defensible for commercial, aesthetic, technical challenge and face validity reasons. However a design approach which refocuses on technology, away from the original risks, is a key risk driver for product failure. This is because, irrespective of the technological prowess of the simulator, the risks do in fact remain cultural, attitudinal, motivational and organisational issues. These define the imperative applied context of the final product. If the production of the error reducing system (trainer) is not cognisant of this it is in danger of introducing new latent risks into the organisation in a similar way to any other new or revised technology.

Points from the Case Studies

These examples illustrate that too small (error by error) and too large (quick fix?) definitions of risk muddy the waters when it comes to the design of a complex system. The efficacy of the risk reduction function of the initiatives was greatly decreased. The inevitable data overload problem associated with rigorous incident capture systems is another excellent example of such a problem. Re-casting the problem as a design issue creates an interesting paradigm shift for incident capture. It suggests one ought to treat it as any other design problem. In particular then it should pay attention to:

The Whole Human Context

Risk is a notoriously difficult term to define; the Royal Statistical Society abandoned lengthy attempts to do so. In practice however, all the information needed to define risk can be found within the behavioural language of an organisation. It is for this reason that we pursue the idea of operational risk definition. Since there is no parameter for the accuracy or purity of a risk definition, what really matters is that it makes sense, can be accepted and is agreed. Incident capture systems which create root cause taxonomies are an excellent starting point for an operational definition of risk. Some might suggest that root causes are the correct form of risk, but we would argue that there has to be a working model at the next level. This is based on synthesis between groups of causes and between those groups and the individual organisation members’ experiences and comprehension. Operational risks are a parsimonious world model of how risks behave which is particular to an organisation or even a part of it. To be operational, risk definitions have to fulfil a set of base criteria.

Fulfilling these criteria creates better building blocks for decision making in the light of understanding the causes of errors. This avoids naïve notions like ‘risk free’ working and embraces the idea of situation-inherent risks, observable risk tolerance and risk sharing/contracting. For health care professionals in particular such a model creates a structural basis for a host of tactical decisions about dealing with the causes of risk. This has hitherto been impeded by multiple, diverse, vague and even fashionable definitions of risk. Incident capture coupled to operational risk definitions creates the possibility of auditable decisions which are data driven (evidence based?) and operationally grounded.

A key problem to overcome, to achieve usable risk concepts like those above, is the uncertainty in handling large amounts of data which is about risky events and risk in events. The following sections will deal with this notion of the risk orientation of data gathering.

Risk Orientation Models

On-line off-line: risk management as control:

Among many models for risk management one of the oldest is control, e.g. see Hotelling 1947 (ref. 13), sometimes called automatic control. In this model the long-term strategy is to bring a process under control. In the controlled state the process can be maintained in stability with short-term feedback from observation to control action. Before the control system is set up there should be considerable off-line activity leading to the design and implementation of the control system. For example, the control system may incorporate some empirically validated model of the real-world process being controlled. Alternatively certain parameters (set points) may need to be set taking the particular situation into account, such as the nature of the batch or raw material in a plant.

Simplistically then, one can talk about the off-line, that is ‘design’, versus the on-line automatic control, that is ‘performance’.

Of course, life is not that simple. In a continuous incident reporting system there may not be an automatic control action. However there ought to be some types, or classification, of action which are laid down. This weaker, more manual, version of control is analogous to statistical process control, MacGregor 1994 (ref. 14). The rule there is to look for special (local) cause or common (root) cause. In the more sophisticated, "intelligent", versions one may do a special analysis to detect patterns of root cause and link patterns via database searches etc.

Such is the technology of, for example, multivariate control charting. There a number of performance characteristics are measured through time. Clearly one can monitor each one individually and take action when one or a number exceed their control limits. More sophisticated versions build multivariate boundaries so that when the whole vector of characteristics crosses the boundary, action is flagged. Over time the nature of the crossing can be linked to root cause. It is possible to do simultaneous statistical analysis in that, for example, one can decide to monitor a few principal components or other multivariate statistics. These can include not just means but also variation itself, by analogy with monitoring volatility in share prices.
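A minimal sketch of this multivariate charting idea, assuming three numerical performance characteristics and a Hotelling T² statistic with an approximate chi-square control limit (the data here are simulated purely for illustration), is:

```python
import numpy as np

# Minimal sketch of multivariate (Hotelling T^2) monitoring.  The in-control mean
# and covariance would come from the static/design phase; here they are estimated
# from simulated data purely for illustration.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(200, 3))            # 200 in-control observations, 3 characteristics
mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def t_squared(x):
    d = x - mu
    return float(d @ cov_inv @ d)

# Approximate upper control limit: 99th percentile of chi-square with
# 3 degrees of freedom (about 11.34).
UCL = 11.34

new_obs = np.array([0.2, -0.1, 3.5])            # hypothetical new vector of characteristics
t2 = t_squared(new_obs)
print(f"T^2 = {t2:.2f} ->", "action flagged" if t2 > UCL else "in control")
```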

Having identified the pattern of causes one then proceeds to mediate or eliminate it. It is this "intelligent monitoring" model of mediation or elimination, weaker mathematically than automatic control but stronger than simple control charting, that is needed. It is then clear that it is the risk metrics defining the link to risk reducing actions which will serve to strengthen the "intelligence" of the database.

 

Basic process for the design of an incident reporting system:

The static phase: The static phase should provide all the necessary information for the design of the dynamic phase and it is here that risk methodologies are crucial. It is not enough to design the dynamic phase on the basis of historical data alone. As explained, such data will usually not have been collected for this purpose. Rather, the static phase must be directed towards the design of the dynamic phase. It will usually be more data intensive, conducted over a short period and on one or only a few sites. It should seek at least to

Most importantly this phase will deliver a list of key risk features (drivers, attributes, indicators). These are to be monitored at the heart of the dynamic phase; in clinical incident capture that heart is the medium term data overload. Analysis will take advantage of established lead indicators, some of which may be proxies for others and reflect special and root causes. These lead indicators will have been derived from careful modelling during the first phase.
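The text does not prescribe how such lead indicators are derived; as one purely illustrative possibility, static-phase data could be used to rank candidate indicators by their association with a downstream serious-event flag. In the sketch below the feature names and the simulated data are assumptions made for the example:

```python
import numpy as np

# Illustrative ranking of candidate lead indicators from static-phase data.
# Field names and data are assumptions; the paper does not prescribe a method.
rng = np.random.default_rng(1)
n = 500
features = {
    "staffing_shortfall": rng.normal(size=n),
    "handover_delay_min": rng.normal(size=n),
    "equipment_age_yrs":  rng.normal(size=n),
}
# Hypothetical serious-event indicator, loosely driven by one feature.
serious = (features["handover_delay_min"] + 0.5 * rng.normal(size=n)) > 1.0

# Rank features by absolute correlation with the serious-event flag.
for name, x in sorted(features.items(),
                      key=lambda kv: -abs(np.corrcoef(kv[1], serious)[0, 1])):
    r = np.corrcoef(x, serious)[0, 1]
    print(f"{name:20s} |r| = {abs(r):.2f}")
```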

The design phase: The usual iterative steps amount to (i) capturing the specifications (ii) translating them into the high level functionality and thence to (iii) detailed architecture and prototyping.

Some of the above discussion can be seen as the beginnings of a specification. To be a little more comprehensive it is now possible to claim that the following are likely to be in the specification.

The actual functional design does not concern us here. There may be many.

Ways of meeting these specifications or criteria: Elsewhere (Arthur et al, op cit) we have argued that it is dangerous to allow the technology to drive the design. The technology should be the slave of the requirements. A technically smart method of data capture may not of itself meet data capture requirements in terms of the risk objectives, nor be itself risk free.

More interesting is the prototype or first version of the dynamic system. This is not the same thing as the static part of the methodology. To rush forward and test a prototype ahead of an initial phase assessment is dangerous, in the same way as running clinical trials on a drug before the in vitro or early phases have been gone through. A prototype could, for example, be a paper version to be tested before a computerised version.

Operational risk: running the dynamic part: The main risk indicators to be used in expanding from the initial phase and pilot to "real" operation will have been captured at the initial stage. Other events may have been foreseen, such as failures of equipment. These should be considered as part of a wider risk control methodology. For example the incident capture could be for surgery, but power failures would have their own contingencies.

The dynamic capture should be linked to a feedback system for correction of smaller items and swift action on larger items. The flavour of this on-line control system is that these procedures should be automatic and accepted. In quality improvement one is taught that in-line measurement connected to process control is to be preferred to end of line inspection, which requires cumbersome trace-back and is "after the horse has bolted". Thus, the accent is on continuous improvement of the process being measured. There may be unforeseen risks associated with the dynamic capture, and it may be that these arise in some aspect of acceptance.

Information and Risk

We now want to set up in more detail a model for understanding risk-directed data capture. Fortunately there are a number of areas in which the idea of directing attention to what "matters" features in some form. Early work in cognitive science argued that attention was directed because of the limited computational capacity of the brain. For a time attention was seen as one of the keys to the study of perception, Neisser 1967 (ref. 15). One of these early models, by Cherry 1953 (ref. 16), led to the growth of work in attention using an information-theoretic model. For a review see Allport 1989 (ref. 17).

An information approach assesses the quality of planned data collection, by the expected decrease in uncertainty, which is equal to the expected increase in information. This can be set up using the familiar Shannon information. Roughly, the hope is that the method of data collection will leave the scientist with most information in the sample and consequently least information in the un-sampled data. This is explained in recent work by the author (ref. 18). Important work in cognitive science is by Oaksford et al 1994 (ref. 19).
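As a small worked illustration of expected information gain (the two hypothesised failure modes, the prior and the observation likelihoods below are invented for the example), consider a planned binary observation:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Prior over two hypothesised failure modes (illustrative numbers).
prior = {"mode_A": 0.7, "mode_B": 0.3}
# Assumed probability that a planned observation comes back "positive" under each mode.
likelihood_pos = {"mode_A": 0.9, "mode_B": 0.2}

p_pos = sum(prior[m] * likelihood_pos[m] for m in prior)
posteriors = {}
for outcome, p_out in (("pos", p_pos), ("neg", 1 - p_pos)):
    post = {}
    for m in prior:
        like = likelihood_pos[m] if outcome == "pos" else 1 - likelihood_pos[m]
        post[m] = prior[m] * like / p_out
    posteriors[outcome] = post

expected_posterior_entropy = sum(
    p_out * entropy(posteriors[o].values())
    for o, p_out in (("pos", p_pos), ("neg", 1 - p_pos)))

print("prior entropy             :", round(entropy(prior.values()), 3))
print("expected posterior entropy:", round(expected_posterior_entropy, 3))
print("expected information gain :",
      round(entropy(prior.values()) - expected_posterior_entropy, 3))
```

The planned observation is valuable to the extent that the expected posterior entropy falls below the prior entropy; comparing candidate observations on this quantity is one way of directing data collection to where it reduces uncertainty most.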

Understanding that uncertainty, or entropy, is negative information, it is not too large a step to extend this idea to real risk. A simple example explains this. When crossing a road we look towards the oncoming traffic. A careful risk analysis explains why. If we look the wrong way then we remain by the roadside, because that is the best option whatever we see. If we look the right way then, if a vehicle is coming, we wait and are no worse off than in the other case. But if a vehicle is not coming we "gain" by being able to cross (and save time etc.).

This should make clear that the gain in directed attention, smart data collection, is not simply that we "see" the risk but that we have a better potentiality for action to mitigate it. In fact this is very close in spirit to the control paradigm we mentioned above. In control we take an action now taking into account future actions. This is the basis of dynamic programming. It is just that observation or data collection is a special kind of action.
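The road-crossing example can be written out as a tiny expected-loss calculation; the probability and the loss figures below are arbitrary assumptions chosen only to show the structure of the argument:

```python
# Expected-loss sketch of the road-crossing example.  The probability and loss
# values are illustrative assumptions, not taken from the text.
p_vehicle = 0.3              # chance a vehicle is coming from the relevant direction
loss = {"wait": 1.0,         # cost of waiting at the kerb
        "cross_safe": 0.0,   # cost of crossing when the road is clear
        "cross_hit": 100.0}  # cost of stepping out in front of a vehicle

# Look the wrong way: we learn nothing useful, so the only safe policy is to wait.
expected_loss_wrong_way = loss["wait"]

# Look the right way: wait if a vehicle is coming, otherwise cross.
expected_loss_right_way = p_vehicle * loss["wait"] + (1 - p_vehicle) * loss["cross_safe"]

print("look the wrong way:", expected_loss_wrong_way)
print("look the right way:", expected_loss_right_way)
# The difference is the value of the directed observation: it enables a better action.
```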

With this we can build a sound and practical framework for thinking about data collection and action in a single system. But there remains a problem, the solution of which is precisely covered by the static phase. The problem is that until we observe the system passively we are unable to model it sufficiently to understand the sources of risk (metrics) and thereby build our information/risk model. To summarise, our risk-directed data capture should be model-based. To complete the analogy, if we are landed in a country and do not know whether people drive on the left or the right, we do not know which way to look. Only by a prior stage of observation can we determine this (or ask where we are and draw on our knowledge).

We have stuck in this article with a two stage method, but of course one can continuously refine the model. Drawing on control terminology, such a system might be called adaptive. Even the two stage procedure is really adaptive, just a simpler version.

A more complex example can be taken from medical screening. A data collection decision in this context is when to screen. If it is too early there are few cases and needless cost. If it is too late then the medical and financial costs are greater. Again the time of screening is predicated on actual later action and complex consequent cost structures or risk.
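A stylised sketch of this screening trade-off (the onset distribution, the cost figures and the age range below are invented purely to exhibit the structure, not a clinical model) is:

```python
import math

# Stylised model of when to screen: screen too early and few cases are caught
# (needless cost, later cases present expensively); screen too late and the
# caught cases have progressed.  All numbers are assumptions for illustration.
SCREEN_COST = 50.0
LATE_PRESENTATION_COST = 5000.0     # cost of a case missed by the screen
PROGRESSION_COST_PER_YEAR = 400.0   # extra cost per year a caught case went untreated

def onset_probability(age):
    """Assumed (unnormalised) probability that disease onset occurs at a given age."""
    return math.exp(-((age - 60) ** 2) / 50.0)

ages = range(40, 81)
norm = sum(onset_probability(a) for a in ages)

def expected_cost(screen_age):
    cost = SCREEN_COST
    for onset in ages:
        p = onset_probability(onset) / norm
        if onset <= screen_age:                  # caught at the screen; later is worse
            cost += p * PROGRESSION_COST_PER_YEAR * (screen_age - onset)
        else:                                    # missed by the screen
            cost += p * LATE_PRESENTATION_COST
    return cost

best = min(ages, key=expected_cost)
print("lowest expected cost at screening age", best)
```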

Notice that in the screening case the cost of data collection (say for all women in an age-band) is pertinent. The analogy with incident capture is the cost, for example, of storing and using large databases. This is a good analogue of the computational models in cognitive science. Finally, one can turn the data-based issue onto itself. There is even a cost to "mining" the data. Thus the same principles which apply to raw data collection apply to mining. In fact data-mining for model building is already a form of directed attention: "I want to test this hypothesis so I will draw that data."

Conclusion

 

We have tried simply to draw on a range of ideas in and around the subject of risk to focus the activity of incident capture.

In particular we have suggested that one must neither over-conceptualise risk, that is to say make it too strategic, nor under-conceptualise it, that is to say make it less than tactical. The operational definition of risk is a complex task requiring a balance of qualitative and ethnographic data collection. This should have as its basis live incidents of precisely the kind incident capture systems uncover.

We suggested that incident capture suffers from known problems of data directedness and data volume. There are a range of possibilities which could be adapted to organise and lift it to a more active mode. Or rather to have a suitable combination of passive and active modes.

Data collection is a special form of decision making directed towards future action to control risk. Active definitions of data, and of risk, are essential clarifications to the varied settings of the world of clinical incidents capture.

References

1. Reason, J. Managing the Risks of Organisational Accidents 1998. Ashgate.

2. Battles, J.B., Kaplan, H.S., Van der Schaaf, T.W., Shea, C.E. The Attributes of Medical Event-Reporting Systems, Experience With a Prototype Event-Reporting System for Transfusion Medicine. Arch Pathol Lab Med, Vol 122, March 1998, pp. 231-238.

3. Kaplan, H.S., Battles, J.B., Van der Schaaf, T.W., Shea, C.E., Mercer S. Q. Identification and Classification of the Causes of Events In Transfusion Medicine. Transfusion, Nov/Dec 1998.

4. McCleod, I.S. Information and Advice: Considerations on their Forms and Differences in Future Aircraft Systems. In Proceedings of Global Ergonomics Conference, Cape Town, SA. Elsevier Science, Amsterdam, 1998.

5. McCleod, I.S. System Cognitive Function Specification: The Next Step. In Engineering Psychology and Cognitive Ergonomics Vol. 3 In press.

6. Satchell, P. Innovation and Automation 1998. Ashgate.

7. James, R.C. Blood Safety: A Conceptual Model. In Proceedings of Risk and Decision Policy Conference, Oxford 1997.

8. Crossland, R., McMahon, C.A., Sims Willis, J.H. Survey of Current Practice in Managing Design Risk. EPSRC Grant No. GR/L38745, 1998.

9. Fletcher, J., Arthur, J.G. Sutton, F., Szczepura, A. Minimal Access Therapy Training Initiative Evaluation, report prepared for the Dept. of Health Feb 1998.

10. Liang B.A. Legal Issues in Medical Care Risk Reduction. In Proceedings of Risk and Decision Policy Conference, Oxford 1997.

11. Arthur, J.G., Wynn, H., McCarthy, A., Harley, P., (1998) "Beyond Haptic Feedback: Human Factors and Risk as Design Mediators in a Virtual Reality Knee Arthroscopy Training System (SKATS)". Engineering Psychology and Cognitive Ergonomics Vol 3. In press.

12. McCarthy, A.D., Hollands, R.J., (1998) A commercially viable virtual reality knee arthroscopy training system. Medicine Meets Virtual Reality: 6, San Diego, U.S.A. pp. 302-308.

13. Hotelling, H., (1947), "Multivariate Quality Control", in Eisenhart, Hastay and Wallis (eds) Techniques of Statistical Analysis, (New York)

14. MacGregor, J., (1994), "Statistical Process Control of Multivariate Processes". Preprints of the IFAC ADCHEM '94 Conference on Advanced Control of Chemical Processes, May, Kyoto, Japan.

15. Neisser, U. (1967) Cognitive Psychology. New York: Appleton-Century-Crofts.

16. Cherry, E.C. (1953). Some experiments on the recognition of speech with one and two ears. Journal of the Acoustical Society of America, 25, 975-979.

17. Allport, A. (1989) Visual Attention. In Foundations of Cognitive Science. Cambridge, Mass: MIT Press (chapter 16).

18. Sebastiani, P., Wynn, H.P., (1998) Risk based Optimal Designs. In Atkinson, Pronzato and Wynn (eds) MODA 5 – Advances in Model Oriented Data Analysis and Experimental Design.

19. Oaksford, M., Chater, N., (1994) A rational analysis of the selection task as optimal data selection. Psychological Review, 101, 608-631.