Works in theory but not in practice? Some notes on the precautionary principle.
Kenneth Calman1 and Denis Smith2
1 Professor Sir Kenneth Calman is the Vice Chancellor of the University of Durham. Until September 1998 he was Chief Medical Officer for England and Wales
2 Professor Denis Smith is Professor of Management at the University of Durham. From the 1st April he will be Professor of Management and Head of the Centre for Risk and Crisis Management at the University of Sheffield.
Notions of risk and probability have become dominant constructs within many of our discussions about modern living (see Beck, 1992; Erikson, 1994; Giddens, 1990) and yet they still prove elusive and emotive issues for policy makers to deal with (Smith and Toft, 1998). We should not be surprised, therefore, at the high level of concern that people have over the range and nature of the hazards that we face. In some respects, different generations have expressed similar concerns about hazard, albeit in different contexts and set against different standards of acceptability. For example, current debates about the risks associated with BSE and genetically modified foods can be seen to echo earlier debates on both pesticide use (Carson, 1962) and cancer risks (Steingraber, 1998). The literature on risk provides us with constant warnings of our plight from such hazards as global warming, toxic waste, nuclear radiation, genetically modified organisms and pesticides. What prompts such pessimism within the human condition, on the one hand, and yet allows such dire warnings to go unheeded, on the other, is a central question within risk debates. The origins of this paradox lie in a number of factors including: the voluntary or involuntary nature of the risk, the sense of helplessness felt by potential victims of the hazards, the delayed versus immediate effects of any exposure, and the manner in which the hazard (and the uncertainty surrounding it) is communicated to those affected. At the heart of risk debates often lies the central role of expertise and the prior knowledge that organisations and expert bodies have of the hazard.
It is clear from the range and ferocity of hazard debates that the various protagonists often take fundamentally different stances regarding the nature of the problems and their potential solutions. Indeed, some have argued that it is difficult to find a mutually accepted solution to a problem when the various actors in the debates define that problem in quite different ways (see Weick, 1988, 1993, 1995; Weick and Roberts, 1993). What invariably frustrates policy makers is the manner in which groups can express concern over one set of hazards whilst, at the same time, exposing themselves to potentially greater hazard through other activities.
The history of policy initiatives for risk management is littered with attempts to "educate" the public or, more recently, to "communicate" the risks more clearly to them in the hope that they will modify their behaviour accordingly. However, this approach might be seen to create a setting in which the views of experts are given greater weight than those of the groups who are exposed to the hazard. More recent research, however, sees the determination of hazard (and, by implication its acceptability) as a negotiated process between protagonists in the debates. This is particularly the case in those situations for which there is little or no a priori evidence for both the probability and consequence associated with the hazard. Inevitably, conditions will arise when it is not possible to find a compromise position between the protagonists, and the state (operating within its regulatory function) will need to intervene in order to prevent an activity from taking place. This use of a "precautionary principle" raises a number of key managerial and research issues. For example, to what extent should the state curtail the use of certain products or prevent certain activities when the hazards themselves are unclear and causal relationships are blurred? Put another way, do or should organisations act with sufficient caution in decisions which have a high implicit hazard potential but where the probabilities of occurrence are low or unknown?
Against a background of high profile public health scares, it is not surprising that the concept of risk has attracted increasing interest over the last few years. As part of this discussion, the term "precautionary principle" has been widely used to justify regulatory actions taken to protect the health of the public. The term, however, may be used in different contexts and with different implications. Despite its current popularity amongst policy makers, the concept has also attracted severe criticism. For example, Furedi (1997) has claimed that the principle "suggests that we are not merely concerned about risks, but are also suspicious of finding solutions to our predicament" (p. 9). Herein lies an apparent paradox. How do we ensure that our 'solutions' to problems do not in themselves create even greater hazards? This has created a sense of insecurity, which Furedi (1997) argues is typified by the number of health scares witnessed in recent years. If we are overly cautious in our approach to risk, then it is conceivable that our inaction may generate more significant hazards than the activity that we are trying to manage. Ultimately, this becomes a debate about the acceptability of a portfolio of risk, which is, by definition, more a socio-political problem than a technical-scientific matter. The issue here centres around full disclosure and informed consent, and requires that an active attempt is made by the risk generators to communicate in an open and effective manner.
The purpose of this paper is to explore the central dynamic of these relationships by reference to the notion of the precautionary principle and its role within public health policy-making. In particular, it seeks to address the following issues. Firstly, it seeks to clarify the concept of the precautionary principle within health and healthcare policy making. Secondly, it attempts to examine the central role of expertise within risk debates. Thirdly, it attempts to establish a framework within which policy making could better reflect the core elements of the precautionary principle. Finally, the paper aims to encourage further debate on the framework proposed, specifically as it applies to health and healthcare.
The Nature of Risk
There is little doubt that "risk" is a much misused and misunderstood construct. At its simplest, we can see it in terms of the probability that a given hazardous event will occur and that this event will have consequences which are deemed to be negative by some, or all, of those who are exposed to it. In such a context, it might be prudent to constrain the activity that gives rise to the hazard, especially if there is a lack of clarity over causality. Where the issue becomes blurred is in terms of those events in which both active intervention and inaction can give rise to a hazard. At what point here do regulators deem an intervention to be justified, and what form should this intervention take? At its crudest, this becomes a 'burden of proof' debate, with those who can muster the greatest evidence likely to win the argument. At its most complex, it becomes a socio-political conflict over the acceptability of risk, even under those conditions where the nature of the risk is not proven. This raises an important question, namely: to whom, and over what time period, is this risk (i.e. probability x consequence) acceptable? The acceptability of any risk will vary according to the derived benefit obtained by those who assume the risk, the range of alternatives available to them, the immediacy of the harm/benefit associated with the risk and the possibility of further corrective action to mitigate the consequences of the action. If we add to this the mediating role of expertise (in providing an evidential basis for the decision) and the inherent uncertainty in those cases where a precautionary principle is usually applied, then it is clear that there is considerable scope for ambiguity and conflict concerning the development of any precautionary policy.
Part of the problem here centres on the "trans-scientific" nature of certain problems (Weinberg, 1972), in which it is held that the issues go beyond the ability of science to resolve. It is the inability to predict the 'emergent properties' associated with processes, products and systems that creates the problems involved in managing the risks associated with such emergence (see Fortune and Peters, 1995). Within any scientific analysis, the notion of proof is a central underpinning concept. Without proof, decision-makers invariably factor other issues and considerations into the process. A central element in this process is, therefore, the notion of expert opinion. Further problems are caused by the weak evidential basis for determining emergence, causal relationships and their associated consequences, especially in those cases where there is little a priori data. For delayed-in-effect consequences (associated with exposure to a substance where the consequences are only negative) one might want to ignore expert opinion and apply the precautionary principle as a matter of course. Clearly in these cases, some form of "harm" is the only certain outcome, and policy makers might therefore ignore any derived benefits. However, some would argue that the power of interested parties can be an important dynamic in shaping the manner in which the precautionary principle is applied. This claim is given weight, for example, by the ability of the powerful tobacco lobby to resist stricter controls during the period in which the risks from smoking have been made known. This clearly raises questions of an ethical nature which can be contextualised within the broader literature on medical and business ethics.
Put simply, the precautionary principle is akin to Sethi’s (1975; 1983) notion of Corporate Social Responsibility in which organisations seek to anticipate the likely social demands placed upon them or, in areas of uncertainty, the possible impact of their actions on current and subsequent generations. To be effective therefore, the precautionary principle requires either an effective anticipatory regime of enforced regulations or fundamental changes made to the core beliefs, values and assumptions of organisations. Both strategies bring with them problems of efficiency and effectiveness. Regulation can only be effective if a strong programme of enforcement and appropriate penalties for violations accompanies it. Changes in the belief and value systems of an organisation are, without doubt, an effective means of ensuring that the precautionary principle is used as a matter of course. However, this is invariably a process that takes time to develop and which may fail if there are changes in the senior staff of the organisation. These are important problems of policy implementation, which demand attention from those who seek to introduce the precautionary principle.
The Precautionary Principle
By invoking the precautionary principle (as it is currently conceptualised) we generally envisage a move towards a situation where legislation is enacted to protect sections of society. This legislation may generate a situation which is:
However, one might also question the long-term effectiveness of such legislation as a means of controlling risk. Legislation invariably lags behind both social norms and scientific knowledge. For example, smoking tobacco (as a known hazard) could have been banned within any public spaces many years ago on the basis of our understanding of the risks of secondary smoking. A second failing of legislation is that it is invariably the product of a process of negotiation. All too often, the potential victims of the hazard have little, if any, input into that negotiation. One only needs to examine the development of environmental regulation to witness the impact that large corporate concerns can have on the regulatory process, both in delaying legislation and in reducing its severity (see, for example, Smith, 1990). Without doubt, the most effective way of dealing with issues of hazard is to ensure that the generators of that hazard behave in an ethical (or socially responsible) manner. Herein lies the paradox. Invariably, it was a failure of organisations to curtail their hazardous activities that resulted in the legislation in the first place. Ensuring that the necessary behavioural shifts take place within organisations, and that these are sustained, is perhaps one of the most difficult and challenging of the managerial issues surrounding the use of the precautionary principle.
Given these issues, what then is the role and significance of the precautionary principle? How does it operate in practice? Why in some instances do we legislate and in others only give advice, hoping that organisations and individuals will change their behaviour? If society's core purpose is to protect the health and well-being of the population as a whole from some risk or threat to health, then society also needs to decide on the nature of the precautionary principle framework within which it is to operate. Policy makers need to identify which risks should come under the purview of the principle and which should not. Of particular importance here is the clear articulation of the reasons why certain forms of hazard would be exempt from the principle. The manner in which we communicate concerns about hazard is important (see Committee on Risk Perception and Communication, 1989), as is the ethical stance that we take with regard to issues of full disclosure and informed consent. It is this aspect of the problem which lies at the heart of the decision-making process and which raises the key questions within risk debates. Who should decide? On what evidence? How can potential victims object to the hazard, and what happens if they disagree with the mediated response? The following sections try to clarify some of the issues involved and then suggest a way forward.
Making sense of hazard: the role of expertise
"…it is hard to make common sense when each person sees something different or nothing at all" (Weick, 1993, p. 636).
It could be argued that one of the primary functions of a manager is to cope with uncertainty. If there were no uncertainty in life, there would be no need to make decisions and, therefore, little need for managers. When dealing with issues of hazard, emergence and complexity, this process of managing uncertain outcomes assumes a position of critical importance. Given the role of risk within the management process, one might expect it to occupy a position of great importance at the strategic levels of the organisation. In reality, however, risk is often relegated in importance within organisations, with managers showing greater interest in short-term benefits than in long-term harm. The principal issue invariably becomes one of the centrality of technical expertise, especially when there is little a priori data upon which managers can base their decision-making. For delayed-in-effect hazards the problem becomes further blurred, as cause and effect relationships are not always clearly defined or, in some cases, understood. It is here that the precautionary principle should assume a critical role, and yet it is often neglected in favour of expert opinion.
There has been considerable discussion relating to the role of experts within policy-making (see, for example, Collingridge and Reeve, 1986; Blowers, 1984) and, in particular, for issues of hazard (Draper, 1991; Irwin and Wynne, 1996; Irwin, 1995; Smith, 1990). Collingridge and Reeve (1986) have argued that the use of institutionalised science within scientific conflicts may not lead to a resolution of that conflict. Indeed, they have gone so far as to say that in highly emotive debates the use of science by all parties to the conflict will merely heighten the intensity of that debate. Ultimately, those who can bring the greatest power (and influence) to bear on the debate are likely to be able to exert control over the scientific arguments as well. This is especially the case where the trans-scientific nature of the problems ensures that there is no clear burden of proof, almost irrespective of the "quality" of the scientific expertise used. In examining debates involving major hazard sites at Canvey Island, Smith (1990) suggested that the success of scientific interventions may well be determined by the economic power exercised by protagonists in the conflict. What emerges from this body of literature is the need to ensure that the regulatory agencies are not unduly influenced by "the national interest" and that they are sufficiently precautionary in their approach to trans-scientific problems of hazard.
Risk, Precaution and the Management of Hazard
A central facet of risk debates is the notion of causality – more specifically, whom can we blame for the realisation of the hazard? Once again, the attribution of causality sees the expert occupy a central role. Invariably, the concerns of the public centre on the consequences associated with a hazard, whereas the "expert" community may well focus their attention on the probabilistic component of risk. In both cases, attention is focused on causal relationships, although the policy outcomes associated with each perspective may be different. However, clear causal relationships can break down when we move our research out of the laboratory into the "real world" (see Neisser, 1980). Debates concerning risk do not take place in the objectively driven laboratory, but in the value-laden ambiguity of a societal context where notions of categorical proof are elusive (Weinberg, 1972). Consequently, we might not expect science to provide a clear solution to these problems. This, in turn, has an impact upon policy making and requires that greater caution be employed in those circumstances where there is a high level of hazard and uncertainty. The purpose of invoking such a precautionary approach is to prevent a negative event from occurring which might be harmful to health. This intervention may be achieved either by restricting behaviour or by changing patterns of behaviour so as to minimise exposure to the hazard. It is here that the application of the precautionary principle should be straightforward, and yet it is also here where it seems to cause the greatest difficulties. In particular, the use of the principle raises a fundamental question. At what point does an organisation or the State assume a moral right to impose such changes on a wider population, when the evidential basis for the decision is weak? Health care provides a number of examples for the discussion of the principle as it is used in practice.
Most surgical operations are associated with risk, yet, except in a few instances (cardiac and liver transplantation), there are few major restrictions upon the actions of those carrying out the procedure. Indeed, the whole practice becomes contextualised within the notion of professional judgement and ethical behaviour. In the absence of strict protocols, clinicians work within the boundaries of what they deem to be their professional judgement, and this has raised quite important questions about risk within clinical governance and evidence-based health care (see Smith, 1998; Treasure, 1998; Walshe and Sheldon, 1998). Pharmaceuticals, on the other hand, are different. Here, the Regulatory Authority has the power to license, or de-license, a drug. The latter may be done on a "precautionary basis" because of possible side effects or adverse consequences. The question here is whether the drug should be licensed or removed from the list of prescribable drugs. Thus the use of the drug may be banned completely, or it may remain prescribable. In both cases, the decision is made in the knowledge that the drug does have adverse consequences, and therefore, when it is used, it must be with the patient's informed consent. In those cases where the drug is used, the benefit is seen to outweigh the disadvantages. Much of the debate in such cases is on the nature and strength of the evidence available and around the nature of informed consent. At one level, this distinction may seem bizarre. However, it is clearly a reflection of the nature of an acceptable level of risk. Within non-elective surgical operations there may be a high probability of an adverse consequence, but this has to be set against the likely consequences of no intervention (which may result in a more chronic form of hazard) and the extent of the impact on the wider population (usually limited to a small number of individuals at any one point in time).
In contrast the use of pharmaceuticals may adversely affect a considerable percentage of the population and its effects may be delayed in effect, thus adding a spatial and temporal dynamic to the problem. Under these circumstances, one might argue that the use of the precautionary principle is more straightforward and yet this issue is also clouded by problems of informed consent and evidence-based policy making, making it a burden of proof debate.
A brief consideration of some examples shows that the current operation of the principle is complex. For example, invoking the principle does not seem to be related to the level of risk. If this were the case then cigarettes would be banned. Nor does it seem to be related to the numbers of people involved. If this were so then the very small number of people with peanut allergy would not receive such "disproportionate" attention. Nor does it seem to be related to the needs of the majority of people. If this were so then the 25% of people in this country who smoke cigarettes would not be allowed, in many instances, to pollute the environment and cause illness in the majority. The freedom of choice of one person inevitably seeks to override the freedom of choice of another. In order to frame the boundaries of this debate it is possible to identify three major issues which seem evident when thinking about the precautionary principle. These can best be expressed in the form of questions.
1. What is the level of certainty of the risk, and how strong is the evidence?
This is a key issue within all risk debates. Should action be taken on the basis of a "hypothetical" risk, for which sound evidence is not available, even though the risk is plausible and has been identified by someone? What if two or more reputable groups or individuals dispute the evidence? What if a single-issue group takes up a particular cause and cannot be convinced by the evidence and expert opinion of the level of risk? This is a time when careful and dispassionate consideration of the evidence is required, and such consideration should include non-experts in the discussion (see Sheldon and Smith, 1992).
There is a further and crucial aspect of this discussion, and that relates to the level of uncertainty of hazard. This is probably the most difficult situation of all. In this case there seems to be a real hazard, but it is unclear what the probabilities of its occurrence or the consequences associated with it might be. There is no way of knowing, other than by carrying out more research or waiting a period of time. Indeed, many hazards are not quantifiable and the human effects of exposure are just not known. In such an instance, invoking the precautionary principle is a matter of very difficult judgement and may have profound consequences for little or no benefit.
The role of expert scientific committees might be raised here. Their function is to examine critically the evidence available and to determine the nature of any possible risk. Government-appointed expert committees are composed of independent scientists and non-scientists and are an important part of the decision-making process. They may wish to suggest action or make recommendations. The Government may accept the advice or reject it. However, it should be emphasised again that this is only one part of the decision-making process; other implications will also be considered by ministers. The process is ultimately a political one, and the problem occurs when it is portrayed only as a scientific (and therefore rational) process.
2. Does the individual have any choice in whether he or she is subjected to the risk?
Once again this is a crucial question within a democratic society. It might be argued that if one knows the nature and probability of the hazard, and can make an informed choice about it, then there would be no need to invoke the precautionary principle. In general this is the case. For example, the evidence that exercise improves health is clear but there is no case for legislation to make keep fit classes compulsory. In a similar way, if a pharmaceutical product is associated with adverse effects it might be sufficient to ensure that the professionals concerned have sufficient knowledge to use the drug appropriately, with the patient's informed consent, without regulation being required. But can this generally be said to be the case? For example, in some instances additives are put in foods to preserve them without effectively communicating that fact to consumers, and some foods are withdrawn from all customers because of a known risk, thereby removing choice for the majority. Whilst in most instances, the ability to choose makes the use of the precautionary principle unlikely, this is not universally the case. In particular, it may still be invoked when there is considerable uncertainty about the level of risk. It should be noted that choices made by patients or the public about risks to health cannot always be readily predicted. Much depends on the perception of risk and the manner in which it is communicated. This has a major impact on the nature of informed consent and its effectiveness (see, Treasure, 1998).
3. What is the magnitude and acceptability of the risk being considered?
While this is related to the first question, it has a separate locus. Assessing the magnitude of a hazard always involves an element of judgement, and this has three components: the seriousness of the hazard, the immediacy of harm, and the number of people affected. These three factors operate independently but can be synthesised to give an estimate of the magnitude of the consequences. For example, suppose that we know that a chemical carries a known level of risk, let us say that one person in one thousand is affected. The effect is serious as it causes a disabling condition, but the number of people exposed is small, only a few hundred. What would the significance of the risk be in this circumstance? This is then complicated by the fact that those exposed may, or may not, have a choice over the known pathways of exposure to the chemical. However, if those exposed to the hazard are not made fully aware of its dangers then the issue becomes further complicated. Ultimately, a decision has to be made about the acceptability of the risk. The question is who should make that decision?
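As a purely illustrative calculation, the hypothetical figures above can be combined into a crude estimate of magnitude (taking "a few hundred" to be 300, an assumption made only for the sake of the arithmetic):

```latex
% Illustrative only: figures assumed from the hypothetical example above
\text{expected cases} \;=\; N_{\text{exposed}} \times p
\;=\; 300 \times \tfrac{1}{1000} \;=\; 0.3
```

An expected value of well under one case does not, of course, settle the question of acceptability: the seriousness of a disabling condition, the immediacy of harm, and whether exposure is chosen or imposed must still be weighed alongside any such number.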
Making the Decision
Having briefly considered these three questions, the final step is to make a choice between alternative options. Have the evidence, the question of choice and the magnitude of the risk persuaded policy makers that there is sufficient concern to take action? Perhaps it is not worth making a fuss about something that may have little impact on society, particularly where the activity in question has enormous benefits. But who is to judge? It is this question, more than any other, which proves to be contentious. What right does an organisation, or the state itself, have knowingly to expose individuals to risk without their fully informed consent? In essence, the critical phrase is "fully informed consent". How can we have informed consent when we do not fully understand the nature of the hazard and its probabilities?
Ultimately, those who legislate have to come to a decision about a particular problem based on all of the information available to them, which in most instances should be more than the scientific evidence. As has been said, this will invariably be a matter of judgement and perhaps what matters most is the transparency of both the approach to the decision and the evidential base upon which the judgement is made. This must include access both to the scientific evidence (for and against) and the nature of any other views expressed. A failure to provide such open access will make the decision process appear one sided.
Answering the three questions proposed earlier in this paper should go some way to reaching a more logical and reasoned decision-making process over issues of hazard. However there remain three problems which can arise and which need further discussion and research.
1. Continuing discontent. It not infrequently happens that an issue is raised, is fully debated and all the evidence assessed, and a decision is made which is still not to the liking of those who originally raised concerns about the matter. For example, a chemical is considered to be dangerous and harmful to human health. It is fully assessed, found not to have the level of harm suggested, and given the "all clear". Yet the criticisms persist. Are they right? Should the debate be opened again? Is a conventional approach to the use of expertise missing something? Is the uncertainty in the judgement too great? There is no simple answer to these questions, and the answers are likely to be very topic specific, but it is suggested that research needs to be carried out to determine a way of taking this forward.
2. The timing of the decision. For a new procedure or chemical, there should normally be time to make decisions about the action to be taken, though this may not always be the case with an infection or acute toxic problem. However, decision taking is even more difficult if the source of the hazard is already in use. In some instances the evidence is clear and an immediate ban is obvious and welcomed. But suppose the evidence is uncertain or even disputed, what should happen then? Do we wait until more evidence is available? In circumstances of choice can we rely on the information being readily available and acted on? How long should we wait before making the decision or taking steps to generate more research? Once again there is a need to consider the three issues concerning the nature of evidence, choice and probability/magnitude of hazard before proceeding to take action to protect the health of the people in a compulsory way.
3. The role of the scientific expert or committee in policy making. In this country we are very fortunate in having available scientific expertise of the highest calibre. However, as has been discussed here and elsewhere (see Collingridge and Reeve, 1986; Smith, 1990), decisions on risk are often made on the basis of factors other than scientific ones. In addition, scientists may feel uncomfortable taking the uncertainty presented by complex and difficult science and converting it into policy with confidence and assurance. Yet the public may need a clear statement of the position if the precautionary principle is to be invoked (or not) and legislation enacted to back up the decision.
The role of the scientific committee is thus to set out the evidence, including the uncertainty; it is then for policy makers to make the final decision. Senior advisers and Government officers, such as chief scientists and chief medical officers, provide the bridge between the two, interpreting and evaluating both the science and the policy. But what happens if there is a disagreement between scientists and policy makers? In many instances this will not matter; after all, these are issues of judgement and there is room for differences of view. In some instances, however, the differences will be so great (where it is considered that public health will be compromised) that it is untenable for the scientist or adviser to remain in post, and resignation is the only answer. This is a rare occurrence, but it is a necessary part of the procedure.
Finally, we return to the question of who should make these decisions. If it is accepted that invoking the precautionary principle leads to legislation, then the decision cannot be left to scientists or to special interest groups. It must be seen to be open and fair, and part of the democratic process; it is the responsibility of the legislature, the policy makers and the politicians. Those who advise must present the arguments impartially and clearly, marshalling the available evidence and indicating where it is weak or uncertain. It should be remembered that the purpose of this whole process is to protect the health of the public.
Beck, U. (1992) Risk Society: Towards a New Modernity. London: Sage.
Blowers, A. (1984) Something in the Air. London: Paul Chapman Publishing.
Calman, K.C. (1996) "Cancer: Science and Society and the Communication of Risk", British Medical Journal, 313, pp. 799-802.
Carson, R. (1962) Silent Spring. Boston: Houghton Mifflin.
Collingridge, D. and Reeve, C. (1986) Science Speaks to Power. London: Frances Pinter.
Committee on Risk Perception and Communication (1989) Improving Risk Communication. Washington, DC: National Academy Press.
Draper, E. (1991) Risky Business: Genetic Testing and Exclusionary Practices in the Hazardous Workplace. Cambridge: Cambridge University Press.
Erikson, K. (1994) A New Species of Trouble: Explorations in Disaster, Trauma and Community. New York: Norton.
Fortune, J. and Peters, G. (1995) Learning from Failure: The Systems Approach. Chichester: Wiley.
Giddens, A. (1990) The Consequences of Modernity. Cambridge: Polity Press.
Irwin, A. (1995) Citizen Science. London: Routledge.
Irwin, A. and Wynne, B. (Eds.) (1996) Misunderstanding Science? The Public Reconstruction of Science and Technology. Cambridge: Cambridge University Press.
Neisser, U. (1980) "On 'social knowing'", Personality and Social Psychology Bulletin, 6, pp. 601-605.
Royal Society (1992) Risk: Analysis, Perception, Management. London: Royal Society.
Sethi, S.P. (1975) "Dimensions of corporate social performance: An analytical framework", California Management Review, 17(3), pp. 58-64.
Sethi, S.P. (1983) "A strategic framework for dealing with schism between business and academe", Public Affairs Review, pp. 44-59.
Sheldon, T.A. and Smith, D. (1992) "Assessing the Health Effects of Waste Disposal Sites: Issues in Risk Analysis and some Bayesian Conclusions", in Clark, M., Smith, D. and Blowers, A. (Eds.) Waste Location: Spatial Aspects of Waste Management, Hazards and Disposal. London: Routledge, pp. 158-186.
Smith, D. (1990) "Corporate Power and the Politics of Uncertainty: Risk Management at the Canvey Island Complex", Industrial Crisis Quarterly, 4(1), pp. 1-26.
Smith, D. and McCloskey, J. (1998) "Risk Communication and the Social Amplification of Public Sector Risk", Public Money and Management, 18(4), pp. 41-50.
Smith, D. and Toft, B. (1998) "Issues in Public Sector Risk Management", Public Money and Management, 18(4), pp. 7-10.
Smith, R. (1998) "Regulation of doctors and the Bristol inquiry: Both need to be credible to both the public and doctors", British Medical Journal, 317, pp. 1539-1540.
Steingraber, S. (1998) Living Downstream: An Ecologist Looks at Cancer and the Environment. London: Virago.
Treasure, T. (1998) "Lessons from the Bristol case: More openness – on risks and on individual surgeons", British Medical Journal, 316, pp. 1685-1686.
Walshe, K. and Sheldon, T. (1998) "Dealing with Clinical Risk: Implications of the Rise of Evidence-Based Health Care", Public Money and Management, 19(4), pp. 15-20.
Weick, K.E. (1988) "Enacted sensemaking in crisis situations", Journal of Management Studies, 25, pp. 305-317.
Weick, K.E. (1993) "The collapse of sensemaking in organizations: The Mann Gulch disaster", Administrative Science Quarterly, 38, pp. 628-652.
Weick, K.E. (1995) Sensemaking in Organizations. Thousand Oaks, CA: Sage.
Weick, K.E. and Roberts, K.H. (1993) "Collective minds in organizations: Heedful interrelating on flight decks", Administrative Science Quarterly, 38, pp. 357-381.
Weinberg, A.M. (1972) "Science and Trans-Science", Minerva, 10, pp. 209-222.