A Critique of Pure Reason (With Apologies to Immanuel Kant)
Not a week goes by without someone suggesting that I should read James Reason, or that I haven’t read James Reason. I am also advised that I don’t read James Reason properly, because somehow all worldviews must be in agreement, particularly in safety. In the short history of safety, the works of Reason, particularly the Swiss Cheese metaphor (causation theory) and Human Error theory, have come to be treated as fact. Should one disagree with Reason’s theories, one obviously can’t read, or hasn’t read, Reason’s work properly.
It is interesting to note the way that Reason is used to invoke reductionist and mechanistic worldviews (I don’t believe this is his view). Such an understanding of Reason is selective and ignores a full reading of Reason’s work. For the purposes of this brief critique I will refer to a selection of Reason’s work to highlight this selectivity and how Reason is skewed by the mechanistic worldview as an apologetic for punitive and deficit safety.
The Swiss Cheese metaphor was first published by Reason in 1990 (yes, it is 25 years old). Like all metaphors and models it is a tool (not THE tool) to articulate a particular understanding. Like all theories and metaphors, it often hides an undisclosed methodology (philosophy) and anthropology (view of what it is to be human).
I have previously discussed the eight schools of thought in safety (https://safetyrisk.net/understanding-the-social-psychology-of-risk-and-safety/), endeavoring to demonstrate that each safety school of thought is underpinned by a particular worldview and semiotic. The work of Reason, Hollnagel and others is not neutral, and neither is mine. My bias emerges from a social psychological worldview and preferences a certain view of humans, relationships, community, dialectics and personhood. My preference finds the mechanistic, technicist and systemic orientation in safety unhelpful for understanding human judgment and decision making in risk.
What concerns me most about the dominant metaphors and models of safety in circulation (pyramids, curves, cheese, etc.) is that they endorse a deficit view of humans and risk (https://safetyrisk.net/safety-curves-and-pyramids/ and https://vimeo.com/124273239). Back to Reason’s cheese.
Reason discusses the many paradoxes in the discourse of risk and safety (http://safetyhub.co.nz/wp-content/uploads/2013/09/Safety-Paradoxes.pdf). In this paper Reason deconstructs the reductionist myths associated with the mechanistic worldview, pulling apart ideas that are commonly attributed to him. These are that:
1. Safety is defined and measured more by its absence than by its presence.
2. Defences and barriers may protect, but they can also cause unsafety.
3. Organisations try to constrain human variability to minimize error, but this creates fragility in the face of the unexpected.
4. An unquestioning belief in absolutes (zero) and associated semiotics can ‘impede’ the realization of safety.
Dekker has also deconstructed these same myths and beliefs in his book, Drift Into Failure (https://safetyrisk.net/getting-the-drift-dekker-on-safety/). Both Dekker and Reason argue that the better we understand these paradoxes, ‘the more likely we are to create and sustain a truly safe culture’ (Reason, 2000, p. 3). These same paradoxes are rarely recognized by the safety people and trainers who use the Swiss Cheese metaphor to understand causation. The outcome is a safety mindset that is fixated on barriers and seduced by deficit models of prevention. In this paper Reason tackles this mindset head on. Reason (2000, p. 4) states:
‘An organisation’s safety is commonly assessed by the number and severity of negative outcomes (normalized for exposure) that it experiences over a given period. But this is a flawed metric for the reasons set out below.’
Reason then goes on to outline the ‘tenuous’ nature of measuring ‘safety health’ by negative outcomes.
Reason then demonstrates how barrier thinking that is focused on enhancing ‘a system’s safety’ ‘can also bring about its destruction’. So, clearly Reason doesn’t seek to perpetuate ‘barrier thinking’. His discussion of the trade-offs and by-products associated with ‘barrier thinking’ is helpful and is echoed by Amalberti (2013).
Reason deconstructs the myth emerging from his work on human error and states (2000, p. 9):
‘Having mainly an engineering background, such managers attribute human unreliability to unwanted variability … What they often fail to appreciate, however, is that human variability in the form of moment-to-moment adaptations and adjustments to changing events is also what preserves system safety in an uncertain and dynamic world.’
In the same paper (2000) Reason then deconstructs the notion of target zero (a joint paper by Dekker and myself, Zero Vision and The Western Salvation Narrative, is yet to be released in Europe). Reason (2000, p. 11) calls target zero the creator of ‘a negative production model of safety management’. Whilst he does not take up the semiotic argument against absolutes and zero (semiotics is essentially sourced through social psychology, and Reason is not from this tradition but is rather a cognitive psychologist), Reason pulls apart the binary discourse of zero, stating (2000, p. 11):
‘An unquestioning belief in victory can lead to defeat in the “safety war”’ (not the language I would use, as military metaphors also endorse binary and simplistic views of safety).
In another paper published by Reason (with Hollnagel and Paries), Revisiting the Swiss Cheese Model of Accidents (2006), Reason deconstructs the Swiss Cheese model (2006, p. 2) so as to ‘hopefully prevent overly dogmatic implementations’. In this paper Reason shows his own historical evolution in the model. In some ways it could be argued that Reason, like many others, has mellowed and become less mechanistic with age. It is not uncommon to talk about the philosophy and work of the young and old Augustine, the young and old Luther, the young and old Mozart, etc. I think (following the guide of Gail Sheehy in Passages, Predictable Crises in Adult Life) we could easily talk about the young and old Reason. It is not hard to see Erikson’s model of life stages (http://www.intropsych.com/ch11_personality/eriksons_psychosocial_stages.html) in Reason. An examination of Reason’s work and the changes to his model across his books (1997-2008) supports this reasoning (sorry for the pun).
In the 2006 paper Reason explains that the Swiss Cheese is ‘only a model’ and can serve different purposes. He states (p. 9):
‘The SCM is a heuristic explanation device for communicating the interactions and concatenations that occur when a complex well-defended system suffers a catastrophic breakdown … In this regard it has proved very successful. It is a simple metaphor – easily remembered and passed on.’
Reason lists a number of criticisms of the Swiss Cheese model in the paper (2006), including critique by Dekker. Reason declares that ‘it was never intended to be a scientific model’. He particularly rejects the idea, which he attributes to the engineering worldview, that his model should be used for blame or for systemic root-causation theory. Reason (2006, p. 15) views the Swiss Cheese simply as a ‘model for communication’. He explains that he has revised the Swiss Cheese model in order to tackle the delusion of barrier thinking (p. 17). An excellent research project on the weakness of the Swiss Cheese model has been conducted by Perneger (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1298298/), entitled The Swiss Cheese Model of Safety Incidents: Are There Holes in the Metaphor?
In his book The Human Contribution (2008), Reason adds the image of a mouse gnawing at the cheese (p. 190) as a way of deconstructing absolutist and ‘scientific’ interpretations of the model. Interestingly, Reason concludes the book, like Dekker (2011, p. 121ff), with a call to understand resilience (pp. 237ff). Unfortunately, Reason, Dekker and Hollnagel associate the semiotic of ‘engineering’ with the notion of resilience.
In the Eurocontrol paper (2006) Reason deconstructs the reductionist view of accident causation (p. 17), including distancing himself from Heinrich’s Pyramid, and states (p. 18):
‘Adopting this view clearly defeats conventional accident models, according to which accidents are due to certain (plausible) combinations of failures.’
The view Reason discusses here is anti-reductionist; it counters the worldview that seeks to scientificise his model and use it as support for fault tree analysis (p. 18). Like Dekker (2011, p. 155ff), Reason recognizes the problem of ‘emergence’ and the importance of understanding the ‘confluences’ that occur. Although neither Dekker nor Reason states as much, much of their discussion (e.g. about paradoxes) implies that safety is a Wicked Problem (https://sia.org.au/downloads/News-Updates/Safety_A_Wicked_Problem.pdf). Unfortunately, neither Reason nor Dekker tackles the semiotic nature of their own discourse in their use of the language of ‘human error’. Interestingly, Hollnagel in the Foreword to Reason’s latest book, A Life in Error (2013), disowns the notion of ‘human error’ (‘Whether one likes the term “human error” or not – and I must admit to be one of those who would like to see it wither’).
Unfortunately, the discourse of ‘human error’ perpetuates the myth that somehow we now understand the logic of human decision making. The whole discourse of ‘unsafe acts’, ‘unsafe conditions’, ‘violations’ and ‘failures’ has also evolved in Reason’s work but is overall unhelpful. The discourse of ‘human error’ has led to flawed methodologies such as seeking out ‘damaging energies’ or ‘hazard hunts’, as if human social arrangements (social psychology) have little to do with decision making. Reason in The Human Contribution (2008) begins to call for a sense of balance in the interpretation of this ‘defensive’ deficit model (p. 102ff). He advises: ‘they too have their limitations when taken to extremes’.
So, the purpose of this article has been to show the way Reason has been misattributed and misinterpreted. It is clear that Reason struggles with the way his model is being used. In his writing he resists absolutes and the way his model has been applied absolutely. Despite this, he comes from the background of cognitive psychology, and we should know the foundations of that discipline (https://en.wikipedia.org/wiki/Cognitivism_%28psychology%29). I don’t share the assumptions of this worldview nor its trajectory.
The question now is: is there something that moves us forward in risk and safety beyond the constraints of the reductionist/engineering view embedded in the Swiss Cheese and human error models? I would suggest there is, but not in the space generally provided in a blog. There are many other ways of looking at the challenges of safety other than through process, systems and zero.
Sources
Amalberti, R., (2013) Navigating Safety, Necessary Compromises and Trade-Offs, Theory and Practice. Springer, France.
Dekker, S., (2011) Drift Into Failure, From Hunting Down Broken Components to Understanding Complex Systems. Ashgate, Surrey.
Perneger, T., (2005) The Swiss Cheese Model of Safety Incidents: Are There Holes in the Metaphor? BMC Health Services Research, 5:71.
Reason, J., (1997) Managing the Risks of Organisational Accidents. Ashgate, Surrey.
Reason, J., Hollnagel, E., and Paries, J., (2006) Revisiting the Swiss Cheese Model of Accidents. Eurocontrol, Brussels.
Reason, J., (2000) Safety Paradoxes and Safety Culture. Injury Control and Safety Promotion, Vol. 7, No. 1, pp. 3-14.
Reason, J., (2008) The Human Contribution. Ashgate, Surrey.
Reason, J., (2013) A Life in Error. Ashgate, Surrey.
Sheehy, G., (1976) Passages, Predictable Crises in Adult Life. Bantam Books, New York.