Originally posted on December 7, 2014 @ 1:11 PM
Subjecting and Objecting About Risk
I read with amusement the constant projection of safety activities as objective. This is one of the great myths of behaviourism (and BBS): that the observer and safety expert is somehow neutral in any process. It seems that pinpointing hazards is neutral, as if hazards are not associated with risk or human judgment. Similarly, it is assumed that ‘controls’ are neutral and objective in nature, just see the hazard and control it, as if human perception is not subjective or relative to context.
It is just as fascinating observing safety people argue about investigation methods on LinkedIn as if investigation were an objective and neutral process. We see collected all these arguments about whether ICAM is better than TapRooT, 5 Whys better than Fishbone, SCAT better than Root Cause Analysis, Fault Tree better than Tripod Beta, and so on. Of course each method assumes a methodology (philosophy), and each method has a hidden anthropology. There is no discussion about the biases of the investigator, the biases of context, the subjectivities of social constructs, or the presumed neutrality of the investigation methods themselves. It’s easy: just observe and report the facts.
We read this week that helmet sales in cricket shops have risen by 59% in the wake of the death of Phillip Hughes. This story demonstrates just how human judgment, availability bias and recency bias (helped by media preoccupation) work. People make decisions about risk primarily on feeling (see further Slovic, The Feeling of Risk) and through a range of unconscious biases. Dumb Down Safety would have us believe that all decisions about risk are rational, neutral and objective.
Rapid response decision making (heuristics) is usually done without any form of rational thinking about the trade-offs or by-products associated with that decision. Plenty of research has been undertaken to show that the wearing of helmets creates overconfidence and hubris, attitudes that are in themselves quite dangerous. The by-product of safety’s fixation with control is not the elimination of risk but the selective shifting of risk. Taleb calls this anti-fragility. The more safety drives fear, the more this will continue. The more safety drives the mindset of absolutes, the more delusional it will sound to the rest of the population. The more we buy the nonsense Bradley Curve of DuPont, the more safety will believe that natural instincts are evil. The more we ‘buy’ the anthropological assumptions of the DuPont Bradley Curve, the less anyone will even come close to a journey to world class safety. The by-products of the trajectory of the Bradley Curve are: the hiding of reporting, training-as-learning, shifting risk, selective counting and safety crusading against the sin of natural instincts. There is no maturity in the anthropology of the Bradley Curve, and without maturity there is no world class anything. Safety is not helped by its preoccupation with sexy but meaningless curves. See Sexy Curves and the Paradox of Risk.
In the case of the death of Phillip Hughes, he was not hit in the head but in the neck, and the resultant cranial bleeding was freakish, with a rare chance of recurrence. In the wake of this incident there have been calls to ban the bouncer, change the ball and bring in neck guards, a host of thinking anchored in non-rational decision making. This is the nature of human decision making about risk.
Whenever humans engage in an unconscious response to risk, there are unseen by-products that we only learn about much later. This was the case with the response to the attacks of 9/11. Shocked by 9/11, many people shifted their transport preference to travel by car, thereby increasing the road fatality and injury rate, along with the cost of the associated security.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3233376/
https://www.schneier.com/blog/archives/2013/09/excess_automobi.html
The recent video by Rob and Gab, From the Couch, introduces a few of the common biases that are natural to human fallibility. These biases and this fallibility are not a problem to ‘fix’. The idea that the nature of human fallibility (framed by safety as ‘error’) can be ‘controlled’, or that ‘choice’ can be eliminated, is simply determinist nonsense. Yet safety continues to talk nonsense that ‘all accidents can be prevented’ and ‘safety is a choice’, in total denial of reality and humanness. No wonder people think safety is dumb. The trajectory of the determinist nonsense of dumb down safety is humans as robots and androids. It sounds as crazy as setting a goal of zero deaths.
So, there are a few things safety can do to tackle the reality of fallibility.
1. The first thing safety needs to do is accept the nature of accidents (Hallinan, Why We Make Mistakes); a denial of fallibility and perfectionism are both delusional and mental health disorders (DSM-V).
2. The second thing is to reject the binary mindset that falsely links an acceptance of reality with fatalism. The less black and white safety gets, the more understanding will increase and judgmental crusading will decrease.
3. The third thing safety can do is drop the delusion of objectivity and work within the realities of subjectivity. The idea that people ‘choose’ to be unsafe is totally offensive and needs to be rejected. The judgmentalism by safety that everyone is an idiot and lacks common sense is also offensive.
4. The fourth thing to do is to reconnect safety to people and move away from all talk of objects and any neutral mechanistic talk about what safety is. Safety is not an engineering exercise but a human activity.
5. The fifth thing is to think of decision making in terms of by-products and resilience. We all share this short trip on earth called living; fear of living and learning is fear of life itself.
In speaking about science and engineers Bateson (Angels Fear, p. 15) states:
‘Their well-intentioned efforts usually do as much harm as good, serving at best to make conspicuous the next layer of problems, which must be understood before the applied scientists can be trusted not to do gross damage. Behind every scientific advance there is always a matrix, a mother lode of unknowns out of which the new partial answers have been chiselled. But the hungry, overpopulated, sick, ambitious, and competitive world will not wait, we are told, till more is known, but must rush in where angels fear to tread’.
We learned much about the subjectivities of science from Lakatos, Feyerabend, Polanyi and Kuhn in the 1960s.
Wouldn’t it be good if safety could finally drop the mythology of ‘safety science’?
Do you have any thoughts? Please share them below.