The Triarchic Mind, Risk and Safety
So much of how we think about decision making is centred on the brain. The common misconception is that the brain is a mainframe computer that directs our actions; this is how binary reductionist thinking operates. If the human will is centred in the brain, then 'safety is a choice you make'. Unfortunately, that's not how humans make decisions. The reductionist tradition is simply a construct of a worldview founded in Descartes and Newton, as Dekker so clearly illustrates in 'Drift into Failure'.
Human thinking is not just fast and slow; it is not binary and certainly not black and white. Nørretranders's research in 'The User Illusion' makes it clear that the unconscious processes many millions of bits a second while the rational mind manages only a handful of bits a second. Rational thought is slow, non-rational thought is superfast, and there is everything in between. We know about this unconscious activity first hand through our dreams, daydreams and the ideas that seem to rush into our minds without control. The mind is very different from the brain.
It seems somewhat scary to some that there are things beyond our control. If any one word dominates the safety industry it is this word: 'control'. What distresses safety most of all is when things happen and all the controls in place didn't work. We would do well to take 'A Brief Tour of Human Consciousness' with Ramachandran.
The history of technology is also the history of complexity. We are now so interconnected that one trigger can create an avalanche of unintended by-products. Networks are so complicated that computers can now put the stock market into a tailspin in seconds, minutes before a human can intervene. How strange that business is pushing for autonomous cars when the thing that drives chaos in the market is computers acting in automaticity. In our dissatisfaction with human automaticity we are simply shifting automaticity to computers. Yet the fundamental agility of human judgment and decision making can never be overtaken by a computer unless that computer has all the autonomy of a human. Somewhat of a contradiction, really.
Of course, the real enemy of the self-driving car is the human. It seems the adaptability and unpredictability of human life cannot be sustained by pure rule compliance (http://www.safetydifferently.com/rules-who-needs-them/). The challenge is: how can we give self-driving cars the mind of human decision making? How can we solve the apparent weakness of human automaticity by making computers more human and automatic?
All of the religions of the world understand that human decision making is much more than brain activity. All in some way acknowledge that humans have at least three centres of mind, not one. I illustrate this often in my presentation on One Brain Three Minds (https://vimeo.com/106770292) and in the illustration attached.
This idea of three or more centres of decision making was discussed recently in the New Scientist. Sternberg also explores it in his educational work 'The Triarchic Mind' (https://en.wikipedia.org/wiki/Triarchic_theory_of_intelligence).
The reality is that a host of non-rational factors influence decision making, especially how humans organize themselves through social arrangements. It seems at times that many of these things 'emerge' and are beyond control. High risk things mostly emerge in the workplace because safety is consumed with a binary and behaviourist attribution in the management of risk. The idea of human and social holism is completely missing from the discourse of the Safety industry. Culture is not understood as 'the collective unconscious' (Jung, https://en.wikipedia.org/wiki/Collective_unconscious) but rather as the assembly of rational decisions based on values, habits, systems and behaviours. If we want people to be more risk intelligent we have to step beyond this rationalist binary projection as an explanation of why people do what they do.
Humans are not just the sum of inputs and outputs. Decision making is not just something controlled in the brain. Human decision making is not just 'fast and slow'. Humans are not just 'predictably irrational'. Ethics is not some black and white sifting process. Risk is a subjective process, not a forensic one, just as love, hope and faith are not something a computer can ever 'feel' nor something humans can ever measure.
Unfortunately for those seeking total control in safety, no amount of matrices, colours or mythical coloured calculators adds squat to how we make decisions about risk. If anything, they simply underscore the absolute subjectivity of ALARP and due diligence. It is only the binary safety world that wishes decision making were rational and mechanical, because then it could get to zero.
Perhaps it would be good to leave the last comment to Bateson:
‘It seems that every important scientific advance provides tools which look to be just what the applied scientists and engineers had hoped for, and usually these gentry jump in without more ado. Their well-intentioned (but slightly greedy and slightly anxious) efforts usually do as much harm as good, serving at best to make conspicuous the next layer of problems, which must be understood before the applied scientists can be trusted not to do gross damage. Behind every scientific advance there is always a matrix, a mother lode of unknowns out of which the new partial answers have been chiseled. But the hungry, overpopulated, sick, ambitious, and competitive world will not wait, we are told, till more is known, but must rush in where angels fear to tread’. (Bateson and Bateson, 2005, ‘Angels Fear: Towards an Epistemology of the Sacred’).
Do you have any thoughts? Please share them below.