It’s such a funny (peculiar) situation. As fallible people we develop routines, habits and heuristics in order to make life and living efficient, and yet there is so little discussion in the safety world about the importance of understanding implicit knowledge. The worldviews of engineering and regulation, so typically resourced in a review on safety, have next to no expertise in the area, hence the ideas of routines, habits and heuristics (implicit knowledge) are never discussed or understood in the industry. The Brady Review is a classic example of this (https://safetyrisk.net/brady-review-nothing-new-no-way-forward/).
When we develop routines, habits and heuristics, they disappear from consciousness. That is the purpose of implicit knowledge: to help fallible humans be efficient, unconsciously. The purpose of routines, habits and heuristics is to stop thinking! If humans had to process everything cognitively on a minute-by-minute basis, we would get about one task done per day. When we make knowledge tacit/implicit (https://infed.org/mobi/michael-polanyi-and-tacit-knowledge/) we cease to know that our unconscious is ‘thinking’ for us in a certain way, or why it is ‘thinking’ in such a way. Most of the day humans perform multiple, complex and intricate tasks with precision because of tacit knowledge. Everything goes well when everything remains steady. Most often things go pear-shaped when we act in routine, habit and heuristics and the context changes, when turbulence in culture emerges.
Familiarity insulates humans in tacit knowing from perceiving things. We don’t see anything or any activity as a neutral object; all knowing is ‘enculturated’ and learned. From a very young age, before we can talk or read text, we learn how to interpret facial expressions, interpret contexts and act on them implicitly. By the age of five we know the full gamut of emotions and feelings and how to respond to them instantly, in a fraction of a second. This is often the key to protection, safety and fight-or-flight activity.
How strange in safety that by experience we develop everything to become implicit, and then when something goes wrong we punish people for not thinking! This is what happens when you put an engineering and regulatory construct of cognition on humans. This is what a mechanistic anthropology creates: punishing and blaming people for the very human system we enact in implicit knowledge to be efficient. No wonder the sector has so little idea about ethical professional conduct! (https://safetyrisk.net/the-aihs-bok-and-ethics-check-your-gut/). If 95% of what all humans do is enacted by routine, habit and heuristics, why isn’t implicit/tacit knowing a critical part of the WHS curriculum or the AIHS Body of Knowledge?
One of the most astounding assertions in the AIHS BoK on Ethics was the notion that safety people were innately ethical and that ethical decisions could be made by ‘checking your gut’. The absolute epitome of deontological ethics.
So here we have Ethics, the very ‘soul of professionalism’ (BoK on Ethics, p. 1), hinging on implicit/tacit knowledge, yet we never learn about it in the BoK or the WHS curriculum.
Bernard Corden says
Rules and models destroy genius and art – William Hazlitt
bernardcorden says
Thanks to Dr Rob Long, I find the works of Robert Sternberg, Guy Claxton, Jacques Ellul, Michael Polanyi, Thomas Kuhn, Paul Ricoeur and Mary Douglas far more fascinating and enlightening than anything by James Reason, but you rarely find them referenced in safety publications, codes of ethics or the WHS curriculum.
Rob Long says
Bernard, the idea that one views life through the discipline of safety is a major problem for the sector. The enclosed worldview and narrowness of research is problematic, hence an inability to engage in a transdisciplinary approach to anything. The latest BoK Chapter on Ethics is a classic example. It’s very much a case of epistemological resistance that is politicised so that learning doesn’t occur.
bernardcorden says
We know more than we can say – Michael Polanyi (ChOHSP)
bernardcorden says
I was recently re-watching Alan Bleasdale’s GBH and there is a fascinating confrontational scene between Michael Murray (an extreme socialist councillor played by Robert Lindsay) and Jim Nelson (a labour moderate and teacher at a special school played by Michael Palin).
It finishes with Jim Nelson admonishing Michael Murray: “Don’t just read one book, read ’em all”.
bernardcorden says
Please excuse my fallibility; the above post should read: “The brain does not make decisions, it merely hosts conversations”.
bernardcorden says
The ethical decision-making process in the AIHS BoK Ethics and Professional Practice (Figure 4, p. 49) could easily have been lifted from an IKEA instruction manual for assembling a bedside cabinet.
It is rather like a clunky, mechanistic event-tree logic diagram littered with binary AND/OR gates, and the Skinnerian black-box psychology disguises that the brain does make decisions, it merely hosts conversations (Guy Claxton). Life rarely resembles an assets-and-liabilities statement from an accounts ledger detailing inputs and outputs. It is far messier, more uncertain and complicated.
Rob Long says
Bernard, if you are going to invest in something like ‘check your gut’ as a decision-making process and then state that safety people are innately ethical, you need much more than this AIHS BoK Chapter offers. More head-in-the-sand safety.
Susan Zivcec says
Hello Dr Long. What an interesting starting point for a discussion! I wonder if you include human factors and the study of cognitive biases in the realm of “engineering anthropology”?
It’s certainly a worthy discussion… the factors which come to bear on the decision making and listening of both safety professionals and leaders in organisations. Perhaps I lack a deep education in this space, but I suspect you are starting to build a case against the likes of Reason and Weick (the engineering anthropologists). I have my own view on how these maps of meaning can be enhanced… but I am not sure I understand yours. Is there somewhere you elaborate further on your views?
Rob Long says
Hi Susan, thanks for your questions.
I wouldn’t put the idea of anthropology in the same sense as engineering; the two disciplines are mostly oppositional to each other. Neither would I put human factors in the same boat: most human factors work is surprisingly not about humans but rather a study of systems. As for cognitive biases, they form a small part of understanding the human unconscious.
As for Reason, yes, I don’t have much time for his work. Weick is very different as a social psychologist and I wouldn’t put their theories near each other. Karl could never be described as an engineering anthropologist.
I have written and presented extensively on the social psychology of risk, including a number of free online downloads, resources, blogs, podcasts, videos etc. Happy to discuss further.
Susan Zivcec says
Hmm… I find it interesting that you don’t have time for Reason. In his book Human Error (1990) he discusses some of the same concerns you raise in your article. He studies cognitive underspecification and error forms (heuristics, cognitive biases) and error risk. What is it about his work as a pioneer in this space that you don’t like?
And yes… I was being a little cheeky calling Weick and Reason “engineering anthropologists”… I was simply playing with words to describe the study of human beings interacting with each other in organisations and interfacing with machines. Let’s be honest… understanding heuristics and cognitive biases is hugely important in safety-in-design (engineering) considerations.
So, having been in the space a long time, and having read the odd book… I know your work. What fascinates me is the loss of nuanced understanding of those who came before. I mean, some of this work that is supposed to be “new” is simply an extension of an old conversation.
Anyway, I am curious about why Reason’s Human Error (1990), which discusses cognitive underspecification and error forms (heuristics and cognitive biases), is not worth your time.
Rob Long says
Susan, yes, I think his work was foundational, but there are other worldviews and ways of seeing risk of which he is unaware, and his work has set up some poor traditions/models that have been normalised and are not helpful. Perhaps the most unhelpful construct is the Swiss cheese model of causality. Yes, heuristics are important, but these are just a small consideration on the way to understanding the nature of the human unconscious and the collective unconscious. Similarly, the rationalist, cognitivist, positivist tradition is only one way of considering how people live in the world.
I have read Reason extensively and he has drained plenty of my time, but unfortunately he frames his understanding of psychology through safety. Safety is NOT the lens through which I see the world.
It is interesting that people are satisfied with Reason’s descriptions of ‘human error’ and ‘error classifications’ and leave it at that. There is so much more that requires understanding, and most of his constructs about human error don’t explain much, particularly as there is so little discussion of fallibility, mortality and vulnerability as human dispositions.
Similarly, I am concerned about how he defines ‘violations’ and how he anchors these to rationality and cognition. Likewise, his idea of resilience is not how I would define it.
Reason’s logic and focus are how we end up with such nonsense as ‘safety is a choice you make’ and ‘all accidents are preventable’. He defines violations as deliberative acts, whereas the evidence shows that a very high number of things that go wrong do not involve choice or deliberation. Similarly, his idea that unsafe acts arise from ‘wayward mental processes’ puts decision making in a binary framework, and humans don’t make decisions in this kind of fast-and-slow way (nor do I find Kahneman’s model helpful).
Reason’s constructs are a perfect set-up for blame and simplistic rationality. I don’t understand anthropology in the way Reason does; indeed, many of the matters I consider critical in human decision making and social psychology are simply not discussed in any of his work.
I think you might know ‘of’ my work, but without relationship and engagement in what I do, you couldn’t really know my work. So much of what I do is enacted after significant change in worldview and, in many ways, unlearning the unquestioned norms of safety.
Unfortunately, there is no space in such a forum to explain much, so this may have to do.
Thanks for your enquiry.
Rob
Susan Zivcec says
Hmm… perhaps you underestimate my perspective. Whilst I am a safety professional, I take an anthropological view on this, like yourself, and am not one to blindly accept the chant. Nay, why else would I be engaging here? I have read widely in other disciplines. I am interested in your take on Reason’s human variability paradigm… and as for your reference to thinking fast and slow… I don’t think Reason intends to oversimplify the decision process; he clearly recognises the complex interplay of cultural and unconscious drivers. It would be fair to say that these were not his areas of focus. It would equally be fair to say that he was looking for models useful and applicable in organisations to improve safety outcomes. My perspective is rather broader: I am by nature fascinated with resilience. I am by history and experience fascinated by trauma and trauma recovery. So, I do have a certain deeper understanding of the workings of the mind-body and interpersonal spaces. Perhaps you could see my questioning as an invitation to engage.
Susan Zivcec says
Oh, and I would also add that there has been a great deal of misrepresentation of a range of key concepts from Reason, including the Swiss Cheese Model, which has somehow been ripped away from the rest of the Organisational Accident Model it was part of… so many over-simplifications… yes… so much damage done.
Rob Long says
No space here to really discuss these things.
Rob Long says
Perhaps here is not the place for extensive apologetics and extended comment re worldviews, and yes, I see you have a consciousness for the mind-body problem. No, I am not underestimating your perspective, just responding to your questions and language. Unfortunately, the ‘chant’ and mantra of zero is indicative of its religious adherence in the sector. With so little knowledge of theology, the sector has no idea how religious it has made safety. When Cardinal and life-saving rules don’t work, it then needs a new mythic symbol to cope with its fixation on injury.
In Reason’s work The Human Contribution he comes closest to stepping away from the fixation on safety, but he still works through a very clumsy apologetic for the Swiss cheese. The text throughout remains binary and mechanistic. I understand what he tries to do, but there is so much omitted from his discussion: next to no discussion of the unconscious, the body-mind problem or embodiment. Still very cognitivist, rationalist and positivist; it’s what the sector loves. His focus in his human variation paradigm remains binary (person or systems approach) and rests on a deficit view of humanity. He continues with the discourse of unsafe acts and violations, thus showing his assumptions about human decision making. Indeed, so much of his language is pejorative in explaining the human unconscious that he must have taken a leaf out of some text by Freud and eaten it.
I think the reason Safety adores Reason so much is the attraction of simplicity and binary thinking. I find his language of heroics annoying, and his idea of resilience, like most in safety, is quite foreign to my understanding.
Anyway, if you want to exchange more, email might be better than here.