TL;DR
- This is not about “no blame” or “blame fixes nothing”.
- Language influences the way we and others think.
- When we communicate there is what we say (the words), and there is what it means (the discourse).
- I believe the discourse of “human error” in safety actually ends up reinforcing blame and driving over-fixing and over-correcting.
- I think that shifting our language to sensemaking and fallibility could open up a more positive and constructive way of tackling how we think about humans and safety, and how we learn.
- In the end, we cannot prevent human error (so we should stop trying), but we can seek to manage human fallibility (which we kind of do already), so let’s shift our language to match reality, and therefore better match what people think and experience.
The Setup
It feels like people and organisations have been aware of the potential traps around the use of “human error” for a while. The obvious one (from my experience) is the risk of blaming the person without also looking for more systemic or organisational contributors. I suspect that people can sense this on an intuitive level, but we’ve also seen it through the emergence of “no blame” and “just” culture approaches, as well as in the HOP principle, “Blame fixes nothing”.
However, I think there is a second issue contributing to some of the problems we’re encountering in safety, and it’s a little more subtle. It’s related to how leaders believe they are meant to respond to errors and mistakes. I suspect a myth has developed where leaders (and by proxy their organisations) think they have to eliminate and prevent all error (probably related to the equally mythical belief that organisations are meant to prevent all harm). I think this in turn drives a lot of the over-systemising and over-correcting (over-intervention) that we see in safety. That is, the implementation of numerous small, low-effectiveness (typically administrative) controls that do little to manage actual risk, and collectively often end up increasing it.
Not Talking About “No Blame”
To be clear, I’m not advocating a “no blame” or “blame fixes nothing” approach. I believe that “no blame” will end up creating its own set of net negative trade-offs over time. We do need to be able to hold people accountable. We need to be able to say, you didn’t do this well or as agreed, and then learn from that. What I’m talking about here is the nuanced trade-offs that come with how we talk about humans and fallibility.
The intent of this article is to talk more about the context and alternatives to “human error”, rather than getting into the details of the negatives. But to make sure we’re on the same page, the main negatives I see are that blaming individuals undermines learning (through its impact on reporting), and that trying to eliminate or prevent all errors leads to over-intervening and over-systemising.
As alluded to, awareness of the first issue is pretty broad, whereas I don’t see the second one being talked about as much. Either way, I think we’re still struggling with these issues in safety. I see this directly in strategies that actively try to not just blame the person, and I also see it when I talk to groups about examples of where safety strategies can end up inadvertently increasing risk (they always point to too many rules, procedures, forms etc. as an example). So, given we’ve been aware of it for so long, why can’t we seem to break away from it? It seems that simply being aware of the trap may not be enough to avoid it.
Unpacking What’s Driving the Problem
One of the things we can look at when focussing on culture and people is language. I’m sure we all intuitively understand that language influences people, and there is plenty of research showing its influence (although you should be careful not to believe every social psychology experiment from the 60s and 70s; they were doing some crazy things back then).
So, listening to the language used within an organisation can help identify cultural influences (underlying shared beliefs and assumptions). This is about listening to the specific language, and then inferring the discourse (or the meaning) that is being created. This is sometimes referred to as discourse analysis.
Going further down this line, we could then ask: what does the language of “human error” mean to people? To be clear, I’m not asking for a definition, I’m asking you to consider the discourse. What does it mean to people when they hear it, and what do people mean when they use it?
What Do People Make “Human Error” Mean?
Consider these two questions:
- From a front-line worker’s perspective: what do they think is about to happen when they hear “human error” given as a causal factor from an investigation?
- From a leader’s perspective: when they hear about human error leading to an accident, how do they think they are meant to respond?
Now, I’m going to generalise, and I accept that some of you will believe that this doesn’t happen in your organisation. But I’m going to put it out there that when workers hear “human error” they believe someone is about to be blamed (potentially them). I also think that when leaders hear it, they think their job is to try and prevent it happening again. That is, I think a belief has developed where leaders feel like the organisation will be held responsible for any errors.
Here are some more questions to consider, to think about how discourse (inferred meaning) can influence people:
- Who do front line people think is about to get in trouble when they hear “human error”?
- Do they think the response is going to be kind and understanding, or is someone about to get disciplined or fired (I’m being intentionally provocative here, but I would still ask you to consider how you think your front-line staff may think about this, not how you think about it)?
- Does a finding of “human error” result in leaner, more flexible and usable systems and processes, or heavier, more complex systems?
- Does a “human error” finding ever result in people being forced to repeat meaningless training or complete additional meaningless processes or checks?
- Is there a belief or assumption among managers and leaders that something always has to happen in relation to every incident, that there has to be some corrective action or fix?
- Is there an unspoken, but also unquestioned, assumption that we should be trying to eliminate human error (even though we know it’s impossible)?
Where I am unsubtly heading here is that every time the term “human error” is used, it creates tension between the way the organisation is using it (it’s just a causal factor) and the way people experience it (they expect us to be perfect, but we’re never going to eliminate human error). It’s damaging culture through doublespeak, and it’s leading to actual outcomes that often end up increasing risk (over-systemising).
What’s an Alternative?
Just to reset, I’m not advocating absolving people from accountability. What I’m putting forward is the idea that if the language (discourse) of “human error” leads to a number of negative trade-offs, then changing that language could both avoid or offset those effects, and potentially end up having a positive effect on how leaders, organisations and systems interact with fallible humans.
That is, rather than trying to reframe or change what “human error” means to everyone, why not use something else instead? To this end, what I suggest is talking about sensemaking and fallibility.
Here’s what it could look like against the issues raised earlier:
- Given humans are not perfect, but leaders may be caught in a cycle of thinking they have to prevent errors, organisations could instead focus on managing human fallibility.
- Given the problems that “human error” findings during investigations cause, organisations could instead focus first on identifying how people were sensemaking (how the person was making sense of what they were doing, and what was happening around them), and then, if needed, they can use fallibility as a way of explaining errors.
The main advantage of using sensemaking is that it is a very human-centric focus (versus the system or environment). It lays the foundation for learning first, to help offset the natural drive to see causes first. In this way it reflects the HOP idea of trying to learn about “work as done” versus “work as imagined”.
The main advantage of using “fallibility” is the different discourse of meaning it invokes. Go back to the earlier questions about expectations around error. Do organisations think they have to prevent all errors? Do leaders expect people to be perfect even though we know they are not? Put simply, if you say out loud, “Humans are fallible!”, no one disagrees. It’s quite neutral language, whereas the language of “human error” is not.
Final Words
Can we do things in the name of “safety” that unintentionally end up increasing risk?
I think we can, and I think the way leaders and organisations have come to think about humans and “error” is an example of it, one that is amplified by how it flows into our investigation and corrective action processes.
We know that language is important. Given we can use language to understand and learn about what is happening now, we can also change language to shift what people think and what is happening (which is one of the ways cultural change happens, because it shifts shared beliefs and assumptions).
Shifting organisational language on its own is never a fix. But in this case it is an example of understanding how our existing language may not be supporting the outcomes we want (and may even be producing the opposite), and how choosing more human-centred language may end up being beneficial for both the people and the organisation.

Do you have any thoughts? Please share them below.