Great article from New Scientist first published here, and one that may touch a few nerves for the “all accidents are preventable” and “Safety is No 1 Priority” crusaders. For many it feels good to apportion blame; it confirms our biases and takes the heat off any possibility that management performance or our systems were less than satisfactory. The idea that human error is the major cause of “accidents” is one of the many myths and misconceptions that hold back the progress of safety and stifle learning.
We should accept that accidents will happen
It’s getting ever easier to pin the blame for every accident on somebody, somewhere. We should resist that urge
NAPOLEON, one of the greatest forward thinkers the world has ever known, said there was no such thing as an accident, only a failure to recognise the hand of fate. But while he might have lived by that maxim, society doesn’t have much time for it now.
Consider traffic accidents, the commonest of potentially serious mishaps. Nowadays, they are often euphemistically branded as “incidents”, and copious research is under way to identify factors implicated in a crash, from car colour (silver is said to be safest) to phone usage and even parasites that can alter drivers’ behaviour.
Our responses have been just as disparate, ranging from revised laws to redesigned dashboards. (No one has proposed mandatory screening for parasites – yet.) Minor, and deeply human, errors of judgement are often at the root of catastrophic failures (see “7 mind slips that cause catastrophe – and how we can avoid them”), so we are increasingly using automation – self-driving cars, for example – to take our error-prone selves out of the loop.
That won’t stop the blame game. Someone, somewhere, can always be blamed: if not the users of automated systems, then their manufacturers, programmers or those who maintain the networks they often rely on. Increasingly omnipresent sensors allow for minutely detailed assessments of responsibility. Left to the lawyers and insurers, there might soon be no blame-free “accidents” at all.
This is unfamiliar territory. Existing laws cover some of the issues that arise (14 September 2013, page 40), but we can expect some perplexing cases to come before the courts. As they do, we should remember that pointing the finger isn’t always productive: it can lead to defensiveness that stymies change, and hamper attempts to improve safety.
This has been recognised by the law for more than a century. In 1884, Prussian Chancellor Otto von Bismarck – an improbable reformer – introduced “no-fault” settlements, allowing workers to be compensated for often novel industrial injuries without having to demonstrate their employers’ negligence. No-fault is still being built into law today: Scotland is contemplating it for medical negligence claims.
The trouble is that no-fault goes against our social instinct to seek out causes and allocate blame. This has generally served us well. Without it, we would live in a much more dangerous world than we do. But in chasing down blame, we should recall that a propensity for error is the flipside of the capacity to take risks. And risk-taking is a vital component of any conception of progress.
“In chasing down blame, we should recall that error is the flipside of taking risks and thus part of progress”
Much of the time, humans are driven by goals other than safety, which is added as an afterthought or at best a counterweight. When the balance shifts too far, derision of intrusive “nanny states” or overweening “health and safety” regimes is the inevitable result.
So despite what technocrats might hope, we won’t ever wipe out accidents. “It will become next to impossible to contract disease germs or get hurt in the city,” Nikola Tesla predicted in 1915. He was wrong. The risks he knew were simply replaced by new ones.
To err is human, to forgive divine. Our secular society may have no more time for divinity than for the Napoleonic hand of fate, and recklessness should of course be penalised. But we shouldn’t pursue every trace of blame just because we can. As machines take over from humans, we must strike a balance between learning from their errors and prosecuting the humans who make and run them. That won’t happen by accident.
Read the article from New Scientist first published here