This month’s issue of Open Road (the NRMA magazine) included a letter to the editor explaining the danger of ‘safety features’ on modern cars. The letter highlighted two features as dangerous (I could name a dozen others).
The first feature concerned the number of sensors and cameras around the car that give off various alarms whilst driving. On each occasion attention is drawn away from the road, and often what sets off the alarm is neither important nor risky.
The letter mentions occasions when, making a quick manoeuvre in traffic, the alarms went off, causing hesitation and, in both cases, a near miss. Hesitation at such times is exactly the wrong thing to do. As the letter writer states, ‘I was right and the car was wrong’.
The second so-called ‘safety feature’ was the red triangles in the side mirrors that warn of a vehicle in the next lane. To see the vehicle, attention is then drawn to the mirror, which, if correctly positioned, shows the vehicle anyway.
I find both of these examples amusing and typical of overzealous engineering: marketed as safety, yet having nothing to do with safety. Whenever the word ‘safety’ is invoked, you can sell just about anything. Most of it is more about the seduction of technology than critical thinking about tackling risk. Ah, the machine that goes bing! (https://www.youtube.com/watch?v=wshyX6Hw52I)
Similar marketing is made for wearable technologies in safety (https://safetyrisk.net/wearable-technology-and-safety/).
You can see the same marketing regarding driverless cars (https://www.investopedia.com/articles/investing/052014/how-googles-selfdriving-car-will-change-everything.asp). Watch the dozens of TED Talks on driverless cars from the last 10 years and you will see countless talks that are just spin, propaganda and marketing, with about as much credibility as Nostradamus.
Of course, one spanner to put in the works is to ask an ethical or moral question about any technology (https://www.ted.com/talks/iyad_rahwan_what_moral_decisions_should_driverless_cars_make?language=en).
Machines and technology are neither objective nor can they ‘learn’, so when we relinquish power to a machine, we relinquish power to the engineer who designed it. And the moral trade-offs and by-products of a design were probably never considered in the design process.
Learning and intuitive decision making require embodied knowing (https://safetyrisk.net/embodied-being-as-foundational-to-culture-and-risk/), something machines don’t have; nor are machines capable of ‘knowing’ what a moral decision is. Such decisions require an intuitive sense of Socialitie, and machines possess neither.
In all of this marketing that tells us something is ‘good’ for us or ‘for our own safety’ where are the critical questions that challenge this spin? Where is the critical thinking that understands Technique (Ellul) and how it works?
Those who know Old Testament theology will know the story of the ‘Golden Calf’. You can read about it here: https://towardsdatascience.com/has-data-become-the-new-golden-calf-8a199e386642
What we see in this story and article is the power of seduction to stifle critical thinking. If it’s golden and attractive, what is its moral substance? It may be golden but is it ethically good for persons? Does it humanise the way persons tackle risk?
We see the same seductions in safety about ‘big data’ and ‘prediction’ that are little more than clairvoyance, hopeful faith (https://safetyrisk.net/job-vacancy-safety-clairvoyant/) and religious promises. Just because someone tells you it’s a ‘safety feature’ doesn’t mean it’s a feature that helps with safety. This is why critical thinking and ethics ought to be foundational in safety training.
And who knows, maybe there would be less selling of propaganda, cons and silver bullets in Safety that make no difference to tackling risk in the workplace, other than the risk of a diminishing bank account.
Perhaps the first step should be this: if it sounds too good to be true, then it probably is.