In this month’s issue of Open Road (the NRMA magazine) was a letter to the editor explaining the danger of ‘safety features’ on modern cars. In this case two features were highlighted as dangerous (I could name a dozen others).
The first feature discussed in the letter was the number of sensors and cameras around the car that give off various alarms whilst driving. On each occasion attention is drawn away from the road, and often what sets off the alarm is neither important nor risky.
The letter mentions occasions when, while making a quick manoeuvre in traffic, the alarms went off, causing hesitation and, in both cases, a near miss. Hesitation at such times is exactly the wrong thing to do. As the letter writer states, ‘I was right and the car was wrong’.
The second so-called ‘safety feature’ was the red triangles in the side mirrors that warn of a vehicle in the next lane. Then, to see the vehicle, attention is drawn to the mirror, which, if correctly positioned, shows the vehicle anyway.
I find both of these examples amusing and typical of overzealous engineering, marketed as safety yet having nothing to do with safety. Whenever the word ‘safety’ is invoked, you can sell just about anything. Most of it is more about the seduction of technology than about critical thinking in tackling risk. Ah, the machine that goes bing! (https://www.youtube.com/watch?v=wshyX6Hw52I)
Similar marketing is made for wearable technologies in safety (https://safetyrisk.net/wearable-technology-and-safety/).
You can see the same marketing regarding driverless cars (https://www.investopedia.com/articles/investing/052014/how-googles-selfdriving-car-will-change-everything.asp). If you watch the dozens of TED Talks on driverless cars over the last 10 years you will see countless talks that are just spin, propaganda and marketing with about as much credibility as Nostradamus.
Of course, one spanner to put in the works is to ask an ethical or moral question about any technology (https://www.ted.com/talks/iyad_rahwan_what_moral_decisions_should_driverless_cars_make?language=en).
Machines and technology are neither objective, nor can they ‘learn’, and so when we relinquish power to a machine, we relinquish power to the engineer who designed it. And the moral trade-offs and by-products created by a designer were probably never considered in the design process.
Learning and intuitive decision making require embodied knowing (https://safetyrisk.net/embodied-being-as-foundational-to-culture-and-risk/), something machines don’t have; neither are machines capable of ‘knowing’ what a moral decision is. Such decisions require intuition and a sense of Socialitie, and machines have neither.
In all of this marketing that tells us something is ‘good’ for us or ‘for our own safety’ where are the critical questions that challenge this spin? Where is the critical thinking that understands Technique (Ellul) and how it works?
For those who know Old Testament theology they know the story of the ‘Golden Calf’. You can read about it here: https://towardsdatascience.com/has-data-become-the-new-golden-calf-8a199e386642
What we see in this story and article is the power of seduction to stifle critical thinking. If it’s golden and attractive, what is its moral substance? It may be golden but is it ethically good for persons? Does it humanise the way persons tackle risk?
We see the same seductions in safety about ‘big data’ and ‘prediction’ that are little more than clairvoyance, hopeful faith (https://safetyrisk.net/job-vacancy-safety-clairvoyant/) and religious promises. Just because someone tells you it’s a ‘safety feature’ doesn’t mean it’s a feature that helps with safety. This is why critical thinking and ethics ought to be foundational in safety training.
And who knows, maybe there would be less selling of propaganda, cons and silver bullets in Safety that make no difference to tackling risk in the workplace, other than the risk of a diminished bank account.
Perhaps the first step should be: if it sounds too good to be true, then it’s probably too good to be true.
Travis Stephens says
On a regional drive recently (long straight road, 110km/h etc.) I relaxed with the cruise control on. A swarm of bugs triggered the anti-collision system, and it not only took me a moment to become aware of the scenario, but a few moments more to work out what had triggered it and how to deal with it, because it wasn’t something I’d expect with nothing in front of me. I obviously haven’t developed any heuristics for that particular function in the context of the scenario.
Pity the machine can’t learn to differentiate between an obstacle and an insect…
Rob Long says
No machine will ever develop a soul, emotions, a gut, spirit, intuition or embodied learning. That’s why it’s called ‘artificial’ intelligence. Similarly, no machine ‘learns’ as a human can. Without these, a machine can only adapt according to programming, algorithms and computing; it can never have a ‘hunch’ about something, follow its best guess or choose a moral option. Life and being (which computers can’t have) are subjective, and all semiotics are interpreted, so its sensors will never replicate human heuristics. The engineers who create most of this crap have no idea of human judgment and decision making.
Peter Collins says
I recall many years ago watching a documentary about a dangerous stretch of road in Spain, and the outcome of the research suggested that the more unsafe you made the cars, the safer the outcomes. If the car felt unsafe and the steering unresponsive, drivers slowed down and drove to the conditions. The faster and less connected the driver is, the more dangerous the result.
Rob Long says
Yes Peter. That’s Risk Homeostasis.