In the long line of LinkedIn lunacy in safety we have this: ‘AI Saves Lives’.
So now ‘AI-powered safety systems can analyse thousands of data points from cameras and sensors in real time’. And this comes complete with an AI-generated graphic showing how AI can undertake surveillance on site, especially on critical issues such as PPE non-compliance.
This is what you get from an industry that has little idea of ethics or moral meaning and, in its deontological crusade, thinks that the purpose of safety is to serve safety.
What else is in this propaganda? Ah yes, AI can reduce injury rates by 30% and identify unsafe behaviour early. And then this, ‘for safety engineers, this means faster decisions, better risk prevention, and ultimately safer workplaces’.
This is exactly how Safety concocts mythology. Get an idea, run it up the flagpole, create a myth and worship promised salvation. And, at no time question any ethical dimensions, moral concern or the nature of personhood. Then substantiate the myth with a semiotic that anchors the myth to the underlying soteriology of the narrative.
A couple of overlooked matters are worth mentioning:
- How is the integrity and autonomy of workers respected by the use of AI surveillance?
- How does faster decision-making make safer workplaces?
- How does a change in injury rate demonstrate safety?
- Why is PPE the psychosis of safety?
- Why is safety so easily seduced by fads and cons that have no connection to the ethical nature of personhood?
- Why are data and the ideology of measurement so adored by Safety?
- Why is Safety so religious?
Of course, there are many more questionable assumptions in this typical LinkedIn lunacy, but don’t question safety or you must be anti-safety.
But don’t worry, when AI is god, salvation follows.
We know in the real world that AI makes mistakes, the latest example in my jurisdiction being mistaken seatbelt fines from AI road-safety cameras (https://www.abc.net.au/news/2026-03-02/backlash-perth-ai-road-safety-cameras-speeding-seatbelt-fines/106405328). Unfortunately, AI has no capacity to ‘think’ or ‘interpret’ what it ‘sees’. AI is NOT the panacea for helping persons tackle risk. Surveillance of people (without permission or knowledge) is unethical. You can read more in our book The Ethics of Risk (free download) (https://www.humandymensions.com/product/the-ethics-of-risk/). Start at Chapter 4: AI, Data and Ethics, pp. 84–97.
Unfortunately, you don’t do AI, AI does you (https://safetyrisk.net/you-dont-do-ai-ai-does-you-where-next-for-safety/).
All we learn in this story is that faith in AI has become a new cult in safety and like all cults, harms people.
Unless Safety anchors what it does to Ethics, it will never be professional.
Do you have any thoughts? Please share them below.