I read with fascination about the development (not innovation) of the Raptor AI. Here is what this development promises:
‘Additionally, the Raptor AI is able to collect data and store information including facial recognition, car registration numbers and wireless device detection to continue ensuring the safety of all.’
No thanks! And who is going to ensure that the data collected (facial recognition and anything else this thing can do) will be handled ethically? Perhaps start by reading Hasselbalch (https://dataethics.eu/wp-content/uploads/DataEthics-UK-original.pdf; https://www.e-elgar.com/shop/gbp/data-ethics-of-power-9781802203103.html).
But if you’re in safety, make sure the last thing you discuss is power, which of course receives no mention in the AIHS BoK Chapter on Ethics! How convenient.
There can be no such thing as innovation or safety unless it is ethical.
Any discourse of ‘innovate’, ‘create’ and ‘evolve’ must be held in tension with the underlying (and in safety unspoken) methodology (worldview) that drives it. We see with the latest HOP conference in Wollongong that any discourse on ethics and power is decoupled from such a discussion of innovation.
Brutalism is best energised through silences and a lack of transparency (https://safetyrisk.net/safety-culture-silences/).
There is no such thing as a neutral or objective worldview.
Any innovation or created safety development will always carry its own affordance (designed by-products and energy-in-design). My questions regarding any innovation are always:
- Who designed this?
- What is their moral philosophy?
- Who has the power?
- What are the by-products?
None of these questions are ever on the agenda for safety.
A good example of how this works is the development of illuminated handrails (https://australianbollards.com.au/blogs/blog/tagged/artificial-intelligence). Again, what makes this innovative? This is just more traditional safety with flashing lights! But safety loves handrails; indeed, I know companies with cameras in stairwells where, if you are pictured not holding the handrail, you get sacked! Surveillance, apparently, is good for you. The only trouble is, it doesn’t work (https://safetyrisk.net/surveillance-doesnt-work/).
Innovation in safety is always marketed as ‘this control of you is good for you’.
Of course, throw in the language of AI and everyone is seduced by this stuff. Artificial Intelligence is ‘artificial’, not embodied and cannot ‘understand’ persons, e-motion or human ’being’. All these objects (computers) are designed and programmed by engineers who have as much sense of Anthropology and Ethics as a brick.
If you want to understand culture, persons and risk, the place to start is not engineering.
Just because something can be done, doesn’t mean it is ethical or should be done.
And you won’t get much help from Safety, which thinks a duty to safety is primary above a duty to persons (AIHS BoK Chapter on Ethics).
Behind all this discourse of innovate, create and evolve is the threat of Technique (Ellul https://ia803209.us.archive.org/2/items/JacquesEllulTheTechnologicalSociety/Jacques%20Ellul%20-%20The%20Technological%20Society.pdf). Technique is the energy that drives an ideology of efficiency. We see this in the discourse of zero (https://www.humandymensions.com/product/zero-the-great-safety-delusion/). Of course, Safety doesn’t even know what Technique is. You will find Ellul nowhere in any safety curriculum across the globe.
When you set goals of perfection for fallible persons (https://www.humandymensions.com/product/fallibility-risk-living-uncertainty/) the trade-off is always brutalism.
There is no more unethical goal in safety than zero!
Most of the de-humanising stuff on the market is promoted in the spin of ‘safety’. Yet here we have people in the so-called ‘safety differently’ camp unable to manage the immoral nature of this goal (https://safetyrisk.net/zero-is-an-immoral-goal/).
The foundation of professionalism is ethics (https://safetyrisk.net/data-cannot-drive-professionalism/), which is why safety has a long way to go before it can claim the concept.
If, however, you do want to understand ethics and professionalism in risk, you can study here: https://cllr.com.au/product/an-ethic-of-risk-workshop-unit-17-elearning/
In SPoR, we don’t de-couple ethics from any practice in tackling risk. This is because it matters what you do to people in your quest for safety. It matters if you prioritise objects above persons. It matters if you cause psychosocial harm on your pathway to zero harm. It matters if your innovation brutalises persons.
Chiara says
I recently had to do a first aid refresher and do the theory component online watching videos and doing assessments. All the videos were AI generated. It took me a moment to work out why the person’s lips / mouth didn’t quite match what they were saying and why their hand gestures were repeated every 10 seconds. Frankly, it was just plain creepy…
Rob Long says
Chiara, the ideology of Technique is discussed nowhere in safety, which enables endless brutalism of persons in the name of innovation. AI isn’t a solution, it is a problem. Meanwhile, the safety industry still has no idea on what it requires to be professional.
Matt Thorne says
Great blog Rob. When organisations and schools of thought do not understand their own ontology, ethics and morals can take a back seat to efficiency, and damn the consequences.
Rob Long says
Matt, very few in safety consider ethics as anything worth thinking about and it is certainly in no safety curriculum. This is what enables engineers to adore zero and claim to know how to ‘unpack it’ without any competence in moral philosophy. Any apologist for zero is either incompetent or unethical, because nothing good can come out of perfectionism for fallible people. It is simply astounding that the safety differently group don’t know what to do with it.