There are other examples. Of course, the pundits were telling us 10 years ago that driverless cars would be all over the roads by now. They are not.
The propaganda about technology and AI is extensive. Don’t fall for it.
And this issue of responsibility is critical. Legal responsibility can only be attributed to something that is deemed a person by law. As Marshall states clearly:
‘To be deemed a person, in the eyes of the law, presupposes that there’s an independent set of personal interests, ability to exercise its own agency, and accountability for their conduct. AI has none of these characteristics because these aspects can’t be effectively digitized.’
AI can never be a person and can never have consciousness, emotions, Mind, feelings, morality or social meaning. And herein lies a huge problem for when AI gets it wrong.
In the eyes of the law, when people are harmed, a person needs to be blamed (found responsible), a victim compensated and lessons learned.
Unless of course you are in the HOP school of dumbed-down nonsense, which thinks that the slogan ‘blame fixes nothing’ is a principle. I can’t wait for that one to get to court. ‘Yes, your honour, we had a principle that blame fixes nothing, and so x is dead because we don’t like that nasty stuff called blame’.
We know already that AI makes sh&t up when it doesn’t know stuff (https://www.insurancethoughtleadership.com/six-things-commentary/when-ai-gets-it-laughably-wrong). We know AI creates disasters (https://www.livescience.com/technology/artificial-intelligence/32-times-artificial-intelligence-got-it-catastrophically-wrong). And, questions are already being asked about who is accountable when AI stuffs up (https://www.rand.org/pubs/articles/2024/when-ai-gets-it-wrong-will-it-be-held-legally-accountable.html). There are many examples of AI being the cause of death and harm.
The pundits of course continue to speak of AI as if it can become human, but this is all propaganda nonsense peddled by vested interests without a clue about ethics or moral philosophy. The same people regurgitate the common language of ‘machine learning’ with no expertise in the nature of learning. Isn’t it funny that the tech sector, with no expertise in ethics, ruminates amongst itself about things like privacy, responsibility and legal personhood. Sounds just like Safety.
Perhaps start by reading this: https://www.cell.com/patterns/pdf/S2666-3899(23)00245-3.pdf
Just imagine, AI kills someone and somehow a machine will be prosecuted as legally responsible??? You couldn’t make up something more stupid.
Now, just imagine in Safety (in love with Technique, machines that go ‘bing’) when AI becomes a cause of harm and death. Imagine AI has been used in the task of writing a procedure and someone dies. What person is to blame? Well, it can’t be a computer or its software! It will be the PCBU who authorised the use of AI and allowed AI to be a source of knowledge. I can’t wait till Greg Smith puts out a blog on that one.
And just look at Safety get on the AI bandwagon as if there is no moral, legal or ethical problem with adopting AI:
- https://www.paloaltonetworks.com/precision-ai-security/secure-ai-by-design
- https://www.thesafestep.com.au/enhancing-workplace-safety-with-ai
- https://www.centreforwhs.nsw.gov.au/tools/ethical-use-of-artificial-intelligence-in-the-workplace-final-report
- https://safetyculture.com/topics/safety-management-system/ai-in-safety-management/
Roll up, roll up, buy your safety checklist from SafetyCulture, offering promises of no improvement in culture, with the only change being to your bank balance.
Of course, it’s the same old organisations with not a clue in ethics (https://safetyrisk.net/ai-ethics-and-ai-safety/) and a thirst for money that push this stuff as if there’s no risk. And it’s always those with not a clue in ethics that brand the way forward with AI and safety as a piece of cake. More so, when the safety association (AIHS) has no clue about ethics (https://safetyrisk.net/the-aihs-bok-and-ethics-check-your-gut/), what hope has any safety company got?
When you publish rubbish in Safety like this (https://www.safetyandhealthmagazine.com/articles/25485-ais-role-in-workplace-safety) with no discussion of ethics, you know that such ignorance will be exposed quickly in a court.
So, here is some advice from SPoR about AI and safety:
- Don’t believe the hype about AI. It’s mostly propaganda and generated by vested interests.
- Remember that AI is not a person and never can be.
- Remember that in the end, a person will be held legally responsible by a court of law not a software program.
- Remember that safety and culture is NOT about data but about the moral treatment of persons.
- All information is interpreted. No data or information is neutral or objective.
- AI can never know the moral, ethical or emotional/social outcomes of any information.
- Don’t be lazy and expect a machine to do your Due Diligence (https://vimeo.com/showcase/4883640).
- Whenever you are being sold something that is too good to be true, it’s too good to be true.
- Seek professional legal and ethical advice before adopting anything generated by AI.
- Don’t believe the ignorance and stupidity of HOP; blame fixes things.
Do you have any thoughts? Please share them below.