What you do is sample your voice, data, videos, messages and voice notes, feeding these “digital doubles” of yourself into AI so that it can simulate a conversation from the grave. Of course, none of this is real. We have already reached what Baudrillard called ‘hyper-reality’. The reality one experiences online is not reality in the flesh.
And all of the hype, propaganda and mythology that circulates in the AI space thrives only because of a lack of applied critical thinking. Unfortunately, critical thinking is something Safety knows little of (https://safetyrisk.net/critical-thinking-a-checklist-for-safety/).
The real problem with AI is not what it can do FOR you but rather what it does TO you when you surrender to the myths and propaganda.
This is where the fear of critical thinking common to Safety comes undone. You mustn’t be negative or critical of Safety. After all, Safety is sacred: ‘I am safety’.
Just because something can be done doesn’t mean it should be done.
This discussion about deathbots reveals a deep problem for Safety when it comes to thinking about AI. Not only is Safety bankrupt in critical thinking, but it also has little expertise in Ethics. We discussed this in our most recent book, The Ethics of Risk: A Transdisciplinary-Semiotic Lens.
In the book, we discuss popular myths in safety, hidden in slogans like ‘do the right thing’, ‘blame fixes nothing’ and ‘do what’s right, even when no-one is watching’.
Of course, such slogans (not principles) hide the philosophy of Deontology that dominates how Safety understands ethics. And how does Safety know what is ‘right’? This is where Deontology, with its roots in Natural Law ethics (Kantian Ethics), comes undone. In the end, such an ethic ends up anchored to duty to the Act and Regulation, as if these were developed on the basis of some ‘Natural’ law. This is why Safety doesn’t know what to do with the subjectivities in the Regulation about ALARP and Due Diligence.
The reality is that the subjectivities built into the Regulation demand skill in critical thinking and ethics.
At the heart of this story about ‘deathbots’ lies a further denial of fallibility, along with the myths of Transhumanism. We see exactly the same denial of fallibility in the ideology of Zero (https://www.humandymensions.com/product/zero-the-great-safety-delusion/).
As we saw at the last safety World Congress in Sydney, every regulator and a host of zero sycophants sponsored Zero (https://safetyrisk.net/the-sponsors-of-zero-are/). After all, as we know from the Safety Science Innovation Lab, Zero is a moral goal (https://safetyrisk.net/zero-is-an-immoral-goal/). Didn’t you know, the best thinking in ethics is delivered by safety engineers!
At the foundation of zero ideology is the same delusion evidenced by AI deathbots. Both rest on the myth of the brain-as-computer metaphor (https://safetyrisk.net/the-brain-as-computer-myth/). There is no research to support this myth, nor any research that can tell us what consciousness is. This is despite the fact that Elon Musk talks about the transference of consciousness as if it were a product (https://mashable.com/article/alien-earth-consciousness-robots).
If we ever needed critical thinking and ethical thinking in safety, it is now.
Fortunately, at the end of this story about deathbots we have this:
AI can help preserve stories and voices, but it cannot replicate the living complexity of a person or a relationship. The “synthetic afterlives” we encountered are compelling precisely because they fail. They remind us that memory is relational, contextual and not programmable.
Our study suggests that while you can talk to the dead with AI, what you hear back reveals more about the technologies and platforms that profit from memory – and about ourselves – than about the ghosts they claim we can talk to.
Meanwhile, back in the real world where reality resides, what Safety needs more than ever are skills in engagement, observation, listening, care, ethical conduct and helping. Yet it seems that Safety spends all of its efforts NOT tackling the critical importance of personhood, as we saw recently with the focus on safety as managing energies (https://safetyrisk.net/energy-focus-safety/). We see exactly the same delusion in the understanding of psychosocial risk as a ‘hazard’.
If you are interested in learning how to think critically and ethically, you can email admin@spor.com.au to find out how to access coaching or our next online courses.
Do you have any thoughts? Please share them below.