Recently, Dr Craig Ashhurst, Dr Nippin Anand, Greg Smith and Dr Rob Long discussed the adoption of AI into the world of risk and safety. There are two videos here: Video One https://vimeo.com/1069071797 and here: Video Two https://vimeo.com/1069073815
Each of these scholars considers some of the basics of AI in relation to their area of expertise: the Law, Anthropology, Wickedity, Propaganda and Ethics.
The idea that AI is somehow neutral and objective is torn to shreds by the discussion that puts forward some of the profound concerns of this group.
None in the group is a ‘Luddite’ or anti-technology, but each raises some fascinating by-products and trade-offs to consider amongst all the hype about what AI can do for safety. Indeed, many of the claims made about AI in safety are based on fantasy and wishful thinking, not reality.
Mythical Claims About AI
There are so many examples of Safety spruiking mythological claims about AI. Here is a simple example (https://www.centreforwhs.nsw.gov.au/tools/ethical-use-of-artificial-intelligence-in-the-workplace-final-report), but there are hundreds. Any simple Critical Discourse Analysis (CDA) or other linguistic analysis shows just how much of this is hype. Read the article and ask some simple questions.
- Firstly, AI cannot ‘sense’ risk, neither can it ‘understand’ behaviours or ‘think’.
- Secondly, AI has no ‘sense’ of emotional pressures at work and no comprehension of the hidden and unconscious factors that motivate and facilitate decision making. It cannot determine heuristics or sense psychosocial pressures that cannot be measured or observed. (None of the key drivers of risk and safety decision making can be measured.)
- Thirdly, AI has no real-world experience nor lived experience in the workplace. All the data fed into AI about hazards and risks is about past events. In this way, AI is a machine that talks about the past; it can have no ‘imagination’ about the future.
Faith in AI Myth
Or, perhaps look at this typical example: (https://www.hse-network.com/ai-in-health-and-safety-how-artificial-intelligence-is-transforming-risk-assessment/).
- AI does not ‘revolutionize’ risk assessment and at best can only compile lists of what is fed into it from the past. AI cannot predict what it doesn’t know and has no ‘imagination’ with which to ‘explore’ risk potential.
- Just look at the language of this propaganda. The first line is a give-away: ‘The capabilities of AI do not stop to amaze as technology continues to evolve.’ Words such as ‘amaze’ and ‘evolve’ are faith statements based on the author’s attributions. There is no evidence for either of these claims. AI has no ‘life’ of its own with which to ‘evolve’.
- Everything in this article is AI-faith loaded with emotive language, promises and direct misinformation about the capabilities of AI. E.g., its use in the health sector has been a disaster, and it is not instrumental in that sector.
- AI cannot predict hazards and it cannot replace inspectors. Unchecked AI has been demonstrated to be dangerous and harmful. In many ways, AI dumbs people down so they don’t need to think.
- Nothing in this article is news; it is propaganda. None of the claims is true or supported by evidence. It is nothing more than a faith-in-AI ‘puff piece’. Look at language such as ‘AI insight’; this is fairy-tale stuff.
Read any piece on what AI can do for risk and safety and you will see arguments based on the myth of brain-centrism. Look at all the semiotics associated with AI and safety, and it’s all brain-centric.
Of course, we know that AI cannot have emotions and cannot ‘feel’, two foundational essentials for learning. AI has no body and hence cannot ‘know’ what it is to be an embodied person. Moreover, the only way it can make a moral or ethical decision is by the parameters fed into it by a human.
Most of what circulates in safety about AI looks much like the same faith we see in religious beliefs, for example, the all-seeing eye of AI.
Rather than just being a sucker for safety propaganda about AI, why not do some actual research on AI? The following are just a start:
Atkinson, R., and Moschella, D., (2024) Technology Fears and Scapegoats. 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy. Palgrave Macmillan. Cham, Switzerland.
Banafa, A., (2024) Introduction to Artificial Intelligence. Routledge. New York.
Borovick, H., (2024). AI and the Law, A Practical Guide to Using Artificial Intelligence Safely. Apress. New York.
Buolamwini, J., (2023) Unmasking AI, My Mission to Protect What is Human in a World of Machines. Random House. New York.
Boylan, M., and Teays, W., (eds.) (2022) Ethics in the AI, Technology, and Information Age. Rowman and Littlefield. New York.
Coeckelbergh, M., (2020) AI Ethics. MIT Press. Cambridge Mass.
Dubber, M., Pasquale, F., and Das, S., (eds.) (2020) The Oxford Handbook of Ethics of AI. Oxford University Press. London.
Ellul, J., (1964) The Technological Society. Vintage Books. New York. (https://monoskop.org/File:Ellul_Jacques_The_Technological_Society.pdf)
Hasselbalch, G., (2021) Data Ethics of Power, A Human Approach in the Big Data and AI Era. Edward Elgar Publishing. Cheltenham.
Hendrycks, D., (2025) Introduction to AI Safety, Ethics, and Society. CRC Press. Boca Raton. FL.
Madsbjerg, C., (2017) Sensemaking, The Power of the Humanities in the Age of the Algorithm. Hachette Books. New York.
Postman, N., (1992) Technopoly, The Surrender of Culture to Technology. Vintage Books. New York. (https://interesi.wordpress.com/wp-content/uploads/2017/10/technopoly.pdf)
Powell, J., and Kleiner, A., (2023) The AI Dilemma, 7 Principles for Responsible Technology. Berrett-Koehler Publishers. Oakland CA.
Richterich, A., (2018) The Big Data Agenda, Data Ethics and Critical Data Studies. University of Westminster Press. London.
Santoni de Sio, F., (2024) Human Freedom in the Age of AI. Routledge. New York.
Vallor, S., (2024) The AI Mirror, How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press. London.
More on this coming out in the SPoR Newsletter this week.
Rob Long says
Machines don’t ‘learn’. Perhaps you might like to predict where the stock market will be in a month and put money down to back it up? Perhaps you might like to predict Donald Trump’s decisions tomorrow, especially when he doesn’t know what they are. The meaning of the words ‘predict’ and ‘prediction’ is about certainty in telling the future. An estimate or guess is not a prediction, modelling is not prediction, nor is giving some statistical probability a prediction. I have no interest in ‘blanket skepticism’, and the videos you commented on make that clear. What I am interested in is whether people can get away from the mythology of AI and faith in ‘Technique’ and actually engage with others in the realities of engagement and relationships. AI never will, nor ever can, connect with humans as humans do.
Rob Long says
Any denial of mortality, randomness and fallibility is delusion. Any claim to prediction of the future is fraudulence.
James Henderson says
Denying the possibility of useful prediction is like saying weather forecasts are fraudulent because they are sometimes wrong.
The statement “Any claim to prediction of the future is fraudulence” is an absolute claim, and, therefore, false. Scientific disciplines and machine learning use models to predict future outcomes within quantified margins of error.
You seem to equate prediction with absolute certainty, which is not how machine learning prediction works. Bayesian models, machine learning algorithms, and statistical inference do not deny randomness; they incorporate it.
My comment never denied randomness or fallibility. Machine learning models often explicitly account for uncertainty through distributions, Bayesian inference, and confidence intervals.
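To make that concrete, here is a minimal sketch (in Python, using scipy) of what a prediction that incorporates uncertainty can look like: a simple Beta-Binomial update that returns an interval rather than a single certain number. The counts and variable names are invented purely for illustration, not drawn from any real dataset.

```python
# Minimal sketch: Bayesian update of an incident rate with a credible interval.
# All counts below are invented for illustration only.
from scipy import stats

prior_a, prior_b = 1, 1             # uniform Beta(1, 1) prior on the incident rate
incidents, clear_shifts = 3, 997    # hypothetical observed outcomes

# Posterior for a Beta-Binomial model is Beta(prior_a + incidents, prior_b + clear_shifts)
posterior = stats.beta(prior_a + incidents, prior_b + clear_shifts)

point_estimate = posterior.mean()          # single 'best guess' rate
lower, upper = posterior.interval(0.95)    # 95% credible interval around it

print(f"Estimated incident rate: {point_estimate:.4f}")
print(f"95% credible interval: ({lower:.4f}, {upper:.4f})")
```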
You are arguing against a strawman: no legitimate machine learning scientist or statistician claims absolute foresight or omniscience. But that does not mean they cannot model future possibilities with statistical rigor.
If you want your argument to be persuasive, you need to move beyond blanket skepticism and engage with the nuances of statistical inference and probability.
Andy says
Bayesian models update prior beliefs with new evidence, but that depends on a meaningful understanding of the starting conditions. How do I statistically establish that with another person: their experience, perception, worldview? I can’t inhabit their embodied being. I can’t live their history. And how would I extend that across a work group of twenty, with all the messy, shifting social dynamics constantly in play?
A wrong weather forecast cancels my BBQ: a disappointment.
A wrong risk prediction in a workplace, one that falls outside the quantified margin of error? That emerges very differently.
Maybe the problem isn’t the statistics.
Maybe it’s that we’re attempting to use mathematical tools to answer fundamentally human questions about engaging with risk.
James Henderson says
Your argument raises an interesting point about the complexity of human experience, and I agree that human factors—like perception, worldview, and social dynamics—pose challenges for any form of risk assessment.
However, the challenge you describe is not exclusive to Machine Learning or Bayesian inference—it applies to any quantitative risk assessment, or qualitative for that matter.
All risk models are fallible to some degree, whether purely human-designed or augmented by machine learning. The key is to understand the limitations of the respective model(s).
To your point about industrial safety: AI excels at identifying slow-burn risks that might go unnoticed when viewed only through human perception or traditional methods. For instance, in estimating ergonomic risks and the factors contributing to musculoskeletal disorders, the perception of ergonomic activity can be subjective and difficult to quantify on a large scale. So, people often rely on historical statistics, like lost-time injury frequency rates. However, these statistics are typically lagging indicators.
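As a side note, the lagging nature of a statistic like the lost-time injury frequency rate is visible in how it is computed: it counts only injuries that have already occurred. A minimal sketch, with invented figures:

```python
# Minimal sketch: LTIFR is computed entirely from injuries that have already
# happened, which is why it is a lagging indicator. Figures are invented.
def ltifr(lost_time_injuries: int, hours_worked: float) -> float:
    """Lost-time injury frequency rate per one million hours worked."""
    return lost_time_injuries / hours_worked * 1_000_000

print(ltifr(lost_time_injuries=4, hours_worked=2_500_000))  # -> 1.6
```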
The role of AI should not be to replace human judgement but to augment it: helping to build a more complete picture of risk, at scale, and comprehending the extent and scale of risk exposure so that humans can take preventative action So Far As Is Reasonably Practicable.
Rob Long says
Andy, I love Kierkegaard, who stated that life can only be understood backwards but only lived forwards. And this is what we do when we feed endless data into AI. Of course, AI doesn’t know what life is and never will, because it is not alive and not embodied. And, without emotions, it can never ‘learn’. It certainly can’t ‘live’ forwards.
When we look at all the language surrounding AI hype about prediction, it’s always conditional. In that way, whatever the model, projection or estimation, it’s never a prediction but rather a bet each way. It reminds me so much of the religious conditions placed on prayer: it is always nice to make some projection about god so that, when it doesn’t eventuate, it was a lack of faith. We see this in the language of prediction in AI, which sounds so much like the religious language of faith.
James Henderson says
This claim that “AI can’t predict” is absurd and demonstrates a fundamental misunderstanding of the field. Every equation is a prediction, including something as simple as gravity and Newton’s equations of motion.
Has the speaker ever heard of Bayesian statistics, conditional probabilities, or estimation-of-distribution algorithms? Sure, these technologies can’t, and won’t, know what will make the evening news tomorrow, they can’t fortune-tell, and they can’t estimate black-swan events. And no-one reputable building the technology is implying that they can, even when using the word “predict”. But in a bounded dynamical system, with known deductive or inductive relationships, these systems absolutely can produce inferentially valid state estimates, complete with epistemic uncertainty bounds. It is absolutely reasonable to use the word “predict” for that.
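For illustration only, here is a minimal sketch of that kind of “prediction”: a one-step Kalman-filter prediction for a simple linear system, where the point estimate is propagated forward together with a quantified uncertainty. The model and all numbers are invented, not taken from any safety application.

```python
# Minimal sketch: one-step Kalman-filter prediction for a position/velocity
# state with known linear dynamics. All numbers are invented for illustration.
import numpy as np

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # constant-velocity model, time step = 1
Q = np.eye(2) * 0.01            # process-noise covariance

x = np.array([0.0, 0.5])        # current state estimate [position, velocity]
P = np.eye(2) * 0.1             # current uncertainty (covariance) of that estimate

# Prediction step: the estimate is propagated AND its uncertainty grows
x_pred = F @ x
P_pred = F @ P @ F.T + Q

print("Predicted state:", x_pred)                        # point estimate
print("Predicted std devs:", np.sqrt(np.diag(P_pred)))   # quantified uncertainty
```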
This speaker is likely conflating “prediction” with some kind of mystical clairvoyance, overloading the word “predict” with fortune-telling. It’s a classic case of interdisciplinary miscommunication. A scientist hears “prediction” and thinks of Bayesian inference and dynamical systems; a humanities professor might hear “prediction” and think of Nostradamus.