When an industry is bogged down in engineering, behaviourism and zero, it is unlikely there will be any vision. Nothing is more crippling to vision than the toxicity of zero vision, the global safety mantra.
We see this lack of vision in the recent release of the so-called ‘Australian Work Health and Safety (WHS) Strategy 2023–2033’. When Safety gets into its mono-disciplinary cocoon and only consults within its own narrow band of conformance, there can be no vision.
Just because the language of ‘ambitious vision’ is used doesn’t mean there is any ambitious vision. Just more safety code (https://safetyrisk.net/deciphering-safety-code/) for more of the same.
The language on the opening page is a sure give-away: ‘regulatory framework’, ‘measuring against targets’ and ‘performance’. The framework for this so-called ‘vision’ is contextualised entirely in a discourse on injury rates: the same old tired myth that injury rates are a measure of the presence of safety.
There is no mention in this document of a moral, ethical or person-centric humanising discourse. Indeed, there is no mention at all of perhaps the greatest future threat to safety, particularly in relation to so-called ‘Psychosocial hazards’: the brutalising of persons through AI data.
I have already had many safety advisors contact me after their organisations have asked them to record Psychosocial ‘hazards’ on a ‘hazard register’.
Here we have an industry now venturing into the complexities of Psychosocial and mental health with no idea what it is doing. Can you just imagine what data will now be collected under this new-found focus? Can you just imagine what will happen to the data collected on these ‘hazards’ and how this ‘data’ might be used? And under what expertise will a safety advisor decide that something is a Psychosocial and mental health ‘hazard’? A three-day course?
Of course, the SWA strategy recognises the emerging challenges of AI but NOT the greatest risk associated with AI – data ethics. The strategy states this:
‘Rise of artificial intelligence (AI), automation and related technologies
New technology capabilities can bring many benefits, including safer work and workplaces. But they need to be designed and have appropriate oversight to ensure workers are not exposed to new or additional WHS risks.
For example, while automation could replace some dangerous manual tasks (decreasing worker exposure to physical risks), workers overseeing the technology could be exposed to more psychosocial hazards resulting from increased or more complex interpersonal interactions as part of their job role.’
Here we have an industry aware of the power of AI, with no mention at all of the greatest threat/harm AI poses in safety: the unethical use and abuse of power through Psychosocial data. All this paragraph acknowledges is ‘many benefits’. The naivety is breathtaking.
You know, of course, that no one involved in this so-called ‘strategy’ has read Hasselbalch (2021), Data Ethics of Power: A Human Approach in the Big Data and AI Era. Neither is there any discussion of the nature of power in the AIHS BoK chapter on Ethics. Similarly, there is no discourse on power in any of the Codes of Practice on Psychosocial and Mental Health.
What a marvellous set up for the abuse of power.
And with all the data collection on ‘Psychosocial hazards’ to come, the mind boggles at what such an incompetent industry will do.
I know, let’s write a book on the Invasion of Ukraine but don’t mention the war.
There is no such thing as neutral data, objective data or AI that can make a moral decision. Hasselbalch (2021) comments:
‘When a society defines a specific group of people or community of people as a ‘problem’ to be solved, for example, data technology and systems will be designed to ‘target’ this problem in a sophisticated manner, as we allow it to be’.
Now that Safety has entered the territory of Psychosocial health (‘hazards’), those who are ‘hazards’ will be named and defined by the hazard register. I can’t think of a better strategy to ensure that no-one will speak up about Psychosocial need.
Can you just imagine where this will go in an industry that has absolutely no interest in ethics? Can you just imagine how unethical this will be in an industry consumed with simplistic thinking about data and AI? Moreover, look at what already happens in an industry addicted to data collection, counting and the justification of any action in the name of ‘safety’. There is nothing more dangerous than an organisation with a passion and obsession for safety, used to justify any unethical practice.
Good old Safety brutalising persons in the name of good.
There is simply no cognisance at all in this so-called SWA ‘strategy’ of how Psychosocial data may be used to abuse workers. This comes from an industry that is yet to develop a mature approach to an ethic of risk.
Ah, yes, and this is the document of ‘ambitious vision’.
As we get to the end of the document (p.13), we see the 5 actions to achieve the strategy:
01: Information gathering (whose information?)
02: National Coordination (of data?)
03: Data and Intelligence Gathering (on whom?)
04: Health and safety leadership (of course it won’t be)
05: Compliance and enforcement (the old safety favourites)
The language is so instructive.
BTW, there is no such thing as neutral information. Neither is there information or data that is used objectively. Of course, the ‘National Coordination’ of ‘Data and Intelligence’ (that is not intelligent) can only match the design and bias of the system. And when people are defined as ‘hazards’, we all know where this will lead. Anything that emerges out of such naivety will NOT be leadership, and we know this is qualified by the language that follows – Compliance and Enforcement. Whoo!
Get out those hazard registers and enforce those hazards who are non-compliant.
This is all scary stuff.
As with all strategic statements, we forget what the last one was about. We forget its measures and accept that strategies rarely work, rarely achieve their lofty aspirational goals, and amount to little more than the noise of ‘motherhood’ statements.
Of course, there is nothing in this so-called ‘strategy’ that even entertains the need for curriculum reform. This is perhaps the most urgent need for an industry that loves to use the word ‘professional’ but has no ethic with which to act professionally. So, another 10 years of more of the same. More counting, more data collection and more brutalism.
If you do wish to learn a positive, progressive, practical and empowering approach to risk, there is a SPoR curriculum (https://cllr.com.au/register-to-study/) that tackles the problems of unethical practice and naivety about professional ethics.
Do you have any thoughts? Please share them below.