Theme – The Delusions of AI, Risk and Safety
‘Congratulations, we have just shifted all the Psychosocial data we hold on you into our hazards register’.
Can you just imagine this message being forwarded to you any time soon, as Safety takes on the task of implementing ISO 45003 and the Codes of Practice associated with ‘controlling’ Psychosocial ‘hazards’?
The recent move by Safety into the field of Psychosocial Health with ‘ISO 45003 Occupational health and safety management – Psychological health and safety at work – Guidelines for managing psychosocial risks’ is fraught with problems, not least of which is the ethical management of data.
Other concerning aspects of this new standard and associated Codes of Practice have been discussed here:
For the purpose of this Newsletter my focus is on challenges associated with the Data Ethics of Power and Big Data Sociotechnical Systems (BDSS).
In an industry that is yet to tackle an ‘ethic of risk’ or anything of any maturity about Ethics at all, this venture into Psychosocial health and the recording of Psychosocial issues as ‘hazards’ poses huge questions about confidentiality, trust, confession, openness, honesty and power.
In all I have read in ISO 45003, the Codes of Practice on Psychosocial health and anything on Ethics (e.g. the AIHS BoK Chapter on Ethics, https://www.ohsbok.org.au/chapter-38-3-ethics-and-professional-practice/), there is simply no discussion of the nature of power. Yet the very essence of Psychosocial stress is the abuse of power.
Similarly, there is no discussion of the politics of power or the abuse of power in any Safety publication that raises the subject of ‘duty of care’ or moral obligation. Most often the notion of moral obligation or ‘duty of care’ is focused on the Act and Regulation, NOT the person abused or violated. The focus of a ‘duty of care’ is on legal obligation, not an ethic of care or caring.
So, in the absence of an Ethic of Risk or any thought of a ‘Data Ethics of Power’, what is going to happen with the data collected on Psychosocial health as ‘hazards’ in the safety industry?
I cannot raise all the issues associated with this question in this Newsletter, but if you want to read further, here is a good source to commence your introduction to the problem of a Data Ethics of Power:
Hasselbalch, G. (2021) Data Ethics of Power: A Human Approach in the Big Data and AI Era. Edward Elgar, Cheltenham UK.
Recently in Australia we have witnessed the vicious abuse of power inflicted on vulnerable persons by the Robodebt scheme, enacted by the previous conservative government (https://robodebt.royalcommission.gov.au/).
The Robodebt scheme highlights the problem of the Data Ethics of Power and Big Data Sociotechnical Systems (BDSS) in the abuse of persons. Robodebt was concocted by the conservative government to target the poor and vulnerable who were supposedly ‘ripping off’ the welfare payment system in Australia. Robodebt is, symbolically, the enactment of conservative mythology’s anger against the poor.
At the heart of the Robodebt system was the creation of an algorithm using Big Data systems to victimise welfare recipients. It is estimated that this system resulted in at least 2,000 suicides (https://www.abc.net.au/triplej/programs/hack/2030-people-have-died-after-receiving-centrelink-robodebt-notice/10821272). When it comes to ideology, the last thing that matters is safety!
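The mechanics of the flaw are worth spelling out. Robodebt took a person’s annual income as reported to the tax office, averaged it evenly across 26 fortnights, and compared that average with the fortnightly income the person had actually reported, so anyone with irregular or seasonal earnings could be issued a debt they never owed. The sketch below is a minimal illustration of this averaging flaw only; the benefit rate, income-free area, taper and earnings figures are all hypothetical, not the actual Centrelink rules:

```python
# Hypothetical illustration of a Robodebt-style "income averaging" check.
# A casual worker earns all their income in half the year and legitimately
# receives benefit in the fortnights they earn nothing.

FORTNIGHTS = 26
INCOME_FREE_AREA = 300.0   # hypothetical: earnings below this reduce no benefit
TAPER = 0.5                # hypothetical: benefit reduced 50c per $ above free area
BASE_RATE = 550.0          # hypothetical fortnightly benefit rate

def entitlement(fortnightly_income: float) -> float:
    """Benefit payable for one fortnight under a simple means test."""
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, BASE_RATE - TAPER * excess)

# Actual earnings: $2,000 per fortnight for 13 fortnights, then nothing.
actual = [2000.0] * 13 + [0.0] * 13

# What the person was correctly paid, fortnight by fortnight.
paid_correctly = sum(entitlement(x) for x in actual)

# Robodebt-style check: smear the annual total evenly across 26 fortnights.
averaged = [sum(actual) / FORTNIGHTS] * FORTNIGHTS   # $1,000 every fortnight
deemed = sum(entitlement(x) for x in averaged)

# The gap is a manufactured "debt" for someone who broke no rules.
false_debt = paid_correctly - deemed
print(f"Correctly paid:          ${paid_correctly:,.2f}")
print(f"Deemed under averaging:  ${deemed:,.2f}")
print(f"Manufactured 'debt':     ${false_debt:,.2f}")
```

Even in this toy version, the averaged figure flattens the fortnights of zero income in which benefit was lawfully paid, and the difference is then asserted against the person as an overpayment.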
In June 2021, a Federal Court Judge approved a settlement worth at least A$1.8 billion for people wrongly pursued by the conservative Government’s Robodebt scheme.
Early in the development of the Robodebt scheme, advice was given that the scheme was illegal; this advice was ignored by Ministers and senior public servants (https://www.afr.com/politics/federal/tudge-never-queried-legality-of-robo-debt-commission-hears-20230131-p5ch0s; https://the-riotact.com/minister-vowed-to-double-down-on-robodedt-even-when-told-it-was-illegal-royal-commission-hears/639458).
At the heart of the Robodebt scheme was conservative neoliberal ideology that created a Big Data Sociotechnical System (BDSS) that enabled the unethical use of power. Once such a system is put into play, Big Data becomes an automated vehicle that enacts the vices of the ideology. This enabled conservative politicians to wash their hands of any ethical responsibility while the computer systems were allowed to take over.
We have seen similar abuses of data systems in the Cambridge Analytica scandal (https://en.wikipedia.org/wiki/Cambridge_Analytica), the Snowden affair (https://www.tandfonline.com/doi/full/10.1080/23753234.2020.1713017), the COMPAS algorithm (https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/) and countless other examples of the abuse of power using Big Data Sociotechnical Systems (BDSS).
Hasselbalch (p. 87) calls these ‘Destiny Machines’ to highlight the way BDSS shape the destiny and misery of persons.
I just watched the miniseries The Capture, which proposes clandestine work by spy agencies using ‘deep fake’ technology in real time. However, whilst it’s one thing to enjoy the imaginations of film and script writers, it is quite another to enter the QAnon world of attributing conspiracy to such scripts as covert reality, and then to witness the enactment of a terrorist attack (https://www.theguardian.com/australia-news/2023/feb/16/wieambilla-shootings-australia-christian-terrorist-attack-queensland-police) based on ideology and misinformation.
We know that films and series such as The Capture, The Matrix, Black Mirror and The Social Dilemma, and books like The Black Box Society, capture the Mind of popular culture and offer solace to the religious imagination (Ostwalt (2012) Secular Steeples; Lyden (2003) Film as Religion). Much of this genre helps generate the myth of the human brain-as-computer and the absurd ideas of Transhumanism (https://www.telecomreview.com/articles/reports-and-coverage/3925-digital-humanism-the-extent-of-our-hyper-digital-reality). Transhumanism is a faith-cult, just as the language of the mechanisation of humans is pure nonsense. Similarly, the mythology of ‘machine learning’ helps fabricate nonsense mythologies about AI. It was Baudrillard who aptly commented: ‘The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence’.
I wrote about the problem of brain-centrism and so-called ‘machine learning’ in my books Envisioning Risk (pp. 14-17) and Tackling Risk (pp. 24-28). The top 10 movies of all time are focused on metaphysical themes, as are many miniseries; Stranger Things is a good example. All of this feeds into the creation of belief about AI and shapes how people define personhood.
What follows in the wake of this mythology about AI is a faith-belief about reality. We see this in the current fixation with AI and programs like ChatGPT.
We have all been warned about the trajectory of this mythical thinking by Madsbjerg (2017) in Sensemaking: What Makes Human Intelligence Essential in the Age of the Algorithm (https://www.blinkist.com/en/books/sensemaking-en) and by Larson (2021) in The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do.
It is at this point a small dose of reality is helpful (https://www.spiceworks.com/tech/artificial-intelligence/articles/common-myths-about-ai/).
All this current excitement about AI is like a cultic illness. The absurd mythology about the capability of AI, the exaggeration of that capability and the associated delusions should be ringing ethical alarm bells for all of us. More so when one considers that the safety industry now wants to record Psychosocial data as ‘hazards’.
What hope has Safety got for critical thinking when so many believe that The Matrix is real?
What hope has Safety got of behaving ethically and professionally with the data of persons as ‘Psychosocial hazards’ when it has no interest in Ethics? What hope can there be for the ethical use of data when Safety has such a love affair with Big Data, engineering and so-called ‘predictive analytics’? A few examples tell the story:
Just imagine this faith-lust for ‘control’ (the darling focus of Safety) coupled with this newfound desire to ‘control’ Psychosocial hazards, again with no discussion of a Data Ethics of Power or the unethical abuse of Big Data Sociotechnical Systems (BDSS).
The very use of the language of ‘hazards’ with regard to Psychosocial health poses enormous problems for the use of data, and for the likely suppression of any confession of abuse in the workplace, especially now that such data triggers reporting to the Regulator!
So, we already have the AIHS and others focusing on the nonsense language of ‘futureproofing’ (https://www.aihs.org.au/events/nsw-safefest-future-proofing-our-profession; https://www.linkedin.com/pulse/future-proofing-safety-health-phil-walton), complete with the pooling of ignorance by amateurs projecting about professionalism. Keen (2008) tells us all about this in The Cult of the Amateur. And one can be sure that the words ‘surveillance’ and ‘ethics’ are nowhere to be found.
Without an Ethic of Risk safety will never be professional. Without an Ethic of Personhood, Safety will never stop its dehumanisation of persons in the name of safety, zero and ‘duty of care’.
Just imagine the noise of all these meaningless ‘speak up’ campaigns (https://www.safework.nsw.gov.au/search?query=Speak+Up) when workers learn that any Psychosocial information they give becomes a registered ‘hazard’? Just imagine linking the ‘Speak Up app’ to the Psychosocial hazards register with no ethic to guide what happens next? Just imagine what happens to the data without a Data Ethics of Power?
Without a Data Ethics of Power one can be sure that the new venture by Safety into Psychosocial ‘hazards’ will create a new data nightmare (https://safetyrisk.net/welcome-to-the-nightmare-safety-creates-its-own-minefield-as-usual/).
Of course, it doesn’t have to be this way. In SPoR the study of Ethics is foundational to the enactment of risk. Indeed, the workshop on an Ethic of Risk is currently being conducted for free and is oversubscribed (https://safetyrisk.net/free-online-workshops/). One can find more on the Workshop here: https://cllr.com.au/product/an-ethic-of-risk-unit-17/
The workshop enables a positive, practical and comprehensive approach to risk as if persons and ethics matter.
|Dr Long’s Workshops in Vienna June 2023
This will most likely be Dr Long’s last visit to Europe. Those who wish to meet and learn with Dr Long (and Matt Thorne from Risk Diversity https://www.riskdiversity.com.au/) can start making plans for this series of Workshops, to be held on 26-30 June 2023. You can get details and book your place here:
The two workshops on offer (and free semiotic walk) are:
· Advanced Semiotics and
· Advanced iCue Method
The venue in Vienna is:
We already have 12 registrations, so it looks like this is developing into a wonderful event.
· Semiotics Introduction: https://vimeo.com/135437986
· iCue Overview: https://vimeo.com/777948243
· iCue Demo Part 2: https://vimeo.com/782728022
· Advanced iCue: https://vimeo.com/783101851
You can also see other examples of iCue here:
This is a great opportunity to meet people practicing SPoR methods in their organisations and to understand what SPoR is about (https://safetyrisk.net/what-is-spor/).
The workshop outline and registration is here: https://www.humandymensions.com/vienna-workshops/
|SPoR Community Network by Matt Thorne
After long discussions with Rob Long, Craig Ashhurst and others in the Centre for Leadership and Learning in Risk (CLLR), we have made the decision to develop a connected approach to communications for the Social Psychology of Risk by creating the SPoR Community Network. I (Matt Thorne, Adelaide) have offered to be the conduit and manage this.
Recently, there has been much happening in SPoR across the world, from Brazil, Austria, Norway, the UK, Canada, Australia and New Zealand. Some of the ideas and methods being developed are fantastic and worth sharing. For example, our friends in Brazil are currently learning how to include Dr Ashhurst’s ‘Niche Wicked Problems Model’ in an integrated way in SEEK Investigations with Advanced iCue Engagement. The results are amazing. Dr Nippin Anand is also doing great things in iCue Workshops across Europe, and Brian Darlington is continually innovating at Mondi, again developing practical approaches worth sharing. Rob and Nippin are doing things together in India, and some wonderful innovations are being developed here in Australia in SPoR.
I am a High School drop-out and have learned much through the school of ‘hard knocks’. I have learned through SPoR how important it is to read but I am certainly no academic. I have held several careers throughout my life and now find myself consulting in risk and safety in Adelaide, South Australia.
I discovered SPoR like many of you through this blog and through attending a free workshop. I have never looked back and have benefitted greatly from the support and sharing of many people who I have met through SPoR.
I discovered early that SPoR is NOT about academic things but rather about what works in tackling risk as if persons matter. One important attribute is a willingness to learn. SPoR is not the place to come and tell all you know about engineering or safety.
So, whilst there will be some sharing of what some highly credentialed people are doing, it is also a Community Network where you will learn what High School drop-outs like me are doing to humanise risk in the workplace. You will get to hear what people are doing in SPoR in Brazil, New Zealand, Austria, South Africa, Canada and many other places, across many industries, and share ideas and practical methods that you can use.
Part of the reason for the creation of this community network is to share but also ensure the continuance of the great work started by Dr Long 20 years ago. Rob also wishes for SPoR to be the focus not himself as he drifts slowly into retirement. So, some key questions to consider:
How can we best learn from each other and support each other in SPoR? How will the Body of Knowledge of SPoR better penetrate the Risk and Safety world? How will we ask questions of each other to keep consistency in approach? How will we maintain the integrity of IP and methodology and methods as SPoR grows?
These things can only continue to grow with integrity (and ethical professional practice) if we hold to community values and continue the conversations.
· Latest research
· SPoR sessions being held UK, Europe, Australia, Brazil and North America
· Training in SPoR tools people may not be aware of
· Observations from fellow proponents of SPoR who are implementing SPoR tools in situ
· Free online sessions/workshops advanced notice
· Book Reviews
· Online resources such as podcasts, training modules
· Mentoring and help with issues and problems, which can be canvassed to a forum of understanding SPoR associates
What do you need to do to join the SPoR Community Network?
Simply let me know that you want to join the SPoR Community Network from your favourite email address and we will put you on the list. Post to me:
Once we have begun the list I will put out a newsletter on a regular basis to let you know of SPoR happenings as listed above.
BTW, this will be an open group and transparency is important. If there are any sensitivities about the company or organisation you work for, please let me know and we can keep such things confidential for you.
Alternatively, it is great to know what companies and organisations are actually practicing SPoR and how things are going.
|Testing the Cognitive Abilities of the Artificial Intelligence Language Model GPT-3
Researchers at the Max Planck Institute for Biological Cybernetics in Tübingen have examined the general intelligence of the language model GPT-3.
Using psychological tests, they studied competencies such as causal reasoning and deliberation, and compared the results with the abilities of humans.
Their findings paint a heterogeneous picture: while GPT-3 can keep up with humans in some areas, it falls behind in others, probably due to a lack of interaction with the real world.
We know a great deal about what ChatGPT can do, but very little is being discussed about what it can’t do and will never do.
We know that GPT-3 can be prompted to formulate various texts, having been programmed for this task by being fed large amounts of data from the internet. It can write articles and stories that are (almost) indistinguishable from human-made texts, and can also master challenges such as math problems or programming tasks.
However, we know that human language is embodied; that is, we learn speech, gesture, text, writing, symbols, signs, sounds, hearing, cultural nuance, metaphor, communication and interconnected embodied emotions in an integrated and embodied way. Human beings are intercorporeal; language is not learned as just a brain, computer-like function. Extensive research by Fuchs and others shows how language is learned and enacted:
Of course, because computers will never be embodied, have emotions, have fallibility (and human error), have an unconscious, dream or experience intercorporeality, they will never know a human language model of meaning, interaffectivity or intercorporeal reality (https://www.academia.edu/30974462/Intercorporeality_and_Interaffectivity). Similarly, a computer will never know the Psychosocial stressors associated with risk such as abuse, depression, anxiety, fear, loss, care, love, trust etc.
It was Chomsky who destroyed the silly behaviourist theory of language acquisition (https://www.jstor.org/stable/27758883). And since then, with advanced research in neuroscience, we have learned much more about the embodied nature of human being.
The behaviourist brain-centric model of understanding humans is hopelessly flawed. Indeed, dangerous.
The mythology about AI possessing ‘human-like’ cognitive qualities is of course nonsense. A computer can never experience the world as a human person does. All a computer does is store data and text as data. It cannot interact emotionally with the world and so cannot ‘learn’ through all the integrated and intercorporeal means as listed above. All it can do is ‘in and out’, a wonderful metaphor for a behaviourist understanding of the world.
Of course, it didn’t take long for the researchers at the Max Planck Institute for Biological Cybernetics in Tübingen to discover that ‘actively interacting with the world is crucial for matching the full complexity of human cognition’.
Similarly, a computer cannot act independently in a moral or ethical manner; this can only be programmed in by a biased human, and hence the computer uses the morality that is programmed into it. There is no such thing as a neutral and objective morality by computer. Again, more mythology developed by faith-belief in nonsense ideas such as ‘machine learning’. The repetition of algorithms, and algorithms that generate new algorithms, is NOT learning. There is no learning without movement and risk.
|Over 500,000 Downloads
As of this newsletter we have just tipped over 500,000 downloads of books, videos and other giveaways. You are probably receiving this newsletter because you have availed yourself of this free material, or maybe you have joined one of the free workshops. Whatever the reason, we try to ensure that cost is no impediment to learning about SPoR.
At SPoR, we seek to stand out and offer an alternative to mainstream traditional safety (S1&2) and to the marketing of snake oil and the commercialisation of safety. We know there are no silver bullets in risk and safety, so we don’t charge as if silver bullets are real.
|Infants Outperform AI in ‘Commonsense Psychology’
It is no surprise then that infants easily outperform AI in simple perceptions such as understanding what motivates others. Understanding, perception and emotional connection are essential for social meaning.
Just consider for a few minutes that young infants, without any speech, text or writing ability, know how to communicate with others through gesture, emotion, intuition and feeling years before they need text or speech. They learn in an embodied way to quickly recognise by inference what motivates others and themselves in the world. AI simply cannot make such inferences and never will, as it has no emotions, body or unconscious with which to be a socially related fallible being.
Machines cannot draw on the same abilities as an infant to ‘know’ and ‘learn’.
If you really want to know about the power of human intuition, read Gilles Deleuze’s wonderful work Bergsonism (https://monoskop.org/images/2/2d/Deleuze_Gilles_Bergsonism.pdf).
Recent research at NYU’s Center for Data Science and Department of Psychology demonstrates that AI cannot have social intelligence as an infant does. As the researchers state: ‘What AI lacks is flexibility in recognizing different contexts and situations that guide human behaviour’.
To develop a foundational understanding of the differences between humans’ and AI’s abilities, the researchers conducted a series of experiments with 11-month-old infants and compared their responses to those yielded by state-of-the-art learning-driven neural-network models.
To do so, they deployed the previously established “Baby Intuitions Benchmark” (BIB)—six tasks probing commonsense psychology. BIB was designed to allow for testing both infant and machine intelligence, allowing for a comparison of performance between infants and machines and, significantly, providing an empirical foundation for building human-like AI.
Infants demonstrate such predictions through longer looking at events that violate their predictions, a common and decades-old measurement for gauging the nature of infants’ knowledge.
Adopting this “surprise paradigm” to study machine intelligence allows for direct comparisons between an algorithm’s quantitative measure of surprise and a well-established human psychological measure of surprise—infants’ looking time.
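The logic of the comparison described above can be made concrete. A model’s ‘surprise’ at an observed event is commonly quantified as the negative log-probability the model assigned to that event, and this number is set against infants’ looking times for the same events. The sketch below is only a hedged illustration of that comparison; the probabilities, looking times and trial description are invented for illustration, and this is not the actual BIB code:

```python
import math

# Hypothetical sketch of the "surprise paradigm" comparison used with the
# Baby Intuitions Benchmark (BIB). A model watches an agent act, assigns a
# probability to each possible outcome, and its "surprise" at the outcome
# it then observes is taken as -log p(outcome). Infants' surprise at the
# same events is measured as looking time in seconds.

def model_surprise(prob_of_outcome: float) -> float:
    """Surprise = -log p(outcome): low for expected, high for unexpected."""
    return -math.log(prob_of_outcome)

# Hypothetical trial: an agent has repeatedly reached for object A.
# Expected outcome: it reaches for A again. Violation: it reaches for B.
p_expected, p_violation = 0.9, 0.1

surprise_expected = model_surprise(p_expected)    # ~0.105, low surprise
surprise_violation = model_surprise(p_violation)  # ~2.303, high surprise

# Hypothetical infant looking times (seconds) for the same pair of events;
# infants characteristically look longer at the violation.
looking_expected, looking_violation = 6.0, 11.5

# An "infant-like" model should be MORE surprised by the violation,
# mirroring infants' longer looking at violation events.
infant_like = surprise_violation > surprise_expected
infants_look_longer = looking_violation > looking_expected
print(infant_like and infants_look_longer)
```

The benchmark's reported finding is then simply that current models show no reliable surprise gap between expected and violation events, where infants reliably do.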
The models showed no such evidence of understanding the motivations underlying such actions, revealing that they are missing key foundational principles of ‘commonsense psychology’ that infants possess.
|New Video Series on Psychosocial Health with Craig Ashhurst, Greg Smith, Dr Long and Others
If you have not yet subscribed to the Centre for Leadership and Learning in Risk (CLLR) Vimeo site (https://vimeo.com/cllr) you may not know of the recent series of discussions about Psychosocial Health.
The CLLR site has many other videos, series and free workshop modules on: Introduction to SPoR and the Due Diligence workshop with Greg Smith and Dr Long. Other free modules will be released in 2023.
It is easy to register as a follower on the CLLR site and videos are updated with new videos added regularly.
One of the delusions of faith in AI and big data is the supposed humanisation of machines, based on a closed understanding of what it is to be human. We see this in the following research, Robot Helps Students With Learning Disabilities Stay Focused (https://neurosciencenews.com/qt-social-robot-learning-22539/), based on an extremely limited understanding of cognition.
Of course, any analysis of this article demonstrates the use of language to disguise assumptions about machines, e.g. ‘social robot’. Sociality relies on the capability for embodied relational knowing, something a robot cannot do. Without genuine emotions and feelings a robot cannot socialise. This is why the ‘A’ in AI stands for ‘artificial’: it is not real.
What is most problematic with all this faith in AI is that it diminishes the importance of human embodied engagement. Sure, a robot can issue text as language but it cannot ‘sense’ emotional need or ‘touch’ the other. Let’s look at the opening statement:
‘Engineering researchers at the University of Waterloo are successfully using a robot to help keep children with learning disabilities focused on their work’.
The first question is the source: engineers! The second question concerns the notion of ‘success’, the third the notion of ‘helping’, and finally we need to question what it means to be ‘focused on work’. None of this language comes in a neutral or objective manner, but it masquerades as such.
Apart from novelty, why would one take away from the critical need for human relationships with disabled persons?
The article repeats the myth of ‘social robots’, just as engineers maintain the myth of ‘machine learning’. Of course, it takes those with no expertise in education or learning to use such language as if the regurgitation of data were learning, when it is not. The article even calls the robot ‘humanoid’.
The research project is premised on a behaviourist and engineering assumption of cognition and a mechanistic notion of learning. Parrot learning is not learning.
The real problem with this research is that it doesn’t understand that unless learning is ‘embodied learning’ it is just schooling. All learning involves movement, way beyond the recall of data or turning humans into the image of robots.
Unfortunately, too many allow the fiction of I, Robot (https://www.imdb.com/title/tt0343818/) and similar movies to dictate their thinking about human reality (https://link.springer.com/article/10.1007/s00146-021-01299-6). Perhaps a reading of Baudrillard on simulation and hyperreality might be useful: https://www.mlsu.ac.in/econtents/2289_hyper%20reality%20boudrilard.pdf; https://research.sabanciuniv.edu/id/eprint/42705/1/10337686.B%C3%BCy%C3%BCkko%C3%A7_S%C3%BCtl%C3%BCo%C4%9Flu_%C3%87a%C4%9Fla.pdf
|What is the Message for Risk and Safety about AI and Big Data?
There is so much in risk and safety that supposes that the regurgitation of data is knowing/learning. This is common in inductions, checklists and didactic methods of training. Then, when something goes wrong, the worker is castigated because they have not ‘learned’ the data. Similarly, teaching content without meaning and purpose is meaningless and not learning. Most of the time there is no consideration of non-conscious knowing or thinking, e.g. heuristics.
This brain-centric approach to cognition is based on an undisclosed methodology that seeks to define human persons as machines and Mind as computing.
None of this approach to persons or learning shows any appreciation of persons as social, emotional, relational beings. None of it demonstrates an understanding of the psychology of perception, motivation and goals.
None of this approach understands the importance of ‘being’ in the world. Hedonic experience is a biological and ecological process of living homeostatically in the environment. The driving energy of human self-regulation in relationship to other humans (homeostasis) and to everything in the world (allostasis) is about embodied being, not computing. This is why we cry, love music, dance, sing and get ‘filled with joy’ when we ‘see’ someone we ‘love’. It is why we show ‘pleasure’ at the ‘taste’ of intimacy. It is how we understand the gestural metaphor that communicates to the heart and gut.
One of the prophets of the 1960s was the French philosopher Jacques Ellul. By the word ‘prophet’ I don’t mean a fortune teller, but rather someone who could see the trajectory of something well before others could. One ‘foretells’ the future; the other ‘forthtells’ it.
Ellul published his work The Technological Society in 1964, well before we knew just how much the computer would redefine how we think of humans.
Ellul defines Technique as the ultimate quest for efficiency and attributes a power and energy to Technique as an Archetype because it has a life of its own, beyond the components of it.
What we see in the faith of engineers in AI was presented by Ellul 60 years ago. What we see in the need for a Data Ethics of Power was documented by Ellul 60 years ago. Rather than being liberated as humans into ethical relationship with others, we now see a co-dependence on Technique that dehumanises persons.
In Risk and Safety we see absurd levels of paperwork (read Paper Safe by Greg Smith) that don’t work, endless checklisting (that doesn’t work) and an inability to engage others in the basics of conversation, engagement and listening. Even those who use words like conversation just mean ‘telling’.
In the love and faith in Technique some have forgotten that technology serves us, we don’t serve technology. Similarly, we don’t serve systems, systems serve us.
You can download Ellul’s book and some of his other works here: https://monoskop.org/Jacques_Ellul
|Find the Cat Competition
|The prize for this competition is a base set of iCue magnets (presuming you know how to use them).
The first 5 correct entries will have a base set of magnets posted to them. Simply post your entry to firstname.lastname@example.org and include your snail mail address.
If you don’t know about SPoR iCue Engagement look here:
For those not familiar with the work of Dr Nippin Anand you may want to check out his wonderful site and work in SPoR:
You may be pleased to know that Nippin and Rob are doing work together in India.
|Workshops With Brazil and Love of Zero in Portuguese
|You may not know that a group of Brazilians in risk and safety have translated Dr Long’s book For the Love of Zero into Portuguese. You can download the Portuguese version here: https://mailchi.mp/c780a1e2b697/por-amor-ao-zero
You may also be interested in the group in Brazil who are currently studying extensively with Dr Long and currently doing a module on Wicked Problems, Semiotics and Zero. You can see these workshops here: vimeo.com/user/57711103/folder/14908052
Soon the group will be moving to the Advanced iCue workshop. If you are in Brazil and have an interest please contact: email@example.com
|Forward Notice Registration for Free Workshop in Holistic Ergonomics
Dr Long is pleased to announce that the Workshop Holistic Ergonomics will be offered for free in May 2023. The program will commence at 9am (Canberra time) on Tuesday 9 May and continue every Tuesday for 5 weeks.
You can find out more about Holistic Ergonomics here: https://cllr.com.au/product/holistic-ergonomics-unit-6/
In SPoR we understand that persons need to be addressed holistically and in a transdisciplinary way. In SPoR we do not exclude ways of understanding persons, nor do we countenance the behaviourist/engineering bubble of safety that treats people as objects and hazards.
This workshop program provides practical positive methods for engaging others in a holistic way as if persons matter in risk.
You can register your interest in this program by emailing: firstname.lastname@example.org. Places are strictly limited to 50 participants. Please do not take the place of someone who wants to learn simply because the program is free. Similarly, do not register for the program if you are unwilling to unlearn safety and relearn SPoR.
The Law and Due Diligence
Contacts and Websites