It is important not to confuse the idea of Technique with technology; they are not the same thing. Whilst technology can be used in the quest for Technique, Technique is a disposition towards life and living, not about the objects we use to achieve ultimate efficiency. Technique is an ideology that holds that any efficiency is good. In this sense Technique is a moral and ethical energy that places efficiency above everything, including the welfare of persons. Technique often appears in the workplace as a motive for good, yet it has the by-product of dehumanising people. This often comes in the language of ‘human factors’ and ‘resilience engineering’, which is not about humans but rather the efficiency of systems.
Postman called Technique ‘Technopoly’ (1992) (https://interesi.files.wordpress.com/2017/10/technopoly.pdf) and demonstrated how the ideology of Technique has a life of its own that demonises persons. Stupar (https://www.researchgate.net/publication/47658458_Living_in_the_technopolis_Between_reality_and_imagination) described the effects of Technique as living in a ‘Technopolis’.
In much of this discussion about Technique we see exhibited a great deal of hope and faith placed in technology, as if it were a neutral object. No object is neutral; each carries an ‘affordance’ in its design. An affordance is an energy designed into a product that invites a particular use. Indeed, the design of objects carries the philosophy of the designer. Affordances (https://monoskop.org/images/c/c6/Gibson_James_J_1977_1979_The_Theory_of_Affordances.pdf) and design carry intent, use and an offer to be used according to the design.
An affordance carries with it a design to be used in a certain way, and this also creates an ease of use. If an object is difficult to use, it gets thrown away. For example, if a chair is poorly designed and contributes to back pain, we find a better chair. However, we need to remember that technology and design are just tools for Technique.
What we observe with the harm of FIFO work is that the health and well-being of persons is the price paid for earning high incomes. FIFO is the result of Technique! It is far more efficient for companies to do Fly-in-Fly-Out than it is to establish communities in situ that offer genuine community life. Instead there is a host of products on the safety market (https://wideawakewellness.com.au/) that promote the idea of accepting the harm of FIFO and living with it. This also includes the promotion of amateur and dangerous approaches to mental health (https://safetyrisk.net/playing-with-mental-health-in-safety-is-dangerous/). The same applies to the efficiencies of ‘Hot Desking’ (https://safetyrisk.net/if-psychosocial-health-matters-stop-hot-desking/). Again, many organisations that espouse zero place the ideology of Technique above the humanising of persons. This is how Technique works.
In all of this we observe a profound ‘faith’ in Technique, most obvious in the naïve belief that Science or Technology will save us from our own self-destruction. I use the word ‘faith’ here because there is simply no evidence for such a belief. Indeed, the recent ‘faith’ in Artificial Intelligence (AI) is much the same; much of the faith held in AI is pure mythology. For further reading: Madsbjerg, Sensemaking: The Power of the Humanities in the Age of the Algorithm. We also see this in the nonsense language of ‘machine learning’. Machines don’t ‘learn’. Shifting cognitions and data is NOT learning.
Technique has a force and power of its own and a momentum like any Archetype. Sometimes this is masked by the language of ‘innovation’, ‘efficiency’ or ‘care’. Yet, deep down, and long after the introduction of whatever technology, we see more harm (https://one.oecd.org/document/EDU/WKP%282019%293/En/pdf).
Now, don’t get me wrong, I am not a Luddite. I have no quest to destroy technologies but rather wish to draw attention to ideologies that demonise persons for the sake of something else. We see this all the time in the risk and safety industry, which seems to declare that humans are the problem. A recent example is present in the marketing of ‘presilience’ (https://safetyrisk.net/there-is-no-going-beyond-resilience/) where human emotions are demonised as dangerous.
Wherever Technique is in force, human persons come off second best.
Sometimes, the best way to work with humans in tackling risk is inefficient. For example, conversational observations, walkarounds and taking time to get to know people are incredibly inefficient. Yet, these ways of engaging people have excellent outcomes for building relationships, establishing understanding and motivation.
I once broke down in NSW and called the NRMA (which used an overseas call centre), and the person at the other end was trying to help through GPS etc. None of this worked. Instead, I was lucky that a friendly local farmer offered help and pulled me out of the ditch.
In SPoR, we study the nature of Technique and the importance of an ‘Ethic of Personhood’ (https://cllr.com.au/product/an-ethic-of-risk-workshop-unit-17-elearning/). This study creates an awareness of the force of Technique in the risk and safety industry and how this ideology demonises persons. In the risk and safety industry, there is no greater quest for efficiency than the ideology and absolute of zero. In safety it doesn’t matter what you do to humans just as long as the injury rate is zero. Under this ideology the end justifies the means, and so harming humans on the way to zero is somehow justified as ‘good’.
In risk and safety, the beginning of humanising persons in tackling risk is an understanding of ethics, not a fixation on ‘safety innovation’. Indeed, without a study of ethics it is most likely that any innovation will be declared ‘good’ or ‘neutral’, when it is not. Interestingly, the notion of ethics is not mentioned in the latest quest for innovation being conducted by the Human and Organisational Performance (HOP) movement (https://www.safetyinnovation.org/). In HOP the language is about ‘working better’, ‘building capacity’ and ‘improving work’, not about ethical outcomes for human persons. Indeed, a consciousness of ethics is found nowhere in this movement, as if any ‘performance’ is objective or neutral. It is not.
This is how Technique works best: in the discourse of ‘improvement’ without consideration of an ethic of persons in whatever is ‘innovated’. Unless innovation or learning considers ethics, it is most likely just more Technique at work.
SPoR Convention Canberra 13-17 May 2024
(Photo from the previous International SPoR Conference Dinner, held in Vienna in 2023)
We are pleased to announce the SPoR International Conference for 2024. This year we have presenters from the UK, USA, Canada, NZ, Portugal and Australia.
This is a conference that emerges out of the global network of people and organisations practising SPoR in how they tackle risk. It features presenters from many industries and countries implementing the methods of SPoR in workplaces and organisations.
The conference offers increased knowledge in the practicalities of SPoR, its implementation and its challenges. Those who attend will all get an opportunity to present their experiences and practice of SPoR in a series of short 15-minute vignettes.
Come and meet others, all with their own SPoR story. Share challenges and listen to what others are doing with SPoR in their work, life and being.
• Dr Nippin Anand (UK – Can We Learn From Accidents?)
• Dr Pedro Ferreira (Portugal – My Story with SPoR)
• Rosa Carillo (USA – Women and Risk, Voices of Resistance)
• Gabrielle Carlton (Canberra – Everyday Social Resilience)
• Larry Snead (USA – SPoR Methods and Change)
• Frank Garrett (Canada – SPoR as a Disposition Towards Risk)
• Dr Craig Ashhurst (Canberra – Personality and Collective Coherence)
• Dr Robert Long (Canberra – Personhood and Risk)
• Matt Thorne (Adelaide – Convenor)
Two workshops are on offer separated by a one-day semiotic walk that includes a winery tour and dinner.
• Workshop One (13-14 May): Personhood, Personality and Risk
• Workshop Two (16-17 May): Everyday Social Resilience
Each two-day workshop fee is $1250 (AUD). The total for the week (both workshops) is $2500 (plus GST). With the early bird discount of 20%, the total is $2000 (AUD) (plus GST). Email Dr Long for your early bird registration: rob@spor.com.au
Workshop meals catered.
Location: Ballroom – Tuggeranong Community Centre (245 Cowlishaw St, Greenway)
Accommodation options: Alpha Hotel Tuggeranong (https://alphahotelcanberra.com.au/) (or ample Airbnb options locally)
We look forward to another successful International Conference, following on from the success of the 2023 conference.
This conference is open to anyone interested in the Social Psychology of Risk.
If you have any questions or queries, email: rob@spor.com.au
SPoR Workshop in London 20-22 March 2024
For those interested in learning about SPoR in the Northern Hemisphere, you can register for the next workshop in London, delivered by Dr Nippin Anand and Dr Pedro Ferreira from 20-22 March 2024.
We are pleased to announce that Matt Thorne will be conducting a series of workshops on SPoR in the USA from late May to June. This follows on from Matt’s successful work in Europe last year and his recent work in India. (Pic is of Matt at work in the Vienna workshop.)
Matt also recently completed three free global workshops on an Introduction to SPoR Methods and received excellent feedback, including bookings for workshops in the USA.
Keep your eye out for his series of podcasts with Gerd Gigerenzer, soon to be released.
If you are interested in Matt’s workshops in the USA, or in registering for his next free workshops on SPoR, please contact him at: matthew@riskdiversity.com.au
AI, Faith, Religion and Ethics
One of the most significant issues associated with technology, AI and so-called ‘machine learning’ is the ability to sift through all the hype and marketing to discover substance and meaning. So much of what is marketed as ‘learning’ and ‘intelligence’ is just computer algorithms for the collection and retrieval of data.
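To make this concrete, here is a minimal, hypothetical sketch (in Python, with invented numbers) of what is actually happening when a machine is said to ‘learn’: an algorithm nudging numbers to reduce error against stored data, nothing more.

```python
# A minimal sketch of what is marketed as machine 'learning':
# an algorithm adjusting two numbers to fit stored data.
# There is no understanding here, only arithmetic repeated in a loop.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, output) pairs

slope, intercept = 0.0, 0.0  # the 'model' is just two numbers
learning_rate = 0.01

for step in range(1000):
    for x, y in data:
        prediction = slope * x + intercept
        error = prediction - y
        # 'Learning' is nothing more than nudging numbers to reduce error.
        slope -= learning_rate * error * x
        intercept -= learning_rate * error

print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
# The machine has not 'understood' anything; it has shifted data.
```

Everything the program ‘knows’ was put there by the designer: the data, the form of the model and the rule for adjusting it.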
In all of this it is critical to understand that data, design and technology are neither neutral nor objective. All AI and computing is designed and carries the bias of the designer, including the affordances of the design. It is also unfortunate that many in the field of AI have very little education in Ethics. It is as if any development in technology is deemed good.
Hasselbalch’s Data Ethics of Power (2021) really puts the issue of Ethics and technology in perspective. I find it quite amusing in the risk and safety world that moving a paper-based checklist to a device-based checklist is deemed somehow ‘good’. The success of safetyculture.com and iAuditor is evidence of this. No wonder this company (which doesn’t tackle the nature of culture or safety) is worth billions.
A checklist, whether on paper or a device, is a checklist, designed by someone and bearing their bias in its design. Most often these apps are designed by engineers or behaviourists who mistake checklists for tackling risk and safety. Most often the production of a checklist is confused with an outcome and product of safety. Greg Smith demonstrates in his book Papersafe (https://www.youtube.com/watch?v=n-qhOotn00g) that the completion of a checklist is NOT a demonstration of safety!
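As a hypothetical illustration (the items below are invented for this point, not drawn from any real app), a digitised checklist is structurally identical to its paper counterpart, and the designer’s bias lives in which items were chosen and which were left out:

```python
# A hypothetical digital checklist, structurally no different from paper.
# The designer's bias is baked into which items appear and which do not.

checklist = {
    "hard hat worn": False,
    "harness inspected": False,
    "permit signed": False,
    # Absent by design: fatigue, workload, trust, communication ...
}

def complete(items: dict) -> bool:
    """Ticking every box 'completes' the checklist; it demonstrates
    paperwork, not safety."""
    return all(items.values())

for item in checklist:
    checklist[item] = True  # tick everything

print("checklist complete:", complete(checklist))  # True, yet proves nothing about safety
```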
In all this hype about AI we also need to be critical of the language used in its promotion, language that reveals an underlying belief system. For example, consider this piece recently published in Neuroscience News: https://neurosciencenews.com/ai-neurotheology-religion-25577/.
Just look at the language of this discourse: apparently AI is ‘revolutionary’. This apparent ‘revolution’ is simply about access to religious data. Accessing data is not a revolution, and neither is data about faith or religion. Faith is a very human quality; indeed, AI can never know what faith is. AI cannot ‘feel’ or connect with all of the e-motions generated by embodied knowing. Without a human body, computers can never know the nature of ritual, ceremony, mythology or embodied meaning. Indeed, embodied knowing is unique to fallible humans, and neither can computers understand or know fallibility.
Look further at the hype: there can never be such a thing as an ‘AI-Guru’. This is all just more evidence of the bias of the writer of this article. Indeed, the faith of the writer in AI is quite cultic. The article does this by focusing on data rather than the person/human who interprets data. It even associates access to data with ‘vision’. Vision is very mystical and existential language to apply to a computer that cannot see like a human and can never know what an epiphany is. As for AI assisting in achieving ‘purushartha’, this is pure fantasy. The four Purusharthas are: Dharma (righteousness, moral values), Artha (prosperity, economic values), Kama (pleasure, love, psychological values) and Moksha (liberation, spiritual values, self-actualisation). How can a machine/computer ‘know’ any of these? I wonder how a computer feels pleasure or love? What absolute nonsense.
AI cannot ‘interpret’ data unless it has been programmed with a moral philosophy and ethic. Computers are ‘programmed’ or regenerate algorithms according to the bias in their design. A computer cannot ‘choose’ the hermeneutic on which it operates.
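A hypothetical sketch makes the point (the word lists below are invented): whatever ‘interpretation’ a program produces is dictated entirely by the rule its designer wrote into it; the machine applies the designer’s hermeneutic and cannot choose another.

```python
# A hypothetical sketch: any 'interpretation' a program offers is just
# the rule its designer chose. The program cannot choose a different hermeneutic.

POSITIVE = {"safe", "good", "efficient"}   # the designer's chosen lexicon,
NEGATIVE = {"harm", "risk", "unsafe"}      # i.e. the designer's bias

def interpret(text: str) -> str:
    """Classify text by counting designer-chosen words. The 'hermeneutic'
    is fixed at design time; the machine only applies it."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(interpret("Zero harm sounds good but causes harm"))  # 'negative', by design
```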
The article suggests that AI will one day become ‘a co-pilot for all human endeavours including decision-making’. Again, this is just a statement and wish from an AI cultist. I certainly won’t be turning to a computer to co-pilot my understanding of faith. Indeed, as Kierkegaard rightly states, faith is absurd. How can a computer understand and ‘believe’ what is absurd?
Then we get this: ‘For maintaining righteousness or dharma in the world, the algorithm and AI language models must be trained on data that is truthful and morally right’.
What absolute nonsense. And who will program the AI in what is true and morally right? Whose morality? What ethic? Whose truth? How can a computer understand the personhood of being human?
So, when it comes to critical thinking, just deconstruct the many faith statements in this article and you will see that the author is conflicted between his faith in AI and faith in faith.
Sex Robots, AI and Ethics
One area that really draws out a need for a well-developed ethic and moral philosophy concerning personhood is the problem of AI and sex robotics.
The following provide some background and overview for your consideration:
All of this raises enormous issues regarding an understanding of personhood. And there is no simple response to those who argue that it doesn’t matter what one does to an object; that is, regardless of what is simulated, one is not harming a human and so this is OK. Indeed, some argue this development is beneficial to humans.
Some books that discuss the issue are here:
• Abel, T. (2020). Artificial Intelligence Ethics and Debates. Focus Readers, Mankato.
• Liao, M. (ed.) (2020). Ethics of Artificial Intelligence. Oxford, London.
• Vieweg, S. (ed.) (2021). AI for the Good: Artificial Intelligence and Ethics. Springer, London.
• Jansen, S. (2022). What Was Artificial Intelligence? Mediastudies Press, Bethlehem, PA.
• Boddington, P. (2023). AI Ethics: A Textbook. Springer, London.
• Floridi, L. (2023). The Ethics of Artificial Intelligence: Principles, Challenges and Opportunities. Oxford, London.
• Hampton, A., and DeFalco, J. (2022). The Frontlines of Artificial Intelligence Ethics: Human-Centric Perspectives on Technology’s Advance. Routledge, New York.
Why is this relevant to the world of risk and safety?
One of the big problems with the risk and safety industry is that it has no well-articulated ethic. This enables extensive unethical decision-making justified by ideologies such as Zero or ‘safety is a choice you make’.
What this debate about AI and sex-dolls brings to the surface are the many issues associated with computing, data and AI. These technologies are NOT objective or neutral.
For those in the risk and safety world, one always needs to consider the by-products and trade-offs in the quest for Technique.
Associated issues include:
• The problem of consciousness
• The body-mind problem
• Decision making
• Mutuality
• Embodied e-motion
• The nature of human will
• The nature of ‘agency’
• The absence of consent without will and moral agency
• Moral philosophy
• Ethics
• Paedophilia
The current AIHS BoK Chapter on Ethics is simply a justification of duty to safety and says nothing about morality, personhood, care, consciousness, decision-making, human being, humanising risk, power or a host of critical ethical issues.
This free module is open to anyone interested in learning about SPoR and risk. The study of signs and symbol systems in relation to decision making ought to be foundational for anyone wondering why people do what they do.
This concern is all founded on the mythology of AI as ‘intelligent’ and the myth of brain-centrism. Such mythology and fear market the idea that AI is NOT subject to the control, design or input of humans, as if AI can self-generate and regenerate. Mostly these myths are science fiction, based on the miniseries Humans or the movie Robot.
Do you have any thoughts? Please share them below.