SPoR Convention Announced Canberra September 2025
Lock in the dates: the SPoR Convention for 2025 in Canberra is set for 15-19 September. The key facilitators are Dr Craig Ashhurst and Dr Robert Long. Other presenters will be announced as time progresses; we already have participants from New Zealand, the USA and Canada committed to be there. We’re hoping soon to announce some special presenters who have left the world of safety and are practicing SPoR in other professions. You can download the flyer and register here: https://spor.com.au/spor-convention-2025/

This year the focus is on Critical Thinking, with the middle day being a Semiotic Walk Day followed by a dinner. The program (four teaching days plus the Walk Day) will focus on the following:

Day 1. What is critical thinking? Skills in Critical Thinking: Critical Discourse Analysis (CDA), linguistic and semiotic omissions, detecting propaganda.

Day 2. What is beyond critical thinking? Poetics, coherence and the non-measurable. Detecting Technique. What is discernment? Sifting and analysing Discourse/discourse. What ethic governs analysis? Interrogating by moral philosophy? By what ethic shall we live? Is our ontology coherent?

Day 3. Semiotic Walk Day.

Day 4. The positives of deconstruction: the process and purpose of reconstruction. Cognitive dissonance, disturbance, disruption and comfort. What is the constant? Where do we find security in faith?

Day 5. E-motion, movement and learning in what’s critical. What is learning? How do people learn? Who is the educated person? What is conversion? Tools to take away, skills to practice. Using SPoR to reframe tackling risk, understand learning and help others envision what’s critical and what is not.

Anyone is welcome who wants to learn about SPoR or who wants to meet people practicing SPoR in the workplace. You don’t need to have studied previously to join!

Catering: all meals are catered except for the dinner on the Wednesday.

Materials: all study materials, methods, tools and the Education kit are included.

Location/Venue: Communities at Work Tuggeranong, in the Ballroom, 245 Cowlishaw St, Greenway ACT 2900.

Planning: so, plenty of time to plan and register.

Fees: the cost for the week, including all meals, materials, venue etc., is $1600 ($400 per day across the four teaching days; the Semiotic Walk Day is free). If you want to register for a single day or two days, please contact: admin@spor.com.au

You can register and download the flyer here: https://spor.com.au/spor-convention-2025/

Presentation Styles and Learning: all those who attend will be allotted time (voluntarily) to present on what they are doing in their work world to tackle risk.
Theme: Is AI Safe? Part 1.
Introduction

In this newsletter the theme is delivered in several ‘parts’, interspersed with other news. The propaganda machine for AI is accelerating at full speed, with huge implications for WHS and AI safety. The latest claim is that AI can identify crime a week before it happens (https://biologicalsciences.uchicago.edu/news/algorithm-predicts-crime-police-bias). Most of the reports on this are lazy ‘cut and paste’ non-journalism, and it’s this kind of stuff that never raises the challenges or ethics of moral philosophy. Most of what is reported in this story has no relevance to the problems of crime or criminal behaviour, and most of this kind of reporting says much more about faith-in-AI than about any useful or real applicability. Of course, such research into AI and crime detection doesn’t mention the long-standing biases of AI in crime prevention, including biases against women, the poor, the disadvantaged and Indigenous groups, or its inbuilt racism (Vallor, S., 2024, The AI Mirror, Oxford University Press). Sometimes the hyper-propaganda machine, when in full fantasy, suggests that AI is sentient. Fortunately, this delusion is quickly blown out of the water by experts in the field.

Purpose

The purpose of this paper is directed to the risk and safety industry and urges caution in the adoption of AI into the sector. The more I research and read about AI, the more I realise that belief in AI is a faith-system that says more about the presenter than about the realities of AI. Indeed, the many myths that have been created and popularised about AI have simply endorsed the myth of utopian hope for miracles. There is simply no evidence for much of the fear-mongering that suggests AI will take over and destroy humans; this shares as much reality as War of the Worlds, Westworld or The Matrix.

All data is interpreted; no data is neutral or objective. AI systems are data machines. AI cannot ‘think’ or ‘understand’ what it does. It cannot, and never can, ‘know’ the moral or embodied being of living as a fallible person in the world. AI is not human and never can be. AI simply mirrors and replicates the algorithms programmed into it. AI doesn’t ‘learn’ in the real meaning of the word, even though the language of ‘machine learning’ is applied to it. Machines are trained to replicate algorithms but can never know anything about lived experience.

The idea that humans are predictable, as this non-news story suggests, is based on faith in big data as fantasy, all wrapped up in a marketing strategy. The only way to have faith in prediction is to deny the fallibility of human persons, the world and mortality. We see the faux promises of predictive analytics (https://safetyiq.com/insight/how-to-use-ai-to-improve-workplace-safety-a-predictive-analytics-model/) and promises of revolutionising safety, but this just demonstrates more faith in data. Most of this stuff should be in the fiction section of the library.

The data being fed into AI about crime in this story comes from 5 cities in the USA where historical data demonstrates extraordinary bias against minority groups. What is more, algorithms are never going to produce arrest warrants for crimes not yet committed. All this research will do is amplify the bias of the data fed into it and increase the surveillance, profiling and general targeting of minority groups compared to their white counterparts, making everything worse and less safe.
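To see why this kind of ‘prediction’ can only replay the past, here is a minimal sketch. It is not any real predictive-policing system, and the district names and arrest figures are invented for illustration; a model scored only on historical arrest counts simply hands yesterday’s policing pattern back as tomorrow’s ‘risk’:

```python
# Minimal sketch (hypothetical data): a "predictor" built from historical
# arrest counts, which were shaped by where patrols were sent in the past,
# not by where crime actually occurred.
historical_arrests = {
    "district_a": 950,  # heavily patrolled in the past
    "district_b": 120,  # lightly patrolled in the past
    "district_c": 130,
}

def predicted_risk(district: str) -> float:
    """'Predict' future crime as the district's share of past arrests."""
    total = sum(historical_arrests.values())
    return historical_arrests[district] / total

for district in historical_arrests:
    print(district, round(predicted_risk(district), 2))

# If patrols now follow these scores, district_a is watched even more closely,
# generating more recorded arrests there and pushing its score higher next
# time: the feedback loop that amplifies the original bias.
```

Real systems are far more elaborate, but the dependence on past, biased records is the same, which is Vallor’s point about why ‘de-biasing’ the inputs rarely rescues the outputs.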
Faith in AI

Faith-in-AI gives little thought to bias, ethics, moral meaning or the abuse of power, and rarely considers by-products. As Kailas states (https://www.dqindia.com/interview/can-ai-truly-predict-crime-insights-from-george-kailas-7655793): ‘Law enforcement’s actions based on this technology could result in deeper mistrust of the law within these communities’. And it doesn’t help even if one tries to take the bias out of the data fed into AI: Vallor (2024, p. 55ff) demonstrates that tinkering with the bias of the data, or taking words or language out of the data, makes little difference and most often invalidates both the input and the outcome. Faith in AI is founded on a number of myths fostered by scientism and engineering.
All of these myths are fed by propaganda from vested interests.

The Failures of AI

So, let’s look at some reality. AI failures are extensive and harmful, and AI doesn’t get any more ethical or moral no matter what is fed into it. Do some research. Here are just a few examples:

1. https://www.evidentlyai.com/blog/ai-failures-examples
2. https://tech.co/news/list-ai-failures-mistakes-errors
3. https://hai.stanford.edu/news/when-ai-systems-systemically-fail
4. https://www.pmi.org/blog/why-most-ai-projects-fail
5. https://www.boredpanda.com/ai-fails/
6. What Happens When AI Fails? Who Is Responsible?: https://www.youtube.com/watch?v=zWbvnbmudrE
7. https://www.oreilly.com/radar/what-to-do-when-ai-fails/
8. https://hdsr.mitpress.mit.edu/pub/1yo82mqa/release/2
9. https://medium.com/educreation/whats-interesting-is-how-ai-fails-88186aabb30

Here is further research into specific fields.
SPoR Chat Room, Research and Connections
For those who want to keep in touch with everything SPoR, there are several places where you can chat and connect. One place is here: https://safetyrisk.net/ The other is a new site where you can meet, learn and share with people practicing SPoR from all over the globe. SPoR has several servers on Discord with meeting/chat rooms where people share their journey, research and practice. Joining is free and the platform is easy to use. If you connect on either riskex or on Discord, you won’t miss any announcements or updates. SPoR no longer has any presence on Facebook, WhatsApp or any other social media platform. You can join by asking for an invite here: admin@spor.com.au
Book Launch – Real Meeting
We are proud to help launch a new book released by Aneta and Brian Darlington: Real Meeting, Leadership is Time and a Simple Cup of Coffee. The book is on sale here: https://www.humandymensions.com/product/real-meeting-a-book-on-being-in-leadership/ You can also write to Aneta (aneta.darlington@icloud.com), visit their website Embodied Leadership (https://www.embodied-leadership.eu/) and purchase the book directly.

This is the second book published by the Darlingtons. Brian previously wrote a book with Dr Long: It Works, A New Approach to Risk and Safety (a case study of implementing SPoR in a global organisation). You can download this book for free here: https://www.humandymensions.com/product/it-works-a-new-approach-to-risk-and-safety-book-for-free-download/

About the book Real Meeting

So often, the literature on Leadership is flooded with famous people, generals, athletes and ex-CEOs. Most of what circulates in this space is not about Leadership but about ‘heroship’. Real leadership is only as good as its followership (https://www.humandymensions.com/product/following-leading-risk/). A classic in the heroship space is the tale told by Simon Sinek about how he justified pushing in line to get a free bagel (https://www.youtube.com/watch?v=tif2D-rQ0fM). That’s right: so many are conned by Sinek’s rhetoric, but his ethic is appalling. Without a real ethic of service-humility there is no Leadership, and Sinek is one of the worst in the heroship genre of non-leadership presentations. This story demonstrates that Sinek can’t see people; he only sees objects in the way of achieving his selfishness (a vice, not a virtue). And he states it as such! This is i-it, NOT i-thou.

The book Real Meeting is anchored to the work of Buber (i-thou), who stated that all life is meeting. Without an understanding of the dynamic of i-thou, you will never understand either real Leadership or Real Meeting. You can download Buber for free here: https://www.maximusveritas.com/wp-content/uploads/2016/04/iandthou.pdf

When you think you can push in at the front of a line of 200 people who have been waiting for a bagel, your ethic is driven by self, NOT by Meeting the other. When you think risk and safety is all about telling, NOT listening, your ethic is driven by self, not by Meeting the other. The opposite of Leadership is living in the i-it like Sinek. The key to Leadership is living in the Meeting of i-thou, so that the Leader-Follower dynamic is the foundation for leading.

In Aneta and Brian’s book they go into extensive discussion about what happens when we hold meetings but never ‘meet’ anyone. These are meetings of distrust, formality, process, performance, ego, arrogance and minutes-of-meeting, BUT no ‘Meeting’ has occurred. There has been no connection!

There is no book on the market in Leadership like this. The way Aneta and Brian explain Buber’s i-thou is accessible and builds on the concept. They go into great semiotic detail about what the hyphen means in i-thou (Chapter 3). This discussion makes i-thou come alive and extracts what i-thou means in practice. This is how SPoR is practiced in Mondi Group. There is no other book on the market that explains i-thou semiotically. When we can visualise the hyphen semiotically as an extension of time, we can ‘see’ just how much we move towards the other to connect and ‘meet’. Hence the subtitle of the book: ‘Leadership is Time and a Simple Cup of Coffee’.
In Sinek’s heroship, the reason one pushes in at the front of the line is to save time, because ‘heroes don’t wait’; all they see is obstacles to their ego and self (i-it). Yet people pay absurd amounts of money to see a conman spruik heroship. This is the delusion of performance; this is what one gets from any focus on, or primacy of, ‘performance’. In Real Meeting we leave performance, and whatever one measures as performance, behind. If your focus is performance, you won’t ‘Meet’ anyone. Performance is one of the greatest inhibitors of Meeting. Performance is one of the greatest inhibitors of Learning (https://safetyrisk.net/understanding-the-nature-of-performance-and-hop/).

If you are interested in understanding Buber and i-thou further, we have a 5-session module on offer: https://cllr.com.au/product/buber-i-thou-and-risk-module-27/

BTW, performance makes time a measure of outcome. When we experience Real Meeting, time becomes irrelevant. This is why in SPoR we have no interest in the delusions of HOP. When your interest is in how people and organisations perform, you are already trapped in Technique (Ellul). In SPoR we are not interested in how people or organisations ‘perform’ in safety. What we are interested in, in SPoR, is Meeting you in understanding how you tackle risk.

If you want to learn about SPoR and how it shifts away from performance to improve safety, so that what you do is practical, positive and enriching, you can write here: admin@spor.com.au and we will let you know when a course is on near you. Or, if you want, you can access the free online Introduction to SPoR Module.
Is AI Safe? Part 2.
Medical Care

AI is being rapidly adopted in health care, but is it good or safe? Let’s see:
Automated Vehicles

Since 2013 we have been fed propaganda that automated cars would be flooding our roads within ten years. Not so; this is just more fantasy. Despite all the projections of faith in AI, it simply cannot adapt and ‘think’ as humans do. The only automated vehicles in operation run in controlled environments or on rails.
AI Dumbs Us Down

One of the main by-products of faith in AI is that it dumbs us down. The more we rely on being spoon-fed data by AI, the less critical we become, particularly in our consideration of by-products, imagination and ethical concerns. The following is an example.
Fake Cases

Of course, the errors and limitations of AI are evident in all fields such as Law, Education, Medicine and multi-media. Here are just a few examples:
The Dreamers and Faith-Healers

As in all cults and faith systems, there are those who maintain the faith against all the evidence. The belief is that this is all just about ‘hiccups’ and ‘teething problems’ and that things will get better and better with AI over time. Vallor calls this kind of faith group the ‘longtermists’. This faith is founded on the original mythology and attributions of ‘bad faith’. Such faith in myths is not real, and neither can AI faith ever be real, as long as the brain-as-computer metaphor continues in popular AI mythology. All of this, including the transhumanism cult (https://www.forbes.com/sites/julianvigo/2018/09/24/the-ethics-of-transhumanism-and-the-cult-of-futurist-biotech/), is a religious faith system. We see this faith system most prominently in the delusions of Elon Musk. Musk desires the realisation of Nietzsche’s ‘Übermensch’ (superman), and with it all the vices of a megalomaniac. The myths of transhumanism are maintained by extensive semiotics based on the myth of brain-as-computer. None of this is true. We even have absurd ideas that AI can somehow have some understanding of the unconscious and dreaming:
With all this in mind, let’s now turn our attention to the risk and safety industry.

Metaphors, Meaning and Language

Whilst people get excited about Large Language Models, there can never be an AI that ‘understands’ what it generates. AI has no lived experience or embodied being with which to understand the meaning of language, its anchored gestures or the contradictions of metaphorical language. Metaphors are learned through gestural, embodied experience, and we often understand them by grasping what something is not. Poetic language cannot be ‘understood’ by AI. Life is not lived in a dictionary understanding of text. As humans, we know a great deal by what is NOT said and by interpreting paralinguistic cues.

Detecting if Someone uses ChatGPT:

Push-Back

Not all of the spin and propaganda about AI hits the spot. Remember Google Glass? Remember the billions spent on the prediction that we would all be wearing glasses that could read the environment and others? It is the tale of yet another spectacular failure (https://www.investopedia.com/articles/investing/052115/how-why-google-glass-failed.asp). It turns out that humans don’t want to be scanned and probed for data. This is just one example of push-back against the delusions that AI will ‘take over’. As much as some geeks would love face recognition everywhere, many push back because they understand the ethical need for privacy.

Academia

We now know that as AI is brought into more sectors, it also brings with it unseen trade-offs and by-products that work against the very foundations of the professions that adopt it.
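To make the point about Large Language Models concrete, here is a minimal sketch: a toy bigram generator using only the Python standard library, with an invented training sentence. Real LLMs are vastly larger, but the underlying move is the same, choosing the next word from statistical patterns, with no meaning, gesture or lived experience anywhere in the process.

```python
# Minimal sketch (toy example): text generation as pure pattern-matching.
import random
from collections import defaultdict

training_text = (
    "safety is about people and people are fallible "
    "people learn by meeting other people in conversation"
)

# Record which word has been observed to follow which.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Chain words together using only observed co-occurrence, nothing more."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("people"))  # fluent-looking output, zero comprehension
```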
Is AI Safe? – Two Videos
Recently, Dr Craig Ashhurst, Dr Nippin Anand, Greg Smith and Dr Rob Long discussed the adoption of AI into the world of risk and safety. There are two videos: Video One (https://vimeo.com/1069071797) and Video Two (https://vimeo.com/1069073815). Each of these scholars considers some of the basics of AI in relation to their area of expertise: the Law, Anthropology, Wickedity, Propaganda and Ethics. The idea that AI is somehow neutral and objective is torn to shreds by the discussion, which puts forward some of the profound concerns of this group. None in the group are ‘Luddites’ or anti-technology, but they raise some fascinating by-products and trade-offs to consider amongst all the hype about what AI can do for safety. Indeed, many of the claims made about AI in safety are based on fantasy and wishful thinking, not reality.
Is AI Safe? Part 3 Risk and Safety
This kind of propaganda we see to the right is part of the mythology of AI faith and safety. AI is NOT a game changer in safety. Let’s explore the way in which AI is being marketed in and to the risk and safety sector, by discussing some current AI ideas being marketed to the industry.

An AI Risk Score Card

Firstly, AI cannot ‘sense’ risk, and neither can it ‘understand’ behaviours or ‘think’. Secondly, AI has no ‘sense’ of the emotional pressures in work, nor any comprehension of the hidden and unconscious factors that motivate and facilitate decision making. It cannot determine heuristics or sense psychosocial pressures that cannot be measured or observed (none of the key drivers of risk and safety decision making can be measured). Thirdly, AI has no real-world or lived experience of the workplace. All the data fed into AI about hazards and risks is about past events; in this way, AI is a machine that talks about the past and can have no ‘imagination’ about the future. Avoid this tool at all costs. It is a dangerous, harmful and misleading idea.

Risk Assessment

AI does not ‘revolutionize’ risk assessment; at best it can only compile lists of what is fed into it from the past. AI cannot predict what it doesn’t know and has no ‘imagination’ with which to ‘explore’ risk potential. Just look at the language of this propaganda. The first line is a give-away: ‘The capabilities of AI do not stop to amaze as technology continues to evolve.’ Language such as ‘amaze’ and ‘evolve’ are faith statements based on the attributions of the author. There is no evidence for either claim; AI has no ‘life’ with which to ‘evolve’. Everything in this article is AI-faith loaded with emotive language, promises and direct misinformation about the capabilities of AI. For example, AI’s use in the health sector has been a disaster, and it is not instrumental in that sector. AI cannot predict hazards and it cannot replace inspectors. Unchecked AI has been demonstrated to be dangerous and harmful. This article is not news but propaganda: none of the claims are true, nor are they supported by evidence. It is nothing more than a faith-in-AI ‘puff piece’. Look at language such as ‘AI insight’; this is fairy-tale stuff. Avoid AI-in-safety propaganda like this.

Journalism and AI

One of the by-products of using AI in some professions is already a loss of integrity, a loss of critical thinking and rising laziness:
Movie Myths Generating AI Myths

We know that many confuse the myths of the movies with real life; indeed, many conspiracy theories come from what AI can do on the screen. This raises a great concern about the need for critical thinking.
AI Misinformation and Disinformation

Similar misinformation and disinformation abound, such as:
A sure give-away in all of this propaganda/spin is that there is never any mention of ethics, by-products, the need for caution, problems or limitations. The propaganda of safety-and-AI is populated by absurd claims of benefits without evidence, and little mention of risk and harm to persons. Most of what appears in the pages of risk and safety journals about AI is sheer fantasy, wishing, naïve promises and a demonstrable faith in AI. Most of the material purporting to be about AI in the risk and safety industry is marketing, spin and fiction. The evidence for this is overwhelming: all one has to do is undertake Critical Discourse Analysis (CDA) of any safety text about AI and analyse its metaphors, emotive claims and lack of evidence.

Language and AI, Metaphors and Myths

The best way to pull apart the myth and faith statements made about AI is to use critical thinking, Critical Discourse Analysis and skills in linguistics to deconstruct and challenge truth claims. Just look at the metaphors used in a text. Look at the truth claims and deconstruct the language in the text.
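As a rough aid to this kind of reading, here is a minimal sketch. It is emphatically not CDA, which is a human, interpretive skill; at most a script like this can flag surface markers of hype as prompts for the questions that matter: what is claimed, what evidence is offered, what is omitted? The word list and sample sentence are invented for illustration.

```python
# Minimal sketch (hypothetical word list): flag hype-laden language as a
# prompt for closer, human reading -- not a substitute for it.
import re

HYPE_MARKERS = {
    "revolutionize", "revolutionise", "game changer", "game-changer",
    "amaze", "amazing", "evolve", "insight", "transform", "predict",
    "breakthrough", "unprecedented", "seamless",
}

def flag_hype(text: str) -> list[str]:
    """Return any hype markers found in the text."""
    lowered = text.lower()
    return sorted(
        marker for marker in HYPE_MARKERS
        if re.search(r"\b" + re.escape(marker) + r"\b", lowered)
    )

sample = ("AI insight will revolutionize safety and amaze you as the "
          "technology continues to evolve.")
print(flag_hype(sample))  # ['amaze', 'evolve', 'insight', 'revolutionize']
```

The flags are only a starting point; the real analysis asks what evidence, ethics and by-products the text never mentions.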
AI Can’t ‘Learn’

The language of ‘learning’ and ‘intelligence’ for Artificial Intelligence is, of course, a misnomer. Some of the best material discrediting such belief is by Roger Penrose (English mathematician, mathematical physicist, philosopher of science and Nobel Laureate in Physics):
Mimetics is not How Humans Acquire Language

The assumption built into Large Language Models about how humans acquire language is not supported by any evidence about how humans actually acquire language. The behaviourist idea that humans learn language through mimetics was torn to shreds by Chomsky 50 years ago. Some better research is by Fuchs:
The Challenges of AI for the Risk and Safety Industry

The following are major challenges faced by the risk and safety industry concerning AI.

1. Unfortunately, the risk and safety industry has no well-developed, mature or intelligent Ethic of Risk. For example, the AIHS BoK Chapter on Ethics is one of the most unprofessional and amateurish pieces ever published on ethics. The chapter makes no mention of the nature of power, confuses morality and ethics, makes no mention of care or relational ethics, proposes naïve approaches to ethics like ‘check your gut’, and aligns ethics to duty without any reference to its deontological bias. Unless the industry develops a mature and balanced approach to ethics, it will never be professional.
8. When it wants to know anything about AI, culture, linguistics, mental health or ethics, Safety seems to think the best people to consult are those with no expertise in the matter. For example:
9. Unfortunately, without Critical Discourse Analysis it is relatively easy to sell silver bullets and mythical ideology to an industry desperate to achieve zero. This fosters a naïve belief and faith in AI, as if AI can do human work. If risk and safety were only about crunching data, then AI would be useful (a toy sketch of this point follows this list). But all that is human, fallible, mortal and subjective about lived experience is beyond AI. Most importantly, AI has no perception with which to interpret data in a moral and ethical way for the good of persons.
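As flagged in point 9, here is a toy sketch of what an automated ‘risk score’ amounts to when it is built only from past incident records: a frequency count dressed up as foresight. The task names and counts are invented for illustration, and this is not any vendor’s actual method.

```python
# Minimal sketch (hypothetical data): a "risk score" that is nothing more
# than each task's share of previously recorded incidents.
from collections import Counter

past_incidents = [
    "manual handling", "manual handling", "working at heights",
    "manual handling", "vehicle movement", "working at heights",
]

def risk_scores(incidents: list[str]) -> dict[str, float]:
    """Score each task by its share of recorded past incidents."""
    counts = Counter(incidents)
    total = sum(counts.values())
    return {task: round(n / total, 2) for task, n in counts.items()}

print(risk_scores(past_incidents))
# {'manual handling': 0.5, 'working at heights': 0.33, 'vehicle movement': 0.17}
# Anything never recorded -- the novel hazard, the psychosocial pressure, the
# unmeasurable social dynamic -- scores zero, which is exactly the
# 'imagination' about the future such a system cannot have.
```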
Free iCue Engagement Workshops
Dr Long is offering free workshops in the SPoR iCue Method for those who want to learn. You can apply simply by writing to admin@spor.com.au Places are limited. The iCue Method is one of the founding methods of SPoR and has a focus on developing skilled observations and conversations on site.
SPoR Workshop London SEEK Investigations 14-16 May
Dr Nippin Anand is offering the SEEK Investigations Workshop in London, 14-16 May. What a wonderful opportunity for those in Europe or North America to access this program. You can see an outline of the program and register here: https://novellus.solutions/mec-events/032024-517/ If you don’t know about Nippin’s work, you can purchase his excellent book Are We Learning from Accidents? here: https://novellus.solutions/shop/
Workshop Aberdeen Semiotics and Culture 11-13 June

This is a remarkable experiential learning workshop for those who want to learn, and feel, in learning about semiotics. The program can be viewed here: https://novellus.solutions/mec-events/social-psychology-of-risk-conference-europe-422/ Learning semiotics takes you to a whole new level in understanding the way we message to the human unconscious. This learning takes one out of the idea that culture can be known propositionally. Yet when we are immersed in the Semiosphere (Lotman), we realise that all our decision making is semiotic.
Book Competition – Perception
[Competition photo: find the squirrel hidden in the image]
Here is your chance to win a copy of either Nippin’s book, Are We Learning from Accidents?, or a copy of Brian and Aneta’s new book, Real Meeting. When you submit your entry, please nominate which book you would like and, of course, include your snail-mail address for postage. Copies will go to the first 4 successful entries. Please note: most entries are received in the first hour after the release of this newsletter. Send entries to: admin@spor.com.au

The competition: the photo you see above has a squirrel in it. All you need to do is find it and send your entry to admin@spor.com.au So many people are surprised when others don’t see things, and most often they think perception is about common sense. There is no such thing as common sense; rather, all perception is culturally conditioned.
Is AI Safe? Part 4
What to Do
References

The following texts were used in the production of the extended research in this newsletter:

Atkinson, R., and Moschella, D. (2024). Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy. Palgrave Macmillan, Cham, Switzerland.
SPoR Coaching
Many people approach us about the SPoR curriculum: https://cllr.com.au/elearning/ As you will see, SPoR offers 31 modules of learning. The most common ways people study these modules are: formal workshops; free modules offered from time to time (such as the one currently being run on Ethics); modules such as those that will be conducted at the Convention in September (https://safetyrisk.net/spor-convention-2025-canberra-15-to19-september/); in-house workshops, where organisations engage experts from CLLR to conduct studies; or one-to-one coaching. Examples include Dr Nippin Anand conducting the SEEK module throughout Europe, and Mondi Group training every Safety Manager in the organisation in 12 SPoR modules. All studies are supported by an extensive suite of videos (over 300) and many free books.

Coaching in SPoR involves completion of the first four foundational modules (1-4); after that, studies proceed on an elective basis. This can all be completed online. Studies in SPoR are self-certified by the Centre for Leadership and Learning in Risk (CLLR). Those who complete 8 or 12 modules receive formal certification of their studies, including a letter (from Dr Long and Dr Ashhurst) certifying the knowledge and skills learned.

If coaching is an option you would like to consider, a module of interest is chosen and a coach is assigned. The participant is then sent a range of expectations, such as: watch a series of videos (usually of Dr Long and Dr Ashhurst training in situ), keep a journal, do some readings, exchange emails, complete other writing tasks and engage in 4 Zoom sessions with the coach.

Our current designated coach is Matt Thorne, who is working with a group of individuals from all over the world who want to learn more about SPoR. You can read about Matt here: https://www.humandymensions.com/our-people/matt-thorne/ Matt has completed all 31 modules in SPoR and has been teaching with Dr Long in Europe and India over the past few years. If you would like to start a journey of study in SPoR, contact Matt directly at: Matt@riskdiversity.com.au
Links and Freebies

We recently celebrated 1,000,000 downloads in SPoR (https://safetyrisk.net/a-million-downloads/). We also tipped over 2000 blogs. No one in the risk and safety industry offers quality material for free as we do. What this demonstrates is that many in the risk and safety industry are eager to read and watch the SPoR perspective on risk, culture and learning. Thanks for your support. We don’t wish any impediment on those wanting to learn SPoR. We also have free courses, and these are offered regularly. Just subscribe to https://safetyrisk.net/ and you won’t miss any announcements. You can find your freebies and downloads below.

Books Videos The Law and Due Diligence Videos against Zero Ideology Semiotics Videos Papers Newsletter Archive Podcasts Blogs Contacts and Websites