
Risk Homeostasis Theory–Why Safety Initiatives Go Wrong

July 28, 2022 by Admin

My favourite quote relates to racing drivers on normal roads and why they have more crashes and get more fines: “At their level of skill, driving like an average driver may be intolerably boring. Imagine being a master of Beethoven and all you are allowed to play is ‘Twinkle, twinkle, little star’!” (Wilde, 2014, p. 79)

Anyone interested in learning more about RHT can download a free copy of Wilde’s book “Target Risk 3” here: Target Risk 3 Free Download

Q: “How does the theory of risk homeostasis explain why some safety and risk initiatives go wrong?”

In the risk and safety industry, it is common to try to isolate, focus on and solve a particular problem or risk with a specific program or initiative.[1] [2] When Risk Homeostasis Theory is not considered in the development and implementation of such initiatives, they may not work out as planned or expected, due to the subjective risk perceptions, unconscious decisions, biases and by-products the theory describes (Wilde, 1982).

Risk Homeostasis Theory (RHT) was initially proposed by Wilde in 1982[3]. It proposes that, for any activity, people accept a particular level of subjectively evaluated risk to their health and safety in order to gain a range of benefits associated with that activity (Wilde, 2014, p. 11). Wilde refers to this accepted level as the “target level of risk” (Wilde, 2014, p. 31). If people subjectively perceive the level of risk to be lower than acceptable, they modify their behaviour to increase their exposure to risk; conversely, if they perceive a higher than acceptable risk, they compensate by exercising greater caution. People therefore don’t always respond as expected to traditional safety initiatives, but adjust their response to more rules, administrative controls, new procedures and engineering technologies according to their own target level of risk.
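Wilde’s idea can be pictured as a feedback loop: perceived risk is compared against target risk, and behaviour adjusts until the gap closes. The sketch below is a hypothetical toy model of that loop (my own construction, not a formulation from Target Risk 3), showing why an engineering control that halves the underlying hazard need not halve the settled level of risk exposure.

```python
# Illustrative toy model of Wilde's homeostatic loop (hypothetical,
# not taken from Target Risk 3): behaviour adjusts until perceived
# risk converges on the individual's target level of risk.

def adjust_caution(perceived_risk, target_risk, caution, rate=0.5):
    """More caution when perceived risk exceeds target; less caution
    (more risk taking) when perceived risk falls below target."""
    gap = perceived_risk - target_risk
    return max(0.0, caution + rate * gap)

def settled_exposure(target_risk, base_hazard, steps=50):
    """Iterate the loop until behaviour settles; return risk exposure."""
    caution = 0.5
    for _ in range(steps):
        perceived_risk = base_hazard - caution  # caution lowers exposure
        caution = adjust_caution(perceived_risk, target_risk, caution)
    return base_hazard - caution

# A control that halves the base hazard does not halve settled risk:
# the driver sheds caution until exposure returns to the target level.
print(round(settled_exposure(target_risk=0.3, base_hazard=1.0), 3))  # 0.3
print(round(settled_exposure(target_risk=0.3, base_hazard=0.5), 3))  # 0.3
```

In both runs the exposure settles at the target (0.3), regardless of the base hazard: in this toy model, only changing the target itself changes the outcome, which is the core claim of RHT.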

The Oxford Dictionary defines “initiative” as: “An act or strategy intended to resolve a difficulty or improve a situation; a fresh approach to something”[4]. This definition may, at first, seem to fit well in the context of orthodox safety and risk: problems are identified through a risk assessment or root cause analysis, and initiatives are then implemented to address the specific issue. What is not always appreciated is that, because various systems and their components are interdependent, an initiative may correct one problem yet displace it elsewhere in an unanticipated form (Hancock and Holt, 2003). “These problems of interdependency can be referred to as messes” (Hancock and Holt, 2003). Risk Homeostasis Theory recognises the by-products and trade-offs of implementing safety and risk initiatives in this messy environment:

Reduction in one particular immediate accident cause may make room for another to become more prominent! (Wilde, 2014, p. 93)

The same concept is recognised by Long:

It’s difficult for some to conceive that a desired procedure, policy or behaviour can actually generate the very opposite of what is desired. (Long and Long, 2012, p. 33)

In defining “go wrong” the Oxford Dictionary is less concise; its definition includes synonyms such as “mistake”, “failure”, “error” and “incorrect”[5]. These words are also common in orthodox risk and safety, but binary oppositional thinking[6] (‘right or wrong’, ‘black or white’) is not useful in this context. Risk Homeostasis Theory considers the uncertainties arising from human behaviour and how people may react to safety and risk initiatives. These uncertainties in perception and judgement create a “wicked problem” (Hancock and Holt, 2003). When messes and wicked problems are combined (a wicked mess) there is no correct answer or perfect solution, so safety and risk initiatives must “look less to solve problems than to resolve tensions and realise satisfactory outcomes” (Hancock and Holt, 2003). Limited time, money, resources and energy will always end the problem-solving process before a perfect solution can be found; accepting this is known as “satisficing” (Hancock and Holt, 2003). Safety and risk initiatives therefore cannot simply go right or wrong; there will always be trade-offs and by-products. Risk Homeostasis Theory explains not why initiatives ‘go wrong’ but why they don’t always ‘go to plan’ or achieve the expected results.

How does the safety and risk industry currently explain why initiatives do not go to plan? Whilst this inevitability is generally acknowledged in current legislation as the requirement for residual risk to be “as low as reasonably practicable” (ALARP)[7], this does not effectively explain how or why initiatives go astray. For many, the ineffectiveness of improvement initiatives is certainly counter-intuitive (Long and Long, 2012, p. 33), but it can still be explained logically, mechanistically and simplistically. Proponents of the ‘unsafe acts or conditions’ premise may surmise that more or better engineering controls should have been implemented, that more resources are required, or that it is due to human error and more training is needed. According to the OHS Body of Knowledge (BoK):

Contemporary theory and research suggests that the failures that lead to incidents can be attributed to a combination of factors such as human error, inadequate design, poor maintenance, degradation of working practices, inadequate training, poor supervision and excessive working hours, which in turn are influenced by organizational and management culture. (Ruschena, 2012)

Current safety legislation[8], traditional textbooks[9], training courses[10] and government authorities[11] espouse that the accepted method of eliminating or minimising hazards and risk is to consult the “hierarchy of hazard control”, where controls are simplistically ranked from most to least effective and reliable. When initiatives go wrong, the premise is that one must simply return to the list and choose a more effective hazard control, or perhaps a combination of controls. Ruschena (2012) provides some insight into the complexities of hazard control and states that:

Approaches to control need to move beyond a simplistic application of the hierarchy of control to consider strategies required in the pre-conditions, occurrence, and consequence phases. (Ruschena, 2012)

Ruschena also suggests that failure causation is complex and that better strategies should be formulated from a knowledge of barriers and defences, or of sociotechnical systems such as the “Swiss Cheese” model. Whilst Ruschena briefly acknowledges the need for “an understanding of the psychological principles that explain behaviour of workers and individuals and in groups” (Ruschena, 2012), this still does not usefully explain why safety and risk initiatives may not go to plan. According to Long:

If we want to make sense of risk, then we have to understand the psychological drivers which attract people to it. If we want to make sense of risk we have to understand human decision making beyond simplistically blaming stupidity, human factors or human error. (Long and Long, 2012)

The ineffectiveness of safety and risk controls may extend beyond specific hazards or individual issues and is not always immediately obvious. With the traditionally narrow focus of many safety initiatives, say controlling a specific hazard or acting on the root cause analysis of an incident, there are trade-offs and by-products behind what may seem like the short-term success of any simplistic initiative. Long refers to these as “Band-aid Solutions” (Long, 2014, p. 103).

It is interesting that when we try to solve problems simplistically and in isolation, we create by-products that are either shifted or hidden till later… On the surface it appears that the problem is solved yet, under the surface, a whole new problem has emerged. (Long, 2014, p. 103)

This transfer of risk and by-product effect was clearly demonstrated following the ‘September 11’[12] attacks when, due to the new subjective perception of flying as an unacceptable risk, many decided to drive between cities instead. It was estimated that an additional 1,595 Americans died in car accidents in the year after the attacks.[13] Safety initiatives can also have a longer term or latent effect: in the case of the crash of flight 9525, the cockpit security controls introduced after September 11 appear to have stopped the senior pilot from regaining control of the Airbus A320.[14] Wilde calls this concept “the delta illusion” (Wilde, 2014, p. 12), drawing the analogy of a river delta with three channels. If two channels are dammed (i.e. prohibition and risk aversion), flow is not reduced by two thirds; rather, the third channel widens or another opens up. The premise is that we cannot stem the flow of water (or reduce the amount of risk) through “front end” and traditional safety controls, but only by stemming the flow further upstream. In the case of safety, interventions should aim to increase the desire to be safe, as opposed to erecting barriers and forcing behaviour modifications as in the traditional approach (Wilde, 2014, p. 12). An example is closing a road to traffic, which effectively reduces that road’s accident rate to zero but simply transfers cars, and accidents, to other roads (Wilde, 2014, p. 13).
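The delta illusion can be reduced to a simple conservation sketch (the channel figures below are invented for illustration, not Wilde’s): damming channels redistributes the flow, it does not reduce the total.

```python
# Toy illustration of the "delta illusion": closing channels (roads,
# activities) shifts flow to the remaining ones; the total is conserved.
# All figures are invented for illustration.

def redistribute(flows, dammed):
    """Close the channels in `dammed`; their flow shifts evenly
    across the channels that remain open."""
    open_channels = [i for i in range(len(flows)) if i not in dammed]
    blocked = sum(flows[i] for i in dammed)
    extra = blocked / len(open_channels)
    return [flows[i] + extra if i in open_channels else 0.0
            for i in range(len(flows))]

before = [40.0, 35.0, 25.0]              # three delta channels
after = redistribute(before, dammed={0, 1})

print(sum(before), sum(after))           # 100.0 100.0 (total unchanged)
print(after)                             # [0.0, 0.0, 100.0]
```

Closing the road “dams” one channel and its local accident rate does fall to zero, but in this model, as in Wilde’s analogy, the total flow simply reappears elsewhere.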

Many who espouse or are involved in the design and implementation of engineering controls experience this “delta illusion”. Engineering controls are considered, after elimination, to be the most effective form of hazard control[15]. Yet, despite attempting to reduce the opportunity for human error or intervention, these safety and risk initiatives often do not work as expected or planned, for reasons clearly explained by Risk Homeostasis Theory. A major component of Wilde’s studies in developing the theory was an investigation of the effectiveness of engineering controls. Risk Homeostasis Theory essentially evolved out of a four-year study of a taxi fleet in Munich (Wilde, 2014, p. 93). Half of the cabs were fitted with anti-lock braking systems (ABS), which prevent the wheels from locking and improve steering under hard braking and deceleration. Rather than this initiative having a positive effect on accident rates, the cabs fitted with ABS experienced a slightly higher accident rate. It was found that, in response to the installation of ABS, drivers perceived less accident risk because they had more vehicle control and mechanical devices to protect them. They reacted by accelerating faster, braking later and driving faster around corners. These findings were supported in other trials in Canada and Norway (Wilde, 2014, p. 95).

Malnaca (1990), in relation to traffic safety, explains that although the fundamental government premise is that crash incidence and severity can be reduced by better road design, more safety devices, education and enforcement, Risk Homeostasis Theory argues that this premise is not entirely correct and that such initiatives lead to greater risk taking. Summarising the findings of the OECD in 1990, Malnaca states that:

Engineering can provide an improved opportunity to be safe, education can enhance the performance skills, and enforcement of rules against specific unsafe acts may be able to discourage people from engaging in these particular acts, but none of these interventions actually increase the desire to be safe… The greater opportunity for safety and the increased level of skill may not be utilized for greater safety, but for more advanced performance. (Malnaca, 1990)

Another popular, yet not always effective, safety and risk initiative is training. Wilde refers to training as “intervention by education” (Wilde, 2014, p73).

It is not incompatible with RHT to propose that training could be used as a means for safety promotion, although its effects will necessarily be limited and the past record is not encouraging. (Wilde, 2014, p. 73)

According to Wilde, if a person undertakes training in a new skill then initially they may exhibit an over-confidence, or ‘hubris’,[16] in their ability, or an underestimation of the inherent risks due to overestimating their own ability to identify and manage those risks (Wilde, 2014, p. 78). Wilde cites a 1993 study of truck drivers in Norway who attended mandatory slippery-road training courses (Wilde, 2014, p. 78). Investigators found that the accident rate unexpectedly increased as a consequence, and attributed this to the course contributing not only to driver ability but also to driver confidence, with the net effect being more accidents.
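The Norwegian finding comes down to two opposing effects: training lowers the hazard per unit of exposure, while overconfidence raises exposure. A back-of-the-envelope sketch (all figures invented for illustration, not from the study) shows how the net accident rate can rise even though skill genuinely improved.

```python
# Invented illustration of the slippery-road training result: skill
# improves (hazard falls), but overconfidence increases risk exposure,
# and the net accident rate can go up.

def accident_rate(hazard, exposure):
    """Expected accidents = hazard per unit exposure * exposure taken."""
    return hazard * exposure

untrained = accident_rate(hazard=0.10, exposure=1.0)

# Suppose training cuts the hazard per exposure by 20%, but confidence
# raises exposure (higher speed, later braking) by 40%.
trained = accident_rate(hazard=0.08, exposure=1.4)

print(untrained, round(trained, 3))   # 0.1 0.112: net rate increased
```

Whenever the proportional rise in exposure exceeds the proportional fall in hazard, the trained group ends up with more accidents, which is exactly the pattern the investigators reported.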

Risk Homeostasis Theory also explains why risk and safety initiatives do not always turn out as expected when they involve already skilled and experienced people, particularly when those people find themselves in an over-controlled workplace. Wilde (2014, p. 79) presents the results of a US study, undertaken by Williams and O’Neill, into the on-road driving records of licensed race car drivers. They were found to be less safe than average drivers, both per kilometre driven and per person: they had more accidents and more speeding fines. Given that they took up this occupation in the first place, their target risk is likely to be greater than average, but also:

At their level of skill, driving like an average driver may be intolerably boring. Imagine being a master of Beethoven and all you are allowed to play is ‘Twinkle, twinkle, little star’! (Wilde, 2014, p. 79)

A workplace example would be an experienced tradesman required to sit through a typical “sleeping bag” (Long and Long, 2012, p. 70) safety induction, adorned in excessive PPE and constricted in how he must perform the task and apply his skills. The constriction may come from safety initiatives such as 20-page generic safe work method statements (Long, 2014, p. 31) and subsequent safety observations and audits by people not at all familiar with his trade. The tradesman would find this situation “intolerably boring” (Wilde, 2014, p. 79) and thus automatically fall back on “tick and flick” (Long, 2014, p. 31), his own heuristics and biases developed over many years on the job, and his own subjective perception of the risks. According to Risk Homeostasis Theory, these safety initiatives do not work as planned because they do not alter the subjective perception of risk. They do not allow the tradesman to properly discern the actual risks, alter his perceptions, or address his overconfidence in his existing skill set, nor do they provide any incentive to behave in a safer way. The tradesman may be “lulled into an illusion of safety” (Wilde, 2014, p. 79).

Risk Homeostasis Theory clearly explains why safety and risk initiatives do not work as planned. These initiatives traditionally focus on rational thinking and on mechanistic or administrative hazard and behavioural controls (or unsafe acts and conditions). Such initiatives fail to account for the mostly unconscious way humans make judgements and decisions, and the way they formulate their individual subjective perceptions of risk and subsequent behaviour. Safety and risk initiatives can be counter-productive because:

People will alter their behaviour in response to the implementation of health and safety measures, but the riskiness of the way they behave will not change, unless those measures are capable of motivating people to alter the amount of risk they are willing to incur. (Wilde, 2014, p. 12)

Risk Homeostasis Theory runs counter to the traditional approach to health and safety (Long and Long, 2012, p. 33), which holds that when initiatives don’t work as planned we just need more or better controls, or greater vigilance. Risk Homeostasis Theory proposes that, rather than more controls, sometimes less control and more motivation may be more effective. When our subjective perception of risk is heightened and we are able to make our own decisions about reducing it to an acceptable level (target risk), we behave and adapt accordingly. This explains the success of initiatives such as shared pedestrian/vehicle zones.[17] Similarly, “Fun Theory” shows that people can be more motivated to be safe through positive gains and fun[18] than by being “flooded with so much bureaucracy and administration” (Long, 2014, p. 31).

Bibliography

Hancock, D. and Holt, R. (2003). Tame, Messy and Wicked Problems in Risk Management. Manchester: Manchester Metropolitan University Business School Working Paper Series (Online).

Long, R. & Long, J. (2012). Risk Makes Sense: Human Judgement and Risk. Kambah ACT: Scotoma Press.

Long, R. (2014). Real Risk – Human Discerning and Risk. Kambah ACT: Scotoma Press.

Malnaca, K. (1990). Risk Homeostasis Theory in Traffic Safety. 21st ICTCT Workshop Proceedings, Session IV 1 – 6.

Ruschena, L. (2012). Control: Prevention and Intervention. In HaSPA (Health and Safety Professionals Alliance), The Core Body of Knowledge for Generalist OHS Professionals. Tallamarine, VIC: Safety Institute of Australia.

Wilde, G.J.S. (2014). Target Risk 3 – Risk Homeostasis in Everyday Life. Toronto: PDE Publications – Digital Edition.


[1] Hazardman – http://hazardman.act.gov.au/

[2] Dumb Ways to Die – http://dumbwaystodie.com/

[3] http://en.wikipedia.org/wiki/Risk_compensation

[4] http://www.oxforddictionaries.com/definition/english/initiative

[5] http://www.oxforddictionaries.com/definition/english/wrong

[6] https://safetyrisk.net/binary-opposites-and-safety-goal-strategy/

[7] WHS Act and Regulations – Section 18

[8] WHS Act and Regulations – Section 18

[9] http://www.healthandsafetyhandbook.com.au/how-the-hierarchy-of-control-can-help-you-fulfil-your-health-and-safety-duties/

[10] https://training.gov.au/Training/Details/DEF42312

[11] https://www.osha.gov/SLTC/etools/safetyhealth/comp3.html

[12] http://en.wikipedia.org/wiki/September_11_attacks

[13] http://www.theguardian.com/world/2011/sep/05/september-11-road-deaths

[14] http://www.theguardian.com/world/2015/mar/26/germanwings-crash-raises-questions-about-cockpit-security

[15] https://www.osha.gov/SLTC/etools/safetyhealth/comp3.html

[16] http://en.wikipedia.org/wiki/Hubris

[17] http://www.rms.nsw.gov.au/roadsafety/downloads/shared_zone_fact_sheet.pdf

[18] http://www.youtube.com/watch?v=iynzHWwJXaA


Comments

  1. ALEXANDRE ROGERIO ROQUE says

    May 30, 2022 at 11:50 PM

Analysing the cognitive process, this makes sense: the brain does not like uncertainty (this generates cortisol and bad stress in the long term), so it will seek to balance itself (homeostasis). The problem is where that balance point will be, since this decision is usually unconscious. If it is a routine activity with high exposure to risk, over time a person will seek this balance to eliminate cortisol from the system, and can do so without evaluating the risk!

  2. rob long says

    June 9, 2020 at 3:33 PM

I find it interesting how many safety theories are made ‘fact’ too. How many safety icons, myths, symbols and theories are made fact when they too are mostly just theory at best?
How funny that root cause, coloured matrices and hierarchies of control are made fact when they are not… and here we are getting cautious about risk homeostasis when safety peddles more mumbo jumbo as fact than a witch doctor convention.

    • Riskcurious says

      June 10, 2020 at 9:48 AM

I think all theories and models should be understood in the context of both what they can offer and their limitations. I’m not sure that a “lesser of two evils” argument is particularly useful – if we don’t critically evaluate the theories, how are we even supposed to judge which are better? That seems to just open us up to judgements based purely on face validity, which is likely to just result in adopting as fact those exact models that have been held up as worse than RHT.

  3. Riskcurious says

    June 9, 2020 at 10:21 AM

I have to say, the analogy of “Twinkle Twinkle” made me laugh a little as I immediately thought of the outcome being Mozart’s variations on “Ah, vous dirai-je, Maman”… which is probably not the negative outcome I was supposed to conjure up (of course I am fully aware that Mozart was not forced to only play Twinkle Twinkle, but I can only imagine the variations that might have resulted if he was. I am also not suggesting that we should place such restrictions on people to see what their creativity can achieve). It just made me laugh.

On a more serious note though, whilst I agree that compensatory behaviours need to be considered when dealing with problems of risk control, I think we need to be a bit cautious of risk homeostasis theory for a few reasons. Firstly, the theory itself was just that, a theory: it was not supported by sufficient empirical evidence (and since then, those arguing for the theory have tended to cherry-pick the evidence to support their views). Secondly, the theory would have to accommodate the context- and content-dependent nature of risk decisions, which, even if it were technically correct, would pretty much render it unusable from a practical perspective. It also makes it near impossible to properly validate through research. Just to be clear, this isn’t saying that people don’t compensate for risk controls, just that the equation may not be quite so clear-cut as risk homeostasis theory tends to suggest.

    • Admin says

      June 9, 2020 at 10:28 AM

Of course it is just a theory and not a panacea – I would be happy if people just used the premise to initiate some critical thought and discussion around the potential by-products/risk shifts of their controls, rather than blind obedience to the hierarchy and the warm, fuzzy feeling of ticking off another hazard.

      • Riskcurious says

        June 9, 2020 at 10:32 AM

        Ah yes, that is pretty much all I was saying. Most definitely think about potential for compensatory behaviours and other by-products, but don’t get too caught up in risk homeostasis as a literal thing and start trying to find “target risk” levels etc, as it probably won’t provide much additional benefit.

        • Admin says

          June 9, 2020 at 10:40 AM

Yeah agreed, using a term like “target risk” is certainly going to send crusaders and engineers off on a wacky trajectory, searching for and measuring a magic number… “target risk zero”.

  4. Jaime Lopez says

    January 23, 2020 at 5:18 AM

Awesome reading. I thought I was the only one who thought this way. As an environmental scientist who also manages safety and health, it seemed to me that the way chaos theory was explained in Jurassic Park (the first movie) was what a safety program has to deal with. Now I know this is actually called risk homeostasis. Now I have a name for it, and I know I’m not crazy. Thank you very much.

    • Admin says

      January 25, 2020 at 7:53 AM

Thanks Jaime – for many years I couldn’t figure out why the harder we tried to control and improve safety, the worse it became. I was likewise relieved and enlightened to discover Risk Homeostasis Theory – a double-edged sword: on one hand it gave the problem a name, but it also made me realise what a wicked problem it really is.


  • Safe Work Australia a Vision for No Vision
  • Do we Need a Different Way of Being in Safety?
  • Non Common Sense Mythology
  • Language Shapes Culture in Risk

VIRAL POST!!! HOW TO QUIT THE SAFETY INDUSTRY

FEATURED POSTS

Semiotics and Unconscious Communication in Safety

Breaking the Safety Code

What Can Safety Learn From Desire Paths?

STEM Safety in Drag

Why Resilience Cannot be Engineered

Question for the Safety Thinkers

No Moral Compass in Zero

Flooding is Dangerous, and I don’t Mean the Water….

Impacts of Cognitive Dissonance in the Workplace

The Idealization of Humans and The Zero Delusion

The Stanford Experiment and The Social Psychology of Risk

The Deficit Focus and Safety Balance

The Will To Be and Do

How Semiotics Affects The Return To Work Process

Are You a Safety Fool?

Safety Cries Wolf!

Dialogue Do’s and Don’ts

The Safety Worldview and the Worldview of Safety, Testing Due Diligence

Safety Entitlement and Compulsory Safety Mis-Education

Bridging the Disciplines for Better Outcomes

Nothing is Learned Through Brutalism

What Are the Benefits Of Social Psychology of Risk?

A Masters Degree in ‘Tick and Flick’

Why Have Some Freedom in Safety When a Dose of Fear and Guilt Will Do?

The Fallible Factor and What to Do About It

Free Download – Real Risk – New Book by Dr Robert Long

Social Sensemaking Available Now PLUS Free Share and Giveaway

Mental Health, Risk and Safety – Part 2

Language Shapes Culture in Risk

The Curse of Cognitivism

The Less You See, the More Likely to Die

Test Your Reaction Times

I Just Want Clear Answers

Building resilience trumps the prevention of harm

Adversarialism and the Politicisation of Safety

Right and Wrong in Safety

Free Online Introduction to the Social Psychology of Risk

The Paradox of Positivism for Safety

Snap, Crackle, Pop. That’s the Sound we Love to Hear

Why is fallibility so challenging in the workplace?

More Posts from this Category


WHAT IS PSYCHOLOGICAL SAFETY?

What is Psychological Safety at Work?


WHAT IS PSYCHOSOCIAL SAFETY