I find it so amusing when Safety triggers off its name-calling from its base of ignorance. The latest has been numerous calls of ‘pseudo psychology’ from people with no study or understanding of psychology and, even more so, no idea of social psychology in either theory or practice.
Such name-calling provides a wonderful base for ignorance and fosters the mono-disciplinary view of Safety. This is the view that, whenever it wants to know anything, seeks an answer from within its own discipline. This is how those such as Hopkins, Cooper and Busch seek to understand culture. No wonder they all find an understanding of culture ‘cloudy’ and confusing. Culture cannot be understood either propositionally or conceptually.
Unfortunately, Safety frames the world of culture as if it has no people in it. And in this world, persons are objects and hazards in a ‘system’.
The key to understanding how fallible persons tackle risk in culture is to step outside of the safety discipline and take a Transdisciplinary view of persons, ethics, learning and politics.
Persons don’t serve systems, systems serve persons.
When we step outside of a mono-disciplinary Safety view of risk, we begin to take on the realities of subjective experience, embodied learning, human judgement and decision making, Socialitie, the unconscious, the Semiosphere and the collective unconscious.
None of this is ‘pseudo’ anything; rather, it could help Safety escape its ignorance and isolation.
In a similar way, does Safety call out ‘pseudo law’, ‘pseudo politics’, ‘pseudo ethics’ or ‘pseudo ergonomics’? No, I don’t think so. Then why is Safety so threatened by a discussion of the positive, practical and constructive value Social Psychology can offer in tackling risk?
Strangely, this latest call of ‘pseudo psychology’ came from the perspective of marketing Safety software.
Of course, computer software is indifferent to people and is at best a tool that embeds design choices that are political, ethical and economic. Only the designers of such software imagine that design is objective, when it is quite the opposite.
Projecting that software safety solutions are somehow magically ethical, objective and independent of people simply peddles the lie that safety software is a safety ‘wonder drug’. A good example can be viewed here:
The name says it all. I am not your protector, and software offers no hope of protection from risk. It is not my job to ‘over-ride’ (https://safetyrisk.net/safety-gives-me-the-right-to-over-ride-your-rite/) you or save you. Humans are not hazards to be ‘saved’ (https://safetyrisk.net/how-to-be-a-safety-extremist/). The job of Safety is to advise and facilitate ownership of risk, not to unethically overpower others according to what is deemed a risk.
Software doesn’t ‘protect’; it just receives data that humans ‘use’. Software can’t listen, converse, observe, facilitate ownership or help. Software has no ‘ability’ to humanise risk. Software behaves as it is designed, and design engineers generally have little idea of the ethics, politics or bias they build into their software; indeed, most think safety software is value neutral. The opposite is the case.
There is no salvation in the machine that goes ‘bing’ (https://www.youtube.com/watch?v=NcHdF1eHhgc). How typical that such a view should resort to the name-calling of ‘pseudo psychology’.
Of course, when things fall apart and people are hurt, when persons suffer and experience harm, do people call on software for help or on counsellors? We know the answer, except apparently safety software doesn’t.
When people need help, care, understanding and practical, positive and constructive approaches to risk, they find it in It Works (https://www.humandymensions.com/product/it-works-a-new-approach-to-risk-and-safety/).
Do you have any thoughts? Please share them below