The New Safety Saviour – Algorithms
In the dreams of denial of life in Safety Utopia, we wait for the next scheme or propaganda to hit the safety airwaves. In the world of safety reality, such waiting is like free entertainment: waiting to see which dumbed-down group buys the next bottle of snake oil. It doesn’t take long before a new saviour is marketed, because Safety loves saviours; because safety is about saving lives, it’s a religious activity.
Well, the next phase of safety salvation is here: algorithms. We have now discovered that the real reason people get injured is a lack of data. If only we had enough data analytics, we could ‘fix’ cultures and build safer workplaces. This is the Utopia (https://www.safetyrisk.net/safety-utopia-as-abuse/) of predictive analytics and the algorithmic promises of ‘machine learning’ (sic).
In an industry racing at breakneck speed to ‘dumb down’, any promise of Utopia will sell Safety Saviours to the congregation seeking zero harm.
So, in the interests of critiquing the new saviour of algorithms, I have decided to pre-release a section of my new book (book 6 in the series on risk, Tackling Risk: A Field Guide to Risk and Learning), due out in the second half of 2017.
The Non-Sense of ‘Machine Learning’ and Saviour Algorithms
One of the greatest delusions in learning is the popular notion of ‘machine learning’. The term ‘machine learning’ refers to the automated detection of meaningful outcomes and regeneration of further meaningful outcomes by set algorithms. The following are cited for context:
In the past couple of decades it has become a common tool in almost any task that requires information extraction from large data sets. We are surrounded by a machine learning based technology: search engines learn how to bring us the best results (while placing profitable ads), anti-spam software learns to filter our email messages, and credit card transactions are secured by a software that learns how to detect frauds. Digital cameras learn to detect faces and intelligent personal assistance applications on smart-phones learn to recognize voice commands. Cars are equipped with accident prevention systems that are built using machine learning algorithms. Machine learning is also widely used in scientific applications such as bioinformatics, medicine, and astronomy. Shalev-Shwartz, S. and Ben-David, S. (cited April 2017, http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf)
As regards machines, we might say, very broadly, that a machine learns whenever it changes its structure, program, or data (based on its inputs or in response to external information) in such a manner that its expected future performance improves. Some of these changes, such as the addition of a record to a data base, fall comfortably within the province of other disciplines and are not necessarily better understood for being called learning. But, for example, when the performance of a speech-recognition machine improves after hearing several samples of a person’s speech, we feel quite justified in that case to say that the machine has learned. Nilsson, N. (cited April 2017, http://ai.stanford.edu/~nilsson/MLBOOK.pdf)
Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and together they have undergone substantial development over the past ten years. In particular, Bayesian methods have grown from a specialist niche to become mainstream, while graphical models have emerged as a general framework for describing and applying probabilistic models. Also, the practical applicability of Bayesian methods has been greatly enhanced through the development of a range of approximate inference algorithms such as variational Bayes and expectation propagation. Similarly, new models based on kernels have had significant impact on both algorithms and applications. Bishop (cited April 2017 http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf )
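For readers outside computer science, it may help to see what this ‘learning’ concretely amounts to. The sketch below is a minimal illustration only (Python, with invented toy data, not drawn from any of the texts above): a model ‘learns’ a relationship by nudging two numbers until its errors on past data shrink.

```python
# A minimal sketch of what 'machine learning' concretely is:
# iteratively adjusting parameters to reduce error on past data.
# The data points below are invented for illustration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed output)

slope, intercept = 0.0, 0.0   # the machine's entire 'knowledge': two numbers
learning_rate = 0.01

for step in range(5000):
    for x, y in data:
        error = (slope * x + intercept) - y
        # 'Learning' = nudging the two numbers to shrink the error.
        slope -= learning_rate * error * x
        intercept -= learning_rate * error

print(f"learned model: y = {slope:.2f}x + {intercept:.2f}")
print(f"prediction for x=5: {slope * 5 + intercept:.2f}")
```

Whether nudging two numbers (or two billion of them) deserves the word ‘learning’ is precisely what is in question below.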
It’s easy to see how mechanistic disciplines (eg. engineering and computer science), which know so little about learning, education and metaphysics, could come up with such a non-sense discourse. The discourse simply assumes an anthropology for a machine! It assumes that learning is about data transference and replication!
What this non-sense teaches us is this: just because someone spouts certain language doesn’t make it so. (We could also call this ‘Trumpism’.)
Let’s look at some serious problems associated with this discourse.
- The idea that learning is about data is so removed from the real meaning of learning that it makes such language meaningless. According to this definition, anything can ‘learn’.
- There is no reference to subjects, only objects, in this theory of ‘machine learning’.
- The idea that a machine can have ‘life’ and be ‘alive’ is also a non-sense. No machine can be considered metaphysically. Machines not only don’t have a soul/spirit; there is no way they can be referred to as being ‘conscious’.
- Machines don’t have an unconscious and cannot be self-conscious. How does a machine dream? How does it ‘get an idea’? How does a machine ‘daydream’? How does a machine pray? How can a machine meditate? How does a machine create? Innovate? When a machine ‘switches off’, what does its ‘mind’ do? How does a machine imagine? How does a machine formulate a metaphor?
- Machines cannot have a conscience or sense moral necessity in and of themselves. How does a machine experience confusion and paradox? How can an object ‘believe’? In what sense can a machine be a person? How can a machine express faith?
- Machines have no sense of social identity, nor any sense of meaningfulness in the notion of family or group. On what basis does a machine choose between competing moral values? Some of the latest research shows clearly that artificial intelligence (AI) cannot cooperate, collaborate or even ‘think’ in such a way (https://www.weforum.org/agenda/2017/02/ai-learned-to-betray-others-heres-why-thats-okay?utm_content=buffer2d2c2&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer). Indeed, when given a competitive task, AI becomes more aggressive. A classic quote from the research is: ‘We are fascinated by machine learning; but in the end, the machines only learn what we tell them to learn.’
- The idea that something ‘artificial’ (eg. artificial intelligence) can be made non-artificial (human) is also a non-sense. I wonder how a machine defines ‘trust’? How does it heal itself when it gets a virus? What is its immune system? How does it sexually reproduce? How does ‘it’ understand the ‘miracle’ of birth? How does it ‘know’ that the heart is not just a pump? How does a machine die and grieve for the loss of another machine?
- Despite this discourse’s attributions of personhood to machines, such anthropomorphic attribution is simply non-sense (see: ‘At what point should an intelligent machine be considered a “person”?’, World Economic Forum, cited April 2017, https://www.weforum.org/agenda/2017/02/at-what-point-should-an-intelligent-machine-be-considered-a-person?utm_content=bufferf48cf&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer). How can a machine ‘feel’? How can a machine be irrational and arational? How can a machine ‘love’?
- Since when did a capacity to process data become a ‘mind’? How can data transfer and data replication properly be labelled ‘thinking’ or ‘learning’? How can a machine create, innovate, sing, invent, write poetry, self-generate art, belong, meditate, hope, cry, have faith, trust and possess countless metaphysical qualities?
- There is no ‘learning’ that can be attributed to an object. Humans are much more than the sum of shifting data and change. Learning without an anthropology of personhood cannot be learning.
The ideology of mechanics and perfectionism is hidden in this discourse on machine learning. One can only believe in machine learning as an act of faith, something a machine cannot have.
So, we see, there is no machine ‘learning’ because machines don’t have unique personhood. Without an anthropology of learning and the capability of personhood there is no ‘learning’. The replication of algorithms is just that: replication of algorithms. Anyway, we saw in 2017 how the ideology and discourse of ‘machine learning’ and the utopia of algorithms went pear-shaped with the ‘robo-debt’ debacle.
The Centrelink Debacle
At the start of 2017, the Department of Human Services and Centrelink in Australia went into meltdown. The naive and erroneous faith in ‘big data’ met its match in reality. The disaster is now known as the ‘robo-debt’ debacle (see further Sydney Morning Herald, cited April 2017, http://www.smh.com.au/federal-politics/political-opinion/how-the-centrelink-debt-debacle-failure-rate-is-much-worse-than-we-all-thought-20170124-gtxh8q.html).
The robo-debt debacle illustrates how machines cannot ‘learn’ and exposes the delusions of ‘big data’ ideology and ‘machine learning’.
What happened was that Centrelink tried to create a set of algorithms using ‘big data’ in order to ‘catch out’ people who had been overpaid by the Centrelink and Government systems. It was to do so by matching Centrelink data against other data sets from banking, social traffic and collections of Government data (eg. taxation). Unfortunately, on the receiving end of this debacle were human beings: human beings who live in a random world, making fallible decisions in unpredictable ways in unpredictable circumstances. Machines cannot think or learn like humans.
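The flaw at the heart of robo-debt was, by all public reporting, brutally simple: annual income reported to the tax office was averaged evenly across 26 fortnights and compared against the fortnightly income people had actually declared. The sketch below illustrates that averaging logic only; all figures, names and rules are invented for illustration and are not Centrelink’s actual code or payment rates.

```python
# Illustrative sketch of the income-averaging flaw reported in the
# robo-debt debacle. All figures and rules here are invented; this is
# NOT the department's actual code.

FORTNIGHTS = 26
BASE_RATE = 500.0   # full fortnightly benefit (invented figure)
FREE_AREA = 300.0   # income a person may earn before the benefit reduces
TAPER = 0.5         # benefit withdrawn per dollar earned above FREE_AREA

def withheld(income):
    """Benefit withheld in one fortnight, given that fortnight's income."""
    return min(BASE_RATE, max(0.0, (income - FREE_AREA) * TAPER))

def robo_debt(annual_ato_income, declared_fortnightly):
    """Smear annual income evenly across 26 fortnights, then treat any
    difference from what was actually withheld as a 'debt'."""
    averaged = annual_ato_income / FORTNIGHTS
    assumed = withheld(averaged) * FORTNIGHTS
    actual = sum(withheld(x) for x in declared_fortnightly)
    return assumed - actual   # > 0 is flagged as a 'debt'

# A casual worker: 13 fortnights at $2,000 (benefit fully withheld),
# then 13 fortnights unemployed on the full benefit, all declared honestly.
honest = [2000.0] * 13 + [0.0] * 13
print(robo_debt(26000.0, honest))   # 2600.0 -- a 'debt' that never existed
```

The machine did exactly what it was told: it averaged. It could not ‘learn’ that real earnings are lumpy, seasonal and human.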
The consequence of this ideological disaster was up to 90% of Centrelink clients being forced to repay debt they didn’t owe. People were distressed, even suicidal, over this debacle because the data matching was machine-determined and the debt ‘owed’ was determined by pattern matching. Tens of thousands were sent letters demanding payment; humans, on the other hand, do much more than match computer-generated patterns.
On the basis of this ideology, Department of ‘In-Human’ Services staff were instructed not to correct errors, despite knowing the debt notices were wrong (cited April 2017, https://independentaustralia.net/politics/politics-display/the-centrelink-robo-debt-debacle-has-only-just-begun,9951).
Unfortunately, Centrelink ‘learned’ how to dehumanise a long time ago. It has been common practice to sell debts or pass them on to a debt-collecting agency so that Centrelink (now a third party) cannot be approached to consult about the problem. In addition, people know that a visit to Centrelink to talk about concerns is a waste of time, with waits of three hours or more for service, and phone service even worse. Naive politicians, with no idea about Centrelink culture or the welfare climate, simply exacerbated the problem by telling the public (on TV) to just call up Centrelink and sort out the problem.
This follows on from the Census Debacle of 2016 (cited April 2017, http://www.theaustralian.com.au/national-affairs/census-debacle-senate-inquiry-into-what-went-wrong/news-story/6de67fdcb7c74fc7878586420671fd0a).
Meanwhile, in the risk and safety world, the same ideology shows up in the non-sense of ‘Predictive Analytics’ and ‘Saviour Algorithms’. Predictive Analytics and Saviour Algorithms promise that future events can be predicted on the basis of ‘big data’. In risk and safety, this is the latest ideology in the quest to control humans and seek perfection and zero in everything. You can read more about this new faith and ideology here: http://www.predictivesolutions.com/lp/making-case-predictive-analytics-workplace-safety/. Just look at the language and you will see the claim that these analytics can forecast and prevent injuries. Never mind that data is interpreted and subjective; just promise Utopia and line up the sales team.
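To see why the sales pitch overreaches, consider what such ‘prediction’ actually does underneath. The sketch below is a deliberately crude stand-in for the vendors’ proprietary analytics (invented incident records, a simple frequency count): however sophisticated the model, the principle holds that it can only re-project patterns already in its historical data.

```python
# A deliberately simple stand-in for 'predictive safety analytics'.
# All records are invented. However sophisticated the vendor's model,
# the principle holds: it re-projects patterns found in past data.

from collections import Counter

past_incidents = [
    ("site_A", "hand injury"), ("site_A", "slip"), ("site_B", "slip"),
    ("site_A", "fall"), ("site_C", "hand injury"), ("site_A", "slip"),
]

incident_counts = Counter(site for site, _ in past_incidents)

def predict_next_incident_site():
    """'Prediction' = the site with the most recorded incidents so far."""
    return incident_counts.most_common(1)[0][0]

print(predict_next_incident_site())  # 'site_A': yesterday, projected forward
# What the model cannot see: the new contractor starting Monday, the storm
# next week, the near-misses nobody entered into the database.
```

Notice what the data can never contain: the random, the fallible and the unknown, which the closing paragraphs below name directly.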
Omniscience (knowing all) is the only trajectory if the ideology is zero harm. When searching for zero, only god will do. The same safety ideology also seeks to be omnipresent, through cameras placed everywhere on site (the new Big Brother).
All of this non-sensical and delusional misrepresentation of learning, and this faith in machines (‘technique’, see Ellul, https://monoskop.org/images/5/55/Ellul_Jacques_The_Technological_Society.pdf), simply illustrates the vacuum of expertise about learning in mechanistic disciplines like risk and safety. Unfortunately, all of this ideology of predictive analytics, big data, technique, algorithms and The Love of Zero stands in opposition to three fundamental realities:
· The world is random.
· Humans are fallible.
· The future is unknown.
Any denial of these basic realities of human living can only be described as ‘religious faith’. The discourse of predictive analytics and saviour algorithms has more in common with soothsayers, crystal balls and alchemy, and speaks the language of voodoo, than anything that makes sense as ‘human’ or of this world. We need to stop focussing on the cognition of content and dreams of Utopia, and start focussing on personhood-as-learning, if we want to know anything about learning-about-learning and tackling risk.
Do you have any thoughts? Please share them below