This is because one of the best ways to learn is through feeling the results of doing. There’s nothing like disappointment and failure to teach a lesson. Similarly, there’s nothing like reward and success to encourage and motivate you to develop and grow.
These are the experiences that help humans learn and grow. We learn from what moves us. There is no learning without movement. There is no learning without e-motion.
Data in and data out is NOT learning, whether you do it individually or in teams.
Education is not about shifting content or knowledge; the purpose of education is learning.
This is why pushing data into a receptacle and extracting data is not learning. It’s not even knowledge. Regurgitating data is not learning. This is also why most inductions in safety don’t work. Indeed, sitting in a room being blasted with 300 slides of data, the drone of being lectured and flooded with information, is a complete waste of time. Worse, the experience teaches attendees about anti-learning.
And when things become tedious and boring, turning to AI is no better.
As much as we might like to think that AI ‘helps’ us get things done, there is a trade-off. That trade-off is ‘learning’. Yes, it might save work, time and even the pain and struggle of work, but these too are essential for learning.
The same applies when people think that getting a risk assessment done by copy and paste is good enough.
All any of this confirms is that Safety thinks the outcome is the purpose of safety. All it states is that producing paperwork or completing a checklist is what safety is.
What we lose in copy and paste and in using AI is the stress, challenge and experience of learning. Learning is all about the process, NOT the outcome.
I’m no luddite. I’m typing this blog using a computer. But, it’s just a tool. I still have to engage in critical thinking in the construction of an argument. I still want to do the work. I still want to learn.
So, as much as people encourage others to use AI in getting things done, think of all that is traded off in the process.
And, if something goes wrong, AI will never be held to account; it will be the PCBU who handed over responsibility for risk to IT. If you ever get to court, the excuse that ‘AI did it’ won’t wash.
More to come on this in a podcast soon with Dr Ashhurst, Dr Anand, Greg Smith and myself, in a series called ‘How Safe is AI?’
Do you have any thoughts? Please share them below.