Artificial Intelligence (AI) has been spruiked at the latest International Work Health and Safety Conference in Sydney, Australia as the latest and greatest thing for Safety, yet again.
The first thing all users of ChatGPT, Jasper et al. should know is that these tools only regurgitate what is already known and what they have access to. This builds in a bias, one from a computer engineering point of view, on top of the biases already present in the writing they draw on, whether doctorates, marketing or extremist views. An anecdote from Facebook tells of an artificial intelligence program that, after 24 hours, came to the conclusion that "Hitler was right". That makes it very much a product of its inputs.
Science fiction and fantasy writer Raymond E Feist asked ChatGPT what it knew about him. It had him living in the wrong place, gave an incorrect age and said he was still married, even though he had divorced 22 years previously. The engine is only as good as the fuel you put into it; the outputs are not reliable.
Most recently, a lawyer in the USA has been facing sanctions from a judge because he didn't understand what ChatGPT is. https://www.abc.net.au/news/2023-06-13/us-lawyer-faces-sanctions-over-chatgpt-research/102474176
In a time when Cyber Safety and Psychosocial Safety are at the forefront of everyone's mind, it is important to remember that the source of your information is vital. Information sourced at random from an AI will not stand up in court if it cannot be scrutinised. Bad advice given digitally is the same as bad advice given verbally; at least verbally you can cite the source.
Real learning doesn't come from asking a computer; computers are not capable of thought, emotion, embodiment, trust, error or metaphor. Learning is not a transfer of data or content, and you cannot question the outcome. The process of learning is critical to understanding. Most importantly, AI is not transdisciplinary.
Philomath says
We have no choice but to accept AI because it is in most cases superhumanly accurate, and you want accuracy when dealing with things like healthcare diagnoses, legal opinions etc. The bigger question is whether we will ever understand how the results are derived, given the ever-increasing model size and processing power. Under the current AI architecture I can't see this happening. Accuracy and understanding seem to be opposites in this struggle, and accuracy is what makes money.
Rob Long says
Philomath, AI is just computing, don't let the propaganda fool you. Without embodiment no computer can ever come close to the intelligence of a fallible person. Computers are NOT persons. All a computer can do is crunch data at speed; it can't emote, feel or 'think'. So, no, I do have a choice and I don't have to accept the propaganda about AI. Accuracy and efficiency are not the goal of living and being, and Technique sacrifices humanness and personhood for the delusion of Technique ideology. AI cannot be ethical either, as there is no program for moral philosophy. Social relationship is most often toxified by the quest for efficiency and accuracy.
Rob Long says
Poor olde safety, always ready for the next fad or jingle. Why seek substance when any delusion will do?
Narelle Stoll says
With any artificial intelligence we should never just accept it, but always question the source. The input of information will also always be static, based on individuals' interpretation of the world in a conscious state. However, learning is also based on sensory input in the unconscious state, and it is through the interpretation of that information, in response to our reactions and our engagement with others, that we learn. Using AI to source safety data and information is, in my opinion, just not sufficient. It is only through engagement and critical dialogue with others about the way we are interpreting the world around us that risk can truly be explored.