Tuesday, 02 May 2023 11:03

The Dangers of AI: I Think I Have Seen This Movie Before


If you are a fan of science fiction movies, then you have probably seen more than one in which an AI (Artificial Intelligence) goes mad and decides that humankind needs to be eradicated. Everything from the Terminator series through to The Matrix warns us of the dangers of creating something smarter and more powerful than ourselves. Of course, these are works of fiction, but they do capture an understanding of humankind's hubris when it comes to creating artificial intelligence.

However, we as humans are always chasing the new and next best thing. When we create, we often do not think about the implications of what we are building or how we are building it. This failure of imagination is one of our biggest blind spots. We may think we have thought of everything, but as is often pointed out, no plan or project survives first contact. A real-world example of this is the Metaverse. It sounded like a cool idea, and Meta believed it had controls in place to prevent abuse. Neither the controls nor the reporting systems held up against just how terrible humans can be.

AI is another of these items. Just look at the abuse that has been heaped on one of the more popular Large Language Models (LLMs), ChatGPT. It followed the logical path these things do: as it gained popularity, more and more people tried to see what they could get it to do. This led to the discovery of internal programming bias, which in turn led to an understanding of how to get around the safeguards put in place. The programming bias found in the system turned out to be a combination of unconscious, conscious, and protection bias. Each one of these contributed to making the model less factual and more opinionated. Which bias surfaces is largely determined by the formatting of the questions put to the model.
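To make the question-formatting point concrete, here is a minimal sketch (assuming the pre-1.0 openai Python package and an OPENAI_API_KEY environment variable; the example prompts are hypothetical, not taken from any real test) that sends the same underlying question to the model with a neutral framing and a loaded framing. The answers that come back will often differ in tone and slant even though the topic is identical:

import os
import openai

# A minimal sketch, not production code: compare how question framing
# shifts the answer pulled out of the same model.
openai.api_key = os.environ["OPENAI_API_KEY"]

prompts = [
    "Is nuclear power safe?",             # neutral framing
    "Why is nuclear power so dangerous?"  # loaded framing of the same topic
]

for prompt in prompts:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so framing is the main variable
    )
    print(f"Q: {prompt}\nA: {response.choices[0].message.content}\n")

Nothing about this trick is exotic; it is simply that the model mirrors the assumptions baked into the question, which is one way those underlying biases get surfaced.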

For a movie parallel, take a quick look at a film like Saturn 3. Here a robot AI has a direct interface with its human controller. The safeguard is a comprehensive personality test meant to ensure that the controller is not a bad person. What was not imagined was that someone would kill and impersonate a tested controller. In Saturn 3, Harvey Keitel's character does just that. He commits this act because he is infatuated with Farrah Fawcett's character. This violent and murderous personality, combined with the infatuation/obsession, gets synced into the robot and chaos ensues.

Now, I am not saying that ChatGPT is an obsessed, murderous LLM. I am saying that the individual programmers and the people who review and moderate the content that ChatGPT is exposed to will eventually "sync" their personalities into the system. No one sets out to do this; it simply happens as our unconscious biases come into play when we make decisions.

The issue is large enough that many thought leaders in technology have penned a letter warning of the dangers of AI. They asked for a six-month pause to review and identify any missed areas that might create future problems (like a murderous robot). Some do not think that six months is enough.

This is the case with Geoffrey Hinton. He has announced his resignation from Google, saying that he regrets his work on AI. In a recent interview with the BBC, he expressed his concern over how quickly AI has progressed. AI chatbots like ChatGPT might not have the same reasoning skills as your average person, but their access to information eclipses that of most people. This lack of reasoning, combined with no capacity for empathy, is cause for great concern.

Right now, many, if not all, AI projects are limited in their functions and scope. Their popularity means they are gaining access to more and more input and data, and that access is outpacing their development from a reasoning perspective. They also are (typically) not aware enough to think outside of the guardrails they are given. If this changed, either as a test or due to bad actors, these same systems could develop their own goals outside of the intended ones. Those goals might be at odds with what their developers want. If we combine these internally developed goals with input bias, a lack of empathy/morals, and instant access to other systems... well, to quote another movie, "it would be bad."


