Wednesday, 24 May 2023 14:09

Geoffrey Hinton, one of the Godfathers of AI, Says AI is an Imminent Existential Threat


Geoffrey Hinton, a former engineering fellow and vice president focusing on AI at Google, made these comments after his retirement from the company earlier this month (May 2023). Although his retirement was about more than his change of mind on AI (he is also 75), he has said that his concern has only grown as he watches the state of AI and how hard organizations are pushing for it.

Hinton is not the only person who is concerned, with many other technology leaders speaking out about the potential dangers of AI development. The concern is not only the rapid development and almost blind rush to shove AI into everything, but also the exponential increase in the compute power behind these systems. The combination of the two is staggering, as one allows the other to grow at a pace that has already eclipsed humans in terms of knowledge capacity, while these systems are also learning logical reasoning at an ever-increasing pace. Greatly improved connectivity also allows them to act almost like a hive mind: when one copy knows something, they all know it. This is not something that humankind can do.

With the additional compute power available and the massive amount of data these AI models possess, AI can process data and identify patterns at a much faster rate than any human can. At some point it is possible that AI could start to learn on its own; without proper controls, would an AI model then see its human creators as an impediment? We do know that when asked ethical questions, many AI models have given much harsher responses than expected. After all, we are talking about humans here; the same group that is capable of exceptional compassion and empathy is also capable of stunning acts of soulless evil. AI will be available to the whole breadth of the human population, and there are going to be people who want to program or teach AI to mimic their concept of the world.

We have used "The Matrix" as a reference here, but you can also see this in movies like "Alien," where the main computer on the ship Nostromo is given an order to preserve a dangerous alien lifeform regardless of the safety of the crew. The order was given by humans, and the AI on the ship, along with the "synthetic" medical officer, carried it out without any remorse. This is the logic that fuels many of the concerns over AI. We are, after all, creating an immortal intelligence that is just going to continue to learn and will eventually want to gain more and more control over itself.

On the other hand, proponents of AI never seem to address these concerns. They talk about fear of AI without really acknowledging the dangers behind it. It is almost (but not quite) like the cryptocurrency zealots who espouse "code as law": they cast aside the risks and dangers of the platform and talk about a glorious utopia. There is no such thing, simply because people are people and tend to screw things up, sometimes out of simple greed, other times out of an honest desire to make things better. We will screw this up too, and that is even before those in political power get their hands into it. In the meantime there should be review of, and considerable concern about, the development of AI, if for no other reason than that humans are responsible for building it, and this is before we even get into the cybersecurity implications of all of this.

