While AI chatbots like ChatGPT and Google Bard have certainly been impressive, not everyone is thrilled. AI experts have called for “digital health warnings,” and Elon Musk joined many others in the industry in signing an open letter calling for a pause in the AI arms race.
Now, one of Google’s own is joining the anti-AI side. According to The New York Times, Dr. Geoffrey Hinton — who pioneered the use of neural networks in AI — has left Google after more than a decade with the company. His reason? So he can speak out freely against the rise of AI.
For the record, Dr. Hinton doesn’t seem especially concerned about a Skynet-like scenario a la The Terminator. There is one moment where he expresses worry about autonomous weapons becoming an issue 30 to 50 years from now, but for the most part his concerns center on the present day and near future: lack of control, misinformation and automation.
An AI arms race could lead to humanity's downfall
Despite feeling as recently as a year ago that Google was a “proper steward” for AI technology, the rise of chatbots and the ways they are abused have seemingly changed his mind. He now sees Google and Microsoft as locked in an AI arms race that is impossible to stop, while the average person still struggles to differentiate between AI-created and human-created content.
Hinton's concerns about AI growing at an alarming rate aren't novel. In fact, they were at the core of the open letter mentioned earlier, which stated that AI posed “profound risks to society and humanity.” And while Hinton did not sign that letter, it's clear he now agrees.
The tipping point appears to have been last year, with the rise of OpenAI and Google's own work with large language models (LLMs). As these models were trained on larger and larger amounts of data, Hinton became convinced that these AI systems were “actually a lot better than what is going on in the brain” and that their potential five years down the line was frightening.
“Look at how it was five years ago and how it is now,” Hinton said of AI technology to the Times. “Take the difference and propagate it forwards. That’s scary.”
AI automation could destroy jobs
This brings us to Dr. Hinton's second concern: automation. There's a real fear that AI will eventually take most of our jobs, sending society cratering. And while we are likely still some distance from that possibility, if it ever comes, we've already seen some evidence of ChatGPT and other AI tools taking away work.
Most of this work is what Hinton refers to as "drudge work," and for now AI seems to be eliminating tasks rather than entire jobs. But Hinton thinks these systems could soon start taking away real jobs.
He also thinks automation could lead to grave missteps. He is concerned that as companies allow AI not only to write code but also to run that code, these systems could become smarter than people and evolve into autonomous weapons that we can no longer control.
Misinformation is already rampant in AI
Finally, Hinton is concerned about misinformation. “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said to The New York Times.
To be fair, this is a very real, immediate concern. We've seen bad actors run rampant with AI tech, from cracking passwords to faking kidnappings and more. There is also already speculation about when AI-generated and human-written text will be indistinguishable, and Hinton thinks that moment is coming soon, saying the average person will “not be able to know what is true anymore.”
This moment, frankly, is already here. Whether or not you agree with Hinton's more alarmist concerns, companies are already reacting to deepfakes. AI image generator Midjourney halted free trials earlier this year after AI-generated images of Donald Trump’s arrest and Pope Francis wearing a Balenciaga puffer jacket went viral and were accepted as gospel.