In 2012, Geoffrey Hinton and two of his graduate students at the University of Toronto created technology that has become the foundation of the Artificial Intelligence systems used by some of the biggest tech companies in the world. Now, Hinton has left his job at Google and is warning about the risks of the technology he helped build, saying that he now regrets his life's work.
Dr. Hinton became a pioneer of the field when he and two of his graduate students at the University of Toronto created technology that would become the foundation of the Artificial Intelligence (AI) systems the world's biggest tech companies regard as the key to the industry's future.
Dr. Hinton, however, recently quit his job at Google, where he worked for the past decade, so that he could speak freely about the dangers of AI and the way the technology is developing. In a recent interview with the New York Times, he said that he now regrets his life's work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he stated. Industry leaders have been boasting about the rapid development of AI, arguing that these systems could be as important to the industry as the introduction of the web browser in the 1990s. That development, however, has also drawn backlash over the dangers the technology could pose.
“It is hard to see how you can prevent the bad actors from using it for bad things.”
More than 1,000 technology leaders recently signed an open letter calling for a six-month pause on the development of new AI systems because of their “profound risks to society.” The letter followed the March release of a new version of ChatGPT from the start-up OpenAI.
In addition to that letter, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks that come with AI. Dr. Hinton signed neither, as he did not want to speak out publicly about these risks until he had left Google.
In 2012, Dr. Hinton and his graduate students Ilya Sutskever and Alex Krizhevsky built a neural network that could analyze thousands of photos and teach itself to identify common objects. A neural network is a mathematical system that learns skills by analyzing data, an idea that most scientists before Dr. Hinton had dismissed, doubting such networks could ever succeed.
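The idea of a system that "learns skills by analyzing data" can be illustrated with a toy example. The sketch below (purely illustrative, not Hinton's actual system, which was vastly larger and trained on images) is a tiny neural network that starts with random weights and, by repeatedly adjusting them to reduce its error, teaches itself the XOR function from four labeled examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four training examples of the XOR function: inputs and labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial weights: one hidden layer of 8 units, one output unit.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

for step in range(20000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: nudge weights to shrink the error.
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

preds = (out > 0.5).astype(int).ravel()
print(preds)
```

The network is never told a rule for XOR; it discovers one by trial and error over the data, which is the core idea that Hinton's image-recognition breakthrough scaled up to millions of photos.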
Google went on to spend $44 million to acquire the company the three researchers had started. Their work eventually led to the creation of increasingly powerful systems, including chatbots like ChatGPT. As Google and other companies continued to develop neural networks, Dr. Hinton grew more worried about the power this technology could hold.
“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” he said. “As companies improve their AI systems, they become increasingly dangerous. Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
“Until last year, Google acted as a proper steward for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop,” Dr. Hinton told the New York Times.
One of Dr. Hinton's most pressing concerns involves AI-generated pictures, videos, and text, and whether people can tell them apart from the real thing. He worries that the “average person will not be able to know what’s true anymore.”
There is also concern that the technology will upend the job market, automating work now done by people and potentially leaving a large portion of the population out of a job.
“The idea that this stuff could actually get smarter than people — a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that,” he said.
Unlike with nuclear weapons, he noted, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope, he said, is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Eric Mastrota is a Contributing Editor at The National Digest based in New York. A graduate of SUNY New Paltz, he reports on world news, culture, and lifestyle. You can reach him at email@example.com.