One of the world’s foremost experts on artificial intelligence (AI), Geoffrey Hinton, fears he has helped develop technology that could eventually threaten humanity. He is the latest in a line of researchers to come forward and warn against a development that could prove fatal. “It is difficult to see how to prevent evil people from using AI for evil purposes,” says Hinton.
Professor Geoffrey Hinton, 75, has for a number of years held positions at both the University of Toronto and Google.
Now he has resigned from his position at Google to be able to speak more openly about the risks associated with the technology he has helped to develop: artificial intelligence.
“I don’t think they should scale this up more until they have understood how they can control it,” he says in an interview with The New York Times.
To the BBC, he says that chatbots may soon become more intelligent than humans.
He points to several risk factors: the spread of disinformation, job losses, new “autonomous” weapons based on AI and actors who use the technology for malicious purposes.
In the long term, he fears that artificial intelligence may develop unexpected behaviour and pose a threat to humanity.
There have been several expressions of concern from the research community, and it is worth noting that it is those with the longest experience and greatest expertise who are sounding the alarm.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
Then an even more exclusive group stepped forward with a warning.
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
Hinton has not signed any petitions. He resigned last month. Last Thursday, he spoke with the head of Google’s parent company, Alphabet, Sundar Pichai.
Hinton is an AI pioneer.
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analysing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s he was employed at Carnegie Mellon University in the USA. But Hinton refused to accept funding from the Pentagon and left for Canada. He is strongly opposed to the use of robots on the battlefield.
It was in 2012 that the breakthrough came:
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyse thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
When researchers warn of the dangers of unregulated AI, it is because the race between Microsoft and Google is driving development. The restraint of earlier years is gone; now there is a battle for dominance.
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.