The 5 dangers of AI according to Geoffrey Hinton, one of its pioneers

Geoffrey Hinton, a pioneer of artificial intelligence, resigned from Google on Monday, May 1, after ten years with the American giant. In an interview with the New York Times, he reveals that he left so he could speak freely about generative AI such as ChatGPT without it affecting the company. He expresses regret over his role in developing the technology, consoling himself "with this common excuse: if I hadn't done it, someone else would have done it".

A graduate in experimental psychology with a doctorate in artificial intelligence, the London-born researcher has devoted his entire career to AI. Whether at Carnegie Mellon, the University of Toronto or Google – which he joined as part of the Google Brain team in 2013 – Geoffrey Hinton developed artificial neural networks: mathematical systems that learn new skills by analyzing the data they are given. In 2018, he received the Turing Award for this work. Considered a major figure in the field of AI, he inspired the work of Yann LeCun and Yoshua Bengio.

Now repentant, he returns to the dangers of AI, which he believes poses "serious risks for society and humanity", as he explains in his New York Times interview. Here are his five biggest fears.

Competition between the tech giants has driven progress that no one could have imagined, according to Geoffrey Hinton. The pace of these advances has exceeded scientists' expectations: "Only a handful of people believed in the idea that this technology could actually become smarter than humans... and I myself thought it would be […] the case within 30 to 50 years, or even more. Of course, I don't think that anymore."

From an economic standpoint, the godfather of AI fears that this technology could drastically disrupt the labor market. "It takes away the drudge work," he said, adding that "it might take away more than that," including the jobs of translators and personal assistants. These cuts will not spare the "smartest" workers, despite the widespread belief that they are safe.

The former doctoral advisor of Ilya Sutskever – OpenAI's current chief scientist – believes technological progress is outpacing the means we have to regulate the use of AI. "I don't think we should speed it up until we figure out if we can control it," says the researcher. He fears that future releases will become "threats to humanity".

According to him, the threat also comes from the misuse of AI by dangerous actors. It is "difficult to figure out how to stop bad actors from using it for evil purposes," he worries. Geoffrey Hinton is particularly opposed to the use of artificial intelligence in the military field, fearing above all the development of "robot soldiers". In the 1980s, he went so far as to leave Carnegie Mellon University in the United States because his research there was funded by the Pentagon.

Last but not least: the threat of AI-driven misinformation. This scientific effervescence, coupled with the massive use of AI, will make it almost impossible to discern "what is true from what is not". The scientist even speaks of a "bullshit generator", an expression referring to AI's ability to produce persuasive statements that sound plausible without being true.

So what is the solution? The neural network expert advocates global regulation, but he remains clear-eyed: "It may not be possible […]. Unlike with nuclear weapons, there is no way to know whether companies or countries are working on the technology in secret. The only hope is that the greatest scientists in the world work together to find solutions to control [AI]."