GPT-4: "It's time to slow down the development of artificial intelligence"

Pattie Maes is a professor in the Media Arts and Sciences program at the MIT Media Lab in Cambridge, near Boston. As a faculty member of the Center for Neurobiological Engineering, she studies how brain-computer interfaces can improve memory, attention, learning, decision-making, and sleep. It is an exciting area, as advances in brain imaging also help us learn more about artificial intelligence. We asked the head of the Media Lab's Fluid Interfaces research group what she thinks of GPT-4, the newest language model from OpenAI, the California-based company that developed ChatGPT.

Among other feats, this Swiss army knife can code a replica of the Pong video game in 60 seconds, write a lawyer's pleading, or even give sommelier and investment advice. The program is even said to outperform 90% of candidates on the bar exam. What does the researcher, who holds a doctorate in artificial intelligence from the Vrije Universiteit Brussel in Belgium, make of these dizzying advances? "I think it would be more useful to build systems that help people become smarter than to seek to build machines that can match, surpass, and replace us." Interview.

Are you impressed with the performance of the ChatGPT chatbot?

Pattie Maes: Yes. But the current form of artificial intelligence has nothing to do with true intelligence. This is not an approach that will one day lead to general intelligence, because these language models have no understanding of the world, they cannot reason, and so on. That does not mean these systems are not useful, however. Many interesting tools can be built with these technologies, for summarizing a text, helping with writing and editing, developing a storyboard, or learning languages.

What does GPT-4, OpenAI's latest language model, which is currently making headlines, change technically?

Its linguistic abilities seem to have been greatly improved, but it is still based on the same method of statistical word prediction. This method is limited in that the system has no real model or understanding of the world. OpenAI itself says that it "hallucinates" from time to time and spits out false information. But in addition to its enhanced language skills, GPT-4 also has multimodal capabilities, i.e. it can look at pictures and tell you what is in them. I'm curious how deeply these two abilities are integrated with each other.
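To see what "statistical word prediction" means in practice, here is a minimal, purely illustrative Python sketch (our example, not OpenAI's method: a real model like GPT-4 uses a large neural network over subword tokens, not a bigram frequency table). It picks each next word only from how often words followed each other in a toy corpus:

```python
import random
from collections import defaultdict

# Toy corpus: the only "knowledge" the model will ever have.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count, for every word, which words followed it and how often.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Sample the next word in proportion to observed frequencies --
    # pure statistics, no meaning and no model of the world.
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# Generate a short continuation starting from "the".
word = "the"
generated = [word]
for _ in range(6):
    if not counts[word]:  # dead end: this word was never followed by anything
        break
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Run a few times, the script produces fluent-looking but meaningless strings such as "the dog sat on the mat and", which is the limitation Maes describes, scaled down from billions of parameters to a dozen counts.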

Some people think AI could qualify as a lawyer. Do you think some jobs are in jeopardy, and if so, when?

It's definitely going to change a lot of jobs, but I think we're more likely to see a lawyer or a doctor use a system like this to become much more efficient than to see the system replace them, especially in high-stakes professions like those. However, for low-risk, low-stakes tasks, where the cost of an occasional error is not so high, jobs could be replaced: customer service on e-commerce websites, for example.

Should we establish ethical rules for AI development?

Yes. I believe that the choices about where to take the development and deployment of artificial intelligence should not be reserved for engineers and start-ups in Silicon Valley. Artificial intelligence will have a profound impact on our society and our world; it will concern us all. Why should engineers decide how we want to change our society? Why should engineers and entrepreneurs decide the future we want to create and live in? AI developers often cite economic reasons to justify development, but several economists have argued that it would be far more beneficial to the economy to aim at increasing human capabilities rather than replacing workers with AI technology. This is explained by researcher Erik Brynjolfsson, a professor at Stanford, in his article "The Turing Trap". (1)

Beyond the economy, LLMs (the large language models that played a key role in the birth of ChatGPT, editor's note) and chatbots will degrade the social and political landscape. The problem of "fake news" will get worse, the erosion of truth will continue, and political polarization will increase. LLMs and chatbots also risk weakening our social fabric, with people relying on virtual friends for conversation and intimacy rather than on real people. I think we should slow down the development of AI and take the time to think about the societal implications of what we are creating, rather than building AI systems just "because we can", without thinking about the consequences.

(1) In this article, the researcher explains that as machines become better at replacing human labor, workers lose their economic and political bargaining power and become increasingly dependent on those who control the technology. This is what he calls the Turing trap. By contrast, when AI focuses on augmenting human capabilities rather than emulating them, humans retain the power to demand a share of the value created.