Artificial intelligence can't dream - that's why it loses at chess

Fastest route to Delhi? Diagnoses in medicine? Food supply in Wuhan? Even complicated problems are child's play for artificial intelligence (AI): it learns from data, recognizes patterns and, as is well known, provides helpful services in ever more areas of life. Now critics of frequently flawed human decision-making are asking whether we should let AI make more of the decisions that, until now, only humans have made. This raises important questions of ethics, morality and responsibility. Far more fundamental, however, is the question of whether this would actually lead to the desired result, namely better decisions.

Yes, computers calculate much faster than we do. They can sift through gigantic amounts of data in a short time and identify patterns in mountains of training data with pinpoint accuracy. When running their algorithms, they do not suffer from cognitive biases as we humans do, and they are consistent in their analysis. All this makes them attractive for decision-making, at first glance.

But in its analysis and foresight, AI cannot go beyond the concrete and the existing. Only we humans can. While machines see only what is, humans can dream up what could be - and then make it a reality. Two examples illustrate this:

Silicon Valley, California: A car went into a curve on a normal road; the driver now only had to steer slightly to the left. But something went wrong. The car went too far to the right and dangerously close to the curb. It braked sharply, but only when it was already scraping against the side barrier and threatening to veer off the road. Eventually the car came to a stop - on a thin line of purple pixels on the monitor. The minor crash was a digital simulation run on the computer servers of Waymo, Google's autonomous driving company.

The simulation aims to remedy a serious disadvantage in the development of self-driving cars: the lack of training data on situations that rarely occur in reality. For more than a decade, the industry has been collecting real-world data to train its AI models. Fleets of cars equipped with complex sensors and video cameras drive through streets and collect millions upon millions of data points per second. But the resulting systems cannot cope with exceptional situations, because the training data contains too few of them: a plastic bag flapping against the windshield just as the car is about to hit black ice, or a mattress suddenly lying in the middle of the road.

So Waymo developers invented an alternate reality full of exceptional situations. The system used to generate this synthetic data is called Carcraft - a homage to the computer game World of Warcraft. Carcraft offers 20,000 basic scenarios, including extremely rare and dangerous situations invented by humans. Every day, 25,000 virtual Waymo cars cover around 13 million kilometers - with success: in 2020, Waymo cars drove an average of more than 45,000 kilometers before a human had to intervene, dramatically better than the competition.
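The core idea behind such a simulator - deliberately over-sampling rare, human-invented edge cases that almost never appear in real driving logs - can be sketched in a few lines of Python. The scenario names and weights below are purely illustrative assumptions, not Waymo's actual catalogue or system:

```python
import random

# Hypothetical scenario catalogue: real driving data is dominated by
# routine situations, so a simulator deliberately over-samples the
# rare, dangerous cases invented by human engineers.
SCENARIOS = {
    "routine_highway_cruise": 0.05,    # common in reality, down-weighted here
    "plastic_bag_on_black_ice": 0.35,  # rare in reality, over-sampled
    "mattress_in_the_road": 0.30,
    "sharp_curve_near_barrier": 0.30,
}

def sample_scenarios(n, weights=SCENARIOS):
    """Draw n synthetic training scenarios according to the simulator's weights."""
    names = list(weights)
    probs = list(weights.values())
    return random.choices(names, weights=probs, k=n)

# Count how often each scenario appears in a synthetic training set.
counts = {}
for s in sample_scenarios(10_000):
    counts[s] = counts.get(s, 0) + 1
print(counts)
```

In the resulting synthetic set, the events that are vanishingly rare on real roads now dominate, which is exactly what gives the driving model enough examples of them to learn from.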

But Carcraft's rare scenarios weren't created by machines. It was people who thought beyond the existing and thus created the informational basis for better self-driving cars.

Yes, the hyper-rationalists in Silicon Valley and elsewhere would like to leave driving and much else to the machines. But they forget: behind the scenes, the AI's strings are still being pulled by clever people.

Our ability to dream up alternate realities and use them to make better choices is a trait machines would envy (if they could). AI will only be successful if tomorrow is like yesterday.

Machines are trapped in reality, and they also lack the capacity for abstraction - the ability to see the forest and not just the trees. London-based AI company DeepMind provides an example of this. In 2018, it introduced an AI system called AlphaZero that learned the board games chess, Go and shogi just by playing against itself. The only human input was the rules of the games. After just nine hours, in which AlphaZero had played 44 million games of chess against itself, it was already beating the best traditional chess program in the world.
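The self-play loop the text describes - an agent that improves given nothing but the rules, by playing against its current self and nudging its evaluations toward the outcomes - can be illustrated with a toy game. The sketch below is not DeepMind's algorithm (AlphaZero uses deep networks and Monte Carlo tree search); it is a minimal tabular stand-in, and single-pile Nim is my illustrative substitute for chess:

```python
import random

# Toy self-play learner: the only input is the rules of the game
# (single-pile Nim: take 1-3 stones, whoever takes the last stone wins).
LEGAL_TAKES = (1, 2, 3)
value = {}  # value[pile] = estimated chance that the player to move wins

def best_move(pile, explore=0.0):
    """Pick the take that leaves the opponent the worst-valued position."""
    moves = [t for t in LEGAL_TAKES if t <= pile]
    if random.random() < explore:
        return random.choice(moves)
    return min(moves, key=lambda t: value.get(pile - t, 0.5))

def self_play_game(pile=10, explore=0.3):
    """Play one game against ourselves and update the position values."""
    history = []  # (pile before the move, player to move)
    player = 0
    while pile > 0:
        history.append((pile, player))
        pile -= best_move(pile, explore)
        player = 1 - player
    winner = 1 - player  # the player who just took the last stone
    for pos, who in history:
        outcome = 1.0 if who == winner else 0.0
        old = value.get(pos, 0.5)
        value[pos] = old + 0.1 * (outcome - old)  # nudge toward the result

for _ in range(20_000):
    self_play_game()

# In Nim, a pile that is a multiple of 4 is lost for the player to move;
# self-play discovers this without ever being told.
print(value.get(4), value.get(5))
```

After enough self-play games, the learned value of pile 4 sinks well below that of pile 5 - the agent has rediscovered the game's losing positions purely from the rules, which is the pattern-extraction ability the passage attributes to AlphaZero.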

When grandmasters then competed against the system, they were amazed at its peculiar way of playing: For more than a century, chess experts had agreed on certain basic concepts and strategies, for example by assigning certain values to individual pieces and positions on the board. In contrast, AlphaZero made radical moves, prioritized mobility over position, and had no qualms about sacrificing pieces.

Is it possible that, after centuries of playing chess, we humans are so stuck in our judgments about moves and positions that this new, successful strategy simply remained hidden from us, while AlphaZero was able to extract it from the patterns in its training data?

The answer is no. Sure, AlphaZero won games - but only because it mimicked equally successful patterns in its training data. It followed patterns without being able to explain them, and without deriving a more general strategy from them. An AI system cannot invent anything. We humans, by contrast, grow smarter because we know how to learn from AI, because we develop general insights from individual experiences.

Especially at a time when the answers to the great challenges we face are not the conventional solutions of yesterday, we need the ability to think in new ways and act differently. AI cannot do that; only we humans can, by dreaming purposefully.

Austrian lawyer Viktor Mayer-Schönberger, 56, is Professor of Internet Regulation at the University of Oxford. This text is an adapted excerpt from his new book "Framers", which he co-authored with the American journalist Kenneth Cukier and the French computer scientist Francis de Véricourt.