"Smile" during autonomous driving: How can robot cars make themselves understood?

Robot cars have to communicate with their environment: with pedestrians, drivers and cyclists. But what should that look like? Even non-verbal communication between two human drivers is often complicated. In autonomous driving, there has so far been a proliferation of competing ideas and approaches.

No voice, no facial expressions, no body language: robot cars will soon be the hardest road users to understand. To ensure that the silent machines do not become a safety risk at intersections and crossings, they must learn an understandable language in which they can communicate with pedestrians, cyclists and human drivers. What that language could look like is currently being negotiated.

It has long been clear to developers and manufacturers that robot cars need to communicate with their environment. Only on the question of how has there been little agreement. Each prototype and each study has followed its own approach: Audi, for example, experimented with projections, light symbols that the car casts onto the road with its headlights to be read by other road users. The Swedish supplier Semcon relied on a screen face on the radiator grille that smiles at passers-by when it is safe for them to cross the street. Similarly human: Jaguar's huge electric eyes, complete with eyelid, iris and pupil, which are meant to make eye contact with other road users.

As coherent and interesting as these ideas are, such uncontrolled variety will not work on the road. When autonomous cars hit the streets in large numbers, they will need a common language. It is inconceivable that every car manufacturer would establish its own solution.

Engineer Stephan Cieler is looking for a universal form of communication for robot cars. A developer at the supplier Continental, he works, among other things, on the ISO committees standardizing so-called external, dynamic human-machine interfaces (HMI). His concern is how robot vehicles and human road users can exchange information as safely as possible in the future. For example at the zebra crossing, where the most important question is whether the car will really stop when you step onto the road. Or at bottlenecks, where two drivers have to agree on who has the right of way.

Cieler and his team initially developed 17 rules for human-machine interfaces, something like a basic rulebook for communication between the car and the people around it. The car's signals must be unambiguous and must match the vehicle's other behavior: if the car indicates to another road user that it is waiting at an intersection, it must not briefly accelerate again up to the stop line. Furthermore, once a message has been set, it must not change again in the course of the interaction. In addition, signals must always be clearly recognizable to other road users in all weather and lighting conditions.
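To make the "once set, never changed" rule concrete, here is a minimal sketch of how such an invariant could be enforced in software. The class, the message names and the notion of an "interaction" are assumptions made for this illustration; they are not part of Continental's software or the ISO draft.

```python
from enum import Enum
from typing import Optional


class HmiMessage(Enum):
    """Illustrative external HMI messages (names assumed, not from any standard)."""
    AUTONOMOUS_ACTIVE = "autonomous driving active"
    YIELDING = "I give you priority"


class ExternalHmi:
    """Toy model of one rule from the article: once a message has been
    set for an ongoing interaction, it must not change again."""

    def __init__(self) -> None:
        self._current: Optional[HmiMessage] = None

    def set_message(self, message: HmiMessage) -> None:
        if self._current is not None and message != self._current:
            # Changing a signal mid-interaction could mislead pedestrians,
            # so the rule set described in the article forbids it.
            raise ValueError("message already set for this interaction")
        self._current = message

    def end_interaction(self) -> None:
        """Only after the interaction is over may a new message be chosen."""
        self._current = None
```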

The end result is an astonishingly simple language mechanism. "We believe that communication has to be visual, very, very simple and that only a few messages are necessary or meaningful," says Cieler. This also reduces the risk of misunderstandings. "If the car were to signal 'I've seen you', for example, that might still make sense for a single passer-by. But who does the vehicle mean in a group or a crowd?" says the engineer, giving an example.

He makes the same argument against communicating instructions or requests: in busy urban traffic, other road users cannot always tell with certainty who is being addressed. In the end, then, the robot car only needs to send two messages to its environment: "Autonomous driving active" and "I give you priority".

For the concrete design of the robot car HMI, Cieler has a suggestion that is also impressively simple: a turquoise band of light around the vehicle indicates that it is in autonomous mode. When the band flashes, the car signals that it is stopping and giving the other road user priority. Turquoise because, according to Cieler, the color, unlike blue or yellow flashing lights, is not yet used in traffic. And because it does not imply an instruction, the way a red traffic light does, or a green light that suggests it is safe to go.
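Put into code, the signal logic Cieler describes amounts to a very small state machine. The sketch below is only an illustration of that idea, assuming a light band that is either off, lit steadily or flashing; the names and the interface are invented for this example and do not reflect any real vehicle software.

```python
from enum import Enum, auto


class BandState(Enum):
    OFF = auto()       # manual driving: no external HMI signal
    STEADY = auto()    # "autonomous driving active"
    FLASHING = auto()  # "I give you priority": the car stops and yields


def light_band_state(autonomous_mode: bool, yielding: bool) -> BandState:
    """Map the article's two messages onto the turquoise band.

    The band is dark in manual mode, lit steadily while the car drives
    autonomously, and flashes while the car yields to another road user.
    """
    if not autonomous_mode:
        return BandState.OFF
    return BandState.FLASHING if yielding else BandState.STEADY


# Example: an autonomous car approaching a zebra crossing with a waiting pedestrian.
print(light_band_state(autonomous_mode=True, yielding=True))  # BandState.FLASHING
```

Notably, in this scheme the car only ever reports its own status; it never tells anyone what to do, which is exactly the point of avoiding instruction-like signals.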

It remains to be seen whether Cieler's approach will gain acceptance in industry, science and politics. That is not entirely unlikely, according to the researcher, since the approaches of other bodies are moving in a similar direction: all of them aim to be simple, intuitive and universally understandable. Cieler estimates that it will take around ten years before there is at least a European, if not a global, standard, a time horizon that many experts also give for the market penetration of autonomous driving.