Autonomous combat drones coming? Ukraine war fuels development of killer robots

Killer robots are the future - at least as far as Russia and the US are concerned. The war in Ukraine is driving the development of autonomous weapons systems. But what are the dangers of drones that can independently identify, select and destroy targets?

It would mark the beginning of a new era of warfare. Military experts, politicians and scientists have been discussing the potential of autonomous combat drones for years, and the Russian invasion of Ukraine has given the debate fresh impetus. The longer the war drags on, the more likely the deployment of machines that independently identify, select and destroy targets on the battlefield becomes.

This is also indicated by a recent statement from the US Department of Defense, in which the US military affirmed its intention to push ahead with the development and use of autonomous weapons. It is the first time in almost a decade that the department has addressed artificial intelligence (AI) weapons systems. The announcement follows a NATO implementation plan agreed in 2022, which aims to preserve the alliance's "technological lead" in so-called killer robots.

Neither Ukraine nor Russia is yet using AI weapons on the front lines, but that could change soon. Ukraine already fields semi-autonomous drones that use AI to fend off enemy drones, and its minister for digital transformation, Mykhailo Fedorov, is certain that killer robots are the "logical and inevitable next step". His country has done "a lot of research and development in this direction," he told the AP news agency. "I think there's a lot of potential for that over the next six months."

According to its own statements, Russia is also developing autonomous weapons. Its unmanned "Marker" ground vehicles are to be retrofitted with weapons modules so that they can be deployed against tanks such as the Leopard 2 or the M1 Abrams. The AI is supposed to distinguish between tank types and to judge autonomously whether a target is a civilian or a combat vehicle. So far, however, there is no evidence of such capabilities.

Russia could, however, obtain AI weapons from Iran, for example. The Iranian-made Shahed drones that Moscow already uses in large numbers have destroyed power plants and terrorized the Ukrainian population, but they are not considered particularly intelligent. Tehran claims to have other types of drones in its arsenal, some of which, it says, work very well with AI.

Ukraine, for its part, could easily convert its semi-autonomous combat drones into fully independent weapons, according to Western manufacturers. With these drones - including the American Switchblade 600 and the Polish Warmate - a human still selects the target via a live video feed; an AI system then does the rest.

"The technology to carry out a fully autonomous mission with Switchblade is basically already available," said Wahid Nawabi, head of the manufacturer AeroVironment, the editorial network Germany. Current systems are already able to identify targets such as armored vehicles by comparing them with stored images. However, it is disputed whether the technology is sufficiently reliable to completely rule out the possibility of a machine making a mistake and then killing people who may have nothing to do with current combat operations.

According to Nawabi, deploying AI weapons requires a political change of course that allows humans to be taken out of the "decision loop" - after all, computers can react much faster than people. Proponents of fully autonomous weapons systems also argue that the technology could save many lives by keeping one's own soldiers off the battlefield, and that military decisions made at superhuman speed would greatly increase defensive capabilities.

Russian President Vladimir Putin is also among the supporters. As early as 2017, he emphasized the importance of AI systems for future warfare, saying at the time that whoever dominates this technology will dominate the world. And judging by the latest statement from the US Department of Defense, the White House takes a similar view.

Organizations such as the Campaign to Stop Killer Robots, on the other hand, oppose this normalization of killer robots. They fear a future in which autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. On their website, the activists argue that decisions over life and death must remain in human hands in wartime; leaving them to an algorithm would be the ultimate form of digital dehumanization.

The human rights organization Human Rights Watch sees another danger: such autonomous technologies lower the inhibition threshold for armed conflict by reducing the perceived risks. It also accuses the US, Russia, China, South Korea and the European Union, among others, of plunging the world into a costly and destabilizing new arms race by investing ever more in autonomous weapons systems. The result could be that the powerful technology falls into the hands of terrorists, for example.

The US Department of Defense attempts to address some of these concerns in its statement, declaring that the US will deploy autonomous weapons systems with "appropriate levels of human judgment over the use of force." Human Rights Watch complains, however, that the statement neither clarifies what "appropriate" means nor says who is supposed to determine it.

And even if the US sets strict guidelines for the use of killer robots, there is no guarantee that countries like Russia or China will follow suit. Past attempts to establish international rules have failed, and it is not only Russia that rejects a ban: the US is also strictly opposed.

Australian activist and scientist Toby Walsh hopes that agreement can be reached on at least some rules for AI weapons - such as a ban on systems that use facial recognition and other data to identify and kill specific people or groups of people. "If we're not careful, they will spread even more easily than nuclear weapons," the author of Machines Behaving Badly told the AP. "If a robot can be made to kill one person, it can be made to kill a thousand."