
AI in war

Autonomous killer drones are trained in dangerous ways

Posted on Bluesky and X/Twitter, 2 January 2025.

Huge amounts of training data for military AI are generated in Ukraine. It is a unique asset, especially for fighting in similar circumstances. But in a more dynamic conflict, it can cause AI systems to make deadly mistakes.

Video from 15,000 drones on the front line is captured and stored in Ukraine, amounting to 2 million hours of battlefield footage since the war started. The footage is invaluable for teaching AI to interpret what is happening on a battlefield. The AI model essentially observes what human drone operators have focused on, what they have identified as a target, what they have tried to hit. AI learns to do all that, too – but once the training is complete, it can act much faster.

Mimicking the soldiers doesn't come naturally to AI, however. "Humans can do this intuitively, but machines cannot, and they have to be trained on what is or isn't a road, or a natural obstacle, or an ambush", Samuel Bendett, an expert on military drones, told Reuters news agency. A ton of training data is needed to make that work. Ukraine is one of two countries possessing it. The other one is Russia.
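To make the idea of "training on what operators did" a little more concrete, here is a minimal sketch of the kind of supervised, imitation-style loop such training could involve: recorded frames paired with the target box a human operator selected, and a model fitted to reproduce those selections. Everything in it, from the data to the model architecture, is an invented placeholder, not a description of any real system.

```python
# Minimal, hypothetical sketch of imitation-style training on operator footage.
# Frames are paired with the bounding box a human operator selected; the model
# learns to reproduce that selection. All names and data here are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: 64 grayscale "frames" and the operator-chosen box for each
# (x, y, width, height, normalised to [0, 1]).
frames = torch.rand(64, 1, 96, 96)
operator_boxes = torch.rand(64, 4)

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 4), nn.Sigmoid(),           # predicted box, same normalisation
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.SmoothL1Loss()                   # common choice for box regression

for epoch in range(5):
    for batch_frames, batch_boxes in DataLoader(TensorDataset(frames, operator_boxes), batch_size=16):
        predicted = model(batch_frames)
        loss = loss_fn(predicted, batch_boxes)   # imitate the operator's choice
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
```

The structural point is the one that matters for the rest of this piece: a model trained this way can only pick up associations that actually occur in the recorded footage.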

Ideally, the trained AI would identify targets and point them out to the soldiers, who benefit from the model's speed and accuracy while remaining in the loop. But from a military perspective, the downsides of a human presence are significant.

Drone operators use transmitters. Their radio emissions can be identified, located and attacked by the enemy. Because drones are so deadly, their operators have become high-priority targets for the opposing side. Remote control of the drones is disrupted with electronic warfare. The effects of such radio interference can be seen in this video.

Things have become so bad that an ever-growing number of drones are operated via fibre-optic cable. A thin wire, which can be up to 20 km long, connects the drone to its pilot. The wire limits the flexibility of the drone but makes it immune to jamming. The low-tech solution works, at least as a crutch. It would be far more effective to get the human out of the loop entirely. An autonomous drone does not expose the soldiers who launch it. Its manoeuvres are not restricted by a cable. But there are downsides – and not just ethical ones.

AI-driven drones can be taught to identify targets on their own. But that knowledge cannot be adjusted on the fly. If the targets change their appearance, e.g. with altered camouflage, the AI model has to be retrained. In the past, the Russians have put tyres on aircraft wings precisely to confuse machine-learning algorithms. To a human, the object still looks like an aircraft. But there is a fair chance that an AI model chokes on it.

Retraining a model takes time. And before that even starts, there must be enough up-to-date training data that reflects the new type of camouflage. The lesson here is that it might be faster, easier and cheaper to outsmart an AI model than to make it catch up again.
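As for what "catching up" involves, the sketch below continues the toy example from above: retraining is essentially the same loop run again on freshly collected and labelled footage of the new camouflage, and gathering that footage is usually the slow and expensive part, not the loop itself.

```python
# Hypothetical fine-tuning step, continuing the toy example above: the existing
# model is trained further on newly collected and labelled frames that show the
# altered camouflage. Collecting and annotating that footage is the bottleneck.
new_frames = torch.rand(32, 1, 96, 96)        # placeholder for fresh footage
new_boxes = torch.rand(32, 4)                 # placeholder operator labels

fine_tune_opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # smaller steps
for epoch in range(3):
    for batch_frames, batch_boxes in DataLoader(TensorDataset(new_frames, new_boxes), batch_size=16):
        loss = loss_fn(model(batch_frames), batch_boxes)
        fine_tune_opt.zero_grad()
        loss.backward()
        fine_tune_opt.step()
```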

The second issue is much more concerning. Training data of the kind gathered in Ukraine is bound to be lopsided, and heavily so. The resulting AI model may select targets very well, but it has been trained primarily on footage taken on battlefields. Such a model won't be able to distinguish soldiers lying in ambush from kids playing hide-and-seek, because kids don't play hide-and-seek on a battlefield. If patterns like that don't show up in the footage, they are not learned.
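There is a structural reason why such gaps cannot simply be flagged by the model itself: a classifier only ever distributes its confidence over the categories it was trained on. The toy example below, with invented class names and an untrained placeholder network, shows how a scene from outside the training distribution is still forced into one of the known categories.

```python
# Hypothetical illustration of the closed-world problem: a classifier trained
# only on battlefield categories has no way to say "none of the above".
import torch
from torch import nn

CLASSES = ["soldier_in_cover", "vehicle", "empty_terrain"]   # invented labels

classifier = nn.Sequential(                    # untrained placeholder network
    nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)

# A frame from outside the training distribution, e.g. children playing in a
# village, is still mapped onto one of the known classes with some confidence.
unfamiliar_frame = torch.rand(1, 1, 96, 96)
probabilities = torch.softmax(classifier(unfamiliar_frame), dim=1)
label = CLASSES[int(probabilities.argmax())]
print(f"forced verdict: {label} ({probabilities.max().item():.0%} confidence)")
```

A confident-looking verdict, in other words, says nothing about whether the scene resembles anything the model has ever seen.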

One key lesson of history is that the developers of a new weapon have zero influence over how their tool will be used. Commanders and politicians will make decisions without asking them or reading the fine print. Evil intent is not even required. An autonomous drone trained for use over a heavily contested, largely static battlefield might fail horribly after a breakthrough, with civilians trapped in the middle. Even if a commander is aware of these shortcomings, that awareness won't count for much when things go south.

Now imagine that same drone being used in guerrilla warfare. Or the export version being deployed in a civil war. Non-combatants everywhere, except in the training data. The limitations of the AI model will long be forgotten. The consequences can be horrific.

Drones using AI for their terminal approach are already churned out in large numbers. On-board intelligence keeps them moving towards the target once jamming cuts the connection to the pilot. At present, humans still decide what they bring into the crosshairs, but truly autonomous drones seem inevitable now. Limits on their use are wishful thinking. So it is crucial that commanders fully understand what they deploy. They cannot see what kind of "brain" is implanted into an autonomous drone. They cannot see the training data. They must be told.
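The handover described above, pilot control while the link holds and on-board guidance once it is jammed, can be pictured as a very small piece of decision logic. The sketch below is purely illustrative; the function names, data structures and abort behaviour are assumptions, not how any actual drone firmware works.

```python
# Hypothetical control-handover logic for a drone with AI-assisted terminal
# guidance: the human steers while the radio link holds; if the link drops, an
# on-board tracker keeps steering towards the last designated target.
from dataclasses import dataclass

@dataclass
class TrackState:
    x: float          # estimated target position in the camera frame
    y: float
    locked: bool      # whether the on-board tracker still has the target

def steering_command(link_alive: bool, pilot_cmd, tracker: TrackState):
    """Return the steering command for the current control cycle."""
    if link_alive:
        return pilot_cmd                                     # human in the loop
    if tracker.locked:
        return ("steer_towards", tracker.x, tracker.y)       # terminal guidance
    return ("abort", None, None)          # no link, no lock: break off (assumed)

# Example: link jammed mid-approach, tracker still locked on.
print(steering_command(False, ("manual", 0.1, -0.2), TrackState(0.42, 0.55, True)))
```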

Lack of clarity can be deadly. That is why the responsibility for deploying the new tools of war must be assigned without allowing for wiggle room. Confusion about how AI arrives at a decision, with potentially devastating results, doesn't qualify as an excuse. Autonomous systems do their thing, but a human authorises their use. And that human must know – be forced to know – whether the go-ahead is defensible or leads to carnage among civilians. If it does, picking an unsuitable AI should be treated the same as ordering the killing.