“This technology is our future threat,” warns Serhiy Beskrestnov as he inspects a newly captured Russian drone. This is no ordinary weapon, he says: equipped with artificial intelligence, it can search for and strike targets without any human direction.
Beskrestnov, a consultant to Ukraine’s defence forces, has examined many drones during the conflict. But this one stands out. It neither sends nor receives signals, making it immune to jamming and detection.
Both Ukrainian and Russian forces now race to develop smarter, faster and deadlier AI systems. They use them to locate targets, gather intelligence and clear mines from the battlefield.
Artificial intelligence takes control of the front
For Ukraine’s army, AI has become an essential ally. “Our military receives more than 50,000 video streams from the front every month,” says Deputy Defence Minister Yuriy Myronenko. “Artificial intelligence analyses the footage, identifies targets and places them on a map.”
The technology enables faster decision-making and helps commanders manage resources more efficiently. Most importantly, it saves lives. But it is also transforming warfare. Ukrainian troops now operate drones that lock onto targets and complete their final attack path autonomously.
These drones are too small to detect easily and, because they need no radio link once locked on, cannot be jammed during their final approach. Experts believe they will soon evolve into fully autonomous weapons capable of identifying and destroying targets without human control.
Drones that decide and strike
“All a soldier needs to do is press a button on a smartphone,” explains Yaroslav Azhnyuk, CEO of Ukrainian tech company The Fourth Law. The drone will find its target, drop explosives, verify the result and return to base. “It won’t even require piloting skills,” he says.
Azhnyuk believes this technology could greatly enhance Ukraine’s air defences against Russian long-range drones such as the Shaheds. “A computer-guided system can outperform humans,” he adds. “It sees faster, reacts quicker and moves more precisely.”
According to Myronenko, Ukraine is close to completing the development of such systems. “We have already integrated parts of it into several devices,” he says. Azhnyuk predicts that thousands of autonomous drones could be active by the end of 2026.
The blurred line between safety and danger
Despite its promise, full automation brings serious risks. “AI might not tell a Ukrainian from a Russian soldier,” warns Vadym, a developer who declined to share his surname. “Both can look identical on the battlefield.”
Vadym’s company, DevDroid, builds remotely controlled machine guns that use AI to detect and track people. But because of the risk of friendly fire, the company has disabled automatic firing. “We could enable it,” he says, “but we need more experience and feedback from troops before deciding when it’s safe.”
Ethical and legal concerns are growing. How can an autonomous system obey the laws of war? Can it recognise civilians or soldiers who wish to surrender? Myronenko believes a human should always make the final decision, even if AI helps process information faster. Still, he admits that not every army will respect those boundaries.
A global race with no finish line
The emergence of AI in warfare has opened a dangerous new chapter. How can traditional weapons stop swarms of intelligent drones that cannot be jammed or intercepted?
Ukraine’s “Spider Web” mission last June, when 100 drones attacked Russian air bases, reportedly relied on AI coordination. Many Ukrainians now fear that Moscow will replicate this tactic, both on the front and far beyond it.
President Volodymyr Zelensky recently warned the United Nations that AI is driving “the most destructive arms race in human history.” He called for urgent global rules to govern the use of AI in war, stressing that the issue is “as critical as stopping the spread of nuclear weapons.”