August 21, 2025
Artificial intelligence is rapidly changing modern warfare, as seen in the Russia-Ukraine conflict. Drones now cause 70–80% of battlefield casualties, with both sides developing AI-powered targeting systems. AI has boosted first-person view drone strike accuracy from 30–50% to around 80%. David Kirichenko points to the rising ethical concerns about machines making life-or-death decisions. While AI increases lethality, human involvement in logistics and final judgment remains crucial for battlefield operations. The conflict is also providing extensive battlefield data to train these systems and shape the future of warfare.

Nations are racing to integrate AI into military operations, with Ukraine and Russia at the forefront of developing autonomous systems for battlefield advantage.

The rapid advancement of artificial intelligence (AI) is transforming industries at an unprecedented pace, and the business of warfare is no exception. AI-enabled weapons are reshaping modern combat, with significant implications for strategic planning, battlefield operations, and the ethical employment of military force. But as nations race to integrate AI into combat, one critical question remains: how much should we rely on it, and at what risk?

This concern is not just theoretical. As Austrian Foreign Minister Alexander Schallenberg warned, “This is the Oppenheimer moment of our generation.” Just as nuclear weapons redefined warfare in the twentieth century, AI-enabled weapons are now reshaping battlefields—most notably in Ukraine. Speaking at a Vienna conference on autonomous weapons, Schallenberg cautioned that AI-driven warfare could spiral into an uncontrollable arms race, with autonomous drones and algorithm-driven targeting systems threatening to make mass killing a mechanized, near-effortless process.

The AI Arms Race in Ukraine

Ukraine is already locked in an AI-driven drone race against Russia, with both sides leveraging autonomous technologies on the battlefield. Faced with Russia’s numerical superiority, Ukraine turned to drones early in the war, forcing Moscow to follow suit. Drones now account for roughly 70–80% of battlefield casualties. General Valerii Zaluzhnyi, Ukraine’s former commander-in-chief, noted that many of Ukraine’s drones rely on commercial components and open-source software, enabling low-cost attrition warfare.

The contest between electronic warfare and drone operators has pushed both sides to innovate, most visibly in the adoption of fiber-optic control cables that bypass jamming, though countermeasures to that adaptation are already in development. Now, the next phase of drone warfare is taking shape: AI-powered targeting systems that allow drones to identify and strike targets with minimal human intervention, even in heavily jammed environments. This is now a fight for drone supremacy, with both Ukraine and Russia seeking technological breakthroughs wherever possible. In this environment, AI-enabled drones will keep evolving, potentially turning warfare into a battle of algorithms.

Civilian innovation is also playing a key role. Volunteer groups such as Victory Drones are working to integrate AI into drone platforms, driving down the cost of last-mile targeting. According to Lyuba Shipovich, CEO and co-founder of Dignitas, AI-based targeting can now be added to drones for as little as 1,000 hryvnias (about $25). As these models evolve through battlefield use, they are becoming increasingly precise. However, AI targeting systems succeed only when tested and trained with real end users in combat. Viktor Sakharchuk, CEO of Twist Robotics, noted in an interview with the author in April 2025 that poorly trained “build-your-own” drone kits flooded the front in 2024 but quickly failed in the field: without training, even good targeting technology is useless.
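To make “last-mile targeting” concrete, the sketch below shows the basic pattern such low-cost add-on modules follow: an operator designates a target once, and an onboard visual tracker keeps it locked thereafter. This is a minimal, purely illustrative sketch built on the open-source OpenCV library; the tracker choice, the video source, and the guidance math are assumptions for illustration, not a description of any fielded system.

```python
# Purely illustrative sketch of last-mile visual lock-on; NOT any
# fielded system. Assumes the opencv-contrib-python package and a
# recorded video file standing in for a live camera feed.
import cv2

video = cv2.VideoCapture("flight_footage.mp4")  # hypothetical input
ok, frame = video.read()

# The operator designates the target once (drag a box, press Enter);
# from here on, the tracker follows it without further input.
bbox = cv2.selectROI("designate target", frame, fromCenter=False)
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

while ok:
    ok, frame = video.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in bbox)
        # The target's offset from frame center is what a real module
        # would translate into steering commands for the flight controller.
        dx = (x + w / 2) - frame.shape[1] / 2
        dy = (y + h / 2) - frame.shape[0] / 2
        print(f"guidance offset: dx={dx:.0f}px dy={dy:.0f}px")
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

video.release()
cv2.destroyAllWindows()
```

That an off-the-shelf tracker and a few lines of glue code get this far is precisely why, as Shipovich notes, the marginal cost of adding such guidance has fallen to tens of dollars; the hard part, as Sakharchuk's point about failed kits suggests, is training and field integration, not the software pattern itself.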

But the AI arms race isn’t confined to the skies. Ukraine recently conducted large-scale testing of over 70 domestically developed unmanned ground vehicles (UGVs). The trials assessed the UGVs’ technical reliability, battlefield readiness, and ability to perform under extreme conditions, including electronic warfare interference and operation at long distances from the controller. Most systems exceeded expectations, with several already in use by elite Ukrainian military units. Shipovich emphasized that the focus is on testing a wide range of UGVs, gathering feedback, and making iterative improvements. This represents the early stage of developing fully autonomous UGVs for battlefield deployment.

Ukraine’s drive for technological self-sufficiency has never been more urgent as Kyiv races to deploy a 15-kilometer unmanned “kill zone” along the front, with ambitions to extend it up to 40 kilometers. The initiative, part of the Defense Ministry’s “Drone Line” project, aims to make movement by Russian forces impossible without detection by tightly integrating aerial reconnaissance with ground-based operations. Ultimately, Valerii Borovyk, commander of Ukraine’s White Eagle drone unit, foresees a future of coordinated drone swarms executing missions autonomously. The result is a new kind of no man’s land—one increasingly saturated with semi-autonomous drones programmed to seek and destroy anything that moves. Ukraine has no choice but to fight this war. Yet, in doing so, it is shaping the future of warfare.

Training the models that will feed autonomous systems requires extensive battlefield data, of which both Ukraine and Russia have plenty. Oleksandr Dmitriev, founder of OCHI, a Ukrainian non-profit that centralizes video feeds from over 15,000 frontline drone crews, told Reuters that since 2022 his system has collected 2 million hours—equivalent to 228 years—of battlefield footage. From this, models are trained on “combat tactics, spotting targets and assessing the effectiveness of weapons systems.” Yet even with increasingly powerful data sets, human judgment still matters.
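As a rough illustration of what “training on footage” involves at the lowest level, the snippet below samples still frames from archived video so they can later be annotated and fed to a model. It is a hypothetical sketch using the open-source OpenCV library; the file layout and sampling rate are assumptions, not OCHI's actual pipeline.

```python
# Hypothetical sketch: turning archived drone video into still frames
# for annotation and model training. Not OCHI's actual pipeline.
from pathlib import Path
import cv2

def extract_frames(video_path: Path, out_dir: Path, every_n: int = 30) -> int:
    """Save one frame every `every_n` frames (about 1 fps for 30 fps video)."""
    out_dir.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(str(out_dir / f"{video_path.stem}_{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Example: walk a (hypothetical) archive directory and sample every clip.
if __name__ == "__main__":
    archive = Path("footage_archive")
    for clip in archive.glob("*.mp4"):
        n = extract_frames(clip, Path("frames") / clip.stem)
        print(f"{clip.name}: {n} frames sampled")
```

At 2 million hours of source video, even sparse sampling like this yields billions of candidate frames, which is why curation and labeling, not raw volume, become the bottleneck.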

Danylo (“Kasper”), a drone pilot with Ukraine’s 108th Territorial Defense Brigade, insists that humans must remain involved in drone operations, especially in life-or-death decisions. He believes AI-guided systems can assist with targeting, but only after a human identifies the threat. Similarly, Vasyl (“Whiskas”) of the 128th Mountain Assault Brigade warns against fully autonomous targeting, arguing that algorithms shouldn’t be trusted with human lives.
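The division of labor these pilots describe, in which the machine proposes and only a human disposes, reduces to a simple gate in software. The sketch below is a conceptual illustration only; the names and structure are hypothetical and not drawn from any real targeting system.

```python
# Conceptual illustration of "human identifies, machine assists";
# every name here is hypothetical, not real targeting software.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the model believes it sees
    confidence: float  # model confidence, 0..1

def request_engagement(detection: Detection) -> bool:
    """The machine may propose a target; only a human may approve it."""
    print(f"Model proposes: {detection.label} "
          f"(confidence {detection.confidence:.0%})")
    answer = input("Operator, confirm engagement? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    proposal = Detection(label="armored vehicle", confidence=0.91)
    if request_engagement(proposal):
        print("Engagement authorized by human operator.")
    else:
        print("Held: no human authorization; system stands down.")
```

The debate the pilots raise is, in effect, over whether that single confirmation step can ever be removed, and under what jamming or tempo conditions removing it becomes tempting.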

Regardless, AI is already making its presence felt on the battlefield. AI integration has boosted Ukraine’s first-person view (FPV) drone strike accuracy from 30–50% to around 80%, enough, at the midpoint, to roughly halve the number of drones expended per successful strike, signaling a growing role for machine intelligence in combat. With the groundwork laid, AI’s role in warfare is evolving even further.

AI’s Next Evolution in Combat

One Russian military blogger writing on Telegram warned that AI will eliminate traditional forms of warfare, making camouflage, deception, and electronic countermeasures nearly impossible. Dan Skinner, a former Royal Australian Infantry Corps officer, argued that the future battlefield will be greatly influenced by the ability to manage one’s visual signature, warning that without effective multispectral concealment, forces will be instantly detected, targeted, and destroyed by AI-driven sensor systems.


The future of warfare may hinge on which side can establish a superior “hive mind”—a network of machines working together seamlessly and adapting to battlefield conditions faster than any human operator could manage. But that does not mean human involvement is coming to an end. As the battlefield shifts toward autonomous systems, human-driven logistics and maintenance will remain indispensable, because even the best machines are not self-sufficient.

On “The Russia Contingency” podcast, Michael Kofman pointed out that deploying autonomous systems presents real-world challenges: what happens when these machines malfunction or break down? Even the most advanced drones and robots will still require humans to repair, maintain, and redeploy them. While AI and automation may revolutionize frontline capabilities, the logistical backbone—managed by human hands—will remain essential to sustaining battlefield operations. In essence, humans will remain in control of battlefield decisions and of how wars are fought; AI will be a tool that increases the lethality of those operations.

AI: Not a Panacea

Some experts argue that AI could help reduce unintended casualties in war, particularly in urban settings, while others warn that it may act with ruthless precision or use faulty data to target civilians. Which side is right remains to be seen, but what is indisputable is that AI will shape how military operations are conducted, and those operations will inevitably have to account for civilians on the battlefield.

Another challenge is the risk of over-reliance on technology. Trenton Wheat, Chief Geopolitical Officer at Insight Forward and an adjunct professor at Georgetown University, pointed to historical examples where militaries prioritized technological dominance only to be outmaneuvered by low-tech countermeasures. In the War on Terror, the United States led in signals intelligence, so al-Qaeda adapted by using paper communications and couriers to avoid detection. Similarly, in the Second Lebanon War, Hezbollah employed fire-suppression blankets to conceal missile launch sites, preventing Israeli airstrikes from effectively targeting them. While AI and autonomous weapons will undoubtedly enhance battlefield lethality, Wheat noted, “leaders should never forget that imagination and low-tech solutions can undermine this advantage.” Ukraine proved the point most recently with Operation Spiderweb in early June, smuggling large numbers of drones to sites near air bases across Russia. The drones, reportedly aided by AI targeting that helped strike the aircraft at their most vulnerable points, damaged 34% of Russia’s long-range bomber fleet and caused an estimated $7 billion in damage. The ethical debate over AI in warfare, however, is not limited to theoretical risks.

Paul Lushenko, assistant professor at the U.S. Army War College, highlights Israel’s use of AI-driven targeting in Gaza as a real-world example of how AI is shaping battlefield decisions. Machine-learning algorithms trained on military data can predict enemy positions, analyze tactics, and optimize strikes. However, Lushenko warns that integrating AI into lethal operations raises serious ethical concerns, especially with autonomous weapons. Israel’s AI system Lavender reportedly identified up to 37,000 potential Hamas-linked targets, accelerating airstrikes but also contributing to significant civilian casualties.

Lushenko also addressed the concept of “minotaur warfare,” in which AI could assume greater control over combat operations, directing ground patrols, aerial dogfights, and naval engagements. He argues that this shift would require radical changes to military structures, including redefining command and control, creating new career fields, and reconsidering centralized versus decentralized operations.

This approach envisions AI as the central “brain” of military operations, analyzing battlefield data in real time and issuing commands to both human and autonomous units with greater speed and precision than traditional methods allow. The term “minotaur” suggests a hybrid model in which AI and human forces work together, balancing automation with human oversight to improve military effectiveness. As AI integration in warfare continues to accelerate, the central question remains: how much decision-making should be entrusted to machines, and at what cost?

Not every AI model will be trained for every battlefield scenario, and AI will have its limitations. Ironically, the side that becomes overly dependent on AI-driven warfare may also expose itself to new vulnerabilities—ones that its adversary will inevitably learn to exploit.

Conclusion

AI is no longer a distant prospect on the battlefield—it is here, evolving in real time. From last-mile targeting to strategic decision-making, AI is already reshaping the tempo and tools of war. The battlefield in Ukraine has accelerated this adoption, and a future conflict over Taiwan would only quicken the pace of change. The race is on to build coordinated swarms of autonomous systems that roam the skies, dominate the seas, and overwhelm the enemy with force. Yet as the world races to build and deploy these systems, we risk entering an Oppenheimer moment, one in which we cross a line never crossed before, granting machines the power to decide who lives and who dies. Once we get there, there is no going back.

David Kirichenko is an Associate Research Fellow at the Henry Jackson Society, a London-based think tank. His work on warfare has been featured in the Atlantic Council, the Center for European Policy Analysis, and the Modern War Institute, among others. He can be found on Twitter/X @DVKirichenko.

The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, the U.S. Army, or the Department of Defense.

Photo Credit: Background photo by cell1-5 via Wikimedia Commons. Overlay generated in Gemini.
