Would a cyberattack that takes down the Texas power grid in the peak of summer be an act of war?
“There is no spoon.”
The 1999 film The Matrix held a critical lesson for its protagonist, Neo. One cannot change physical reality; one can change only oneself. The message came from a young boy who was apparently bending spoons telepathically. As he handed an unbent spoon to Neo, he said, “Do not try to bend the spoon. It’s impossible. … There is no spoon.” For 21st century strategists, the object in question is not a spoon; it is our understanding of the nature of war. If we reimagine the Matrix as a ‘gray zone’ where competitors threaten the United States below the level of armed conflict, then strategists must bend themselves to encompass a new and broader understanding of war’s nature, even though that nature has not changed.
If war is fundamentally a violent clash of political wills, we must broaden our understanding of ‘violence’ to properly define 21st century war. Two problems confront us in this task, both reflecting a tendency to privilege one part of the definition of violence over the other. One dictionary defines violence as physical force, arising either from human activity or from interaction with tangible things, intended to injure, harm, damage, or kill. However, the classic perspectives on war that I learned at the War College – including Thucydides, Clausewitz, Sun Tzu, and many others – all emphasized war as an activity between humans. Weapons and other tangible objects were means to achieve human-defined ends. Cyberattacks and lethal autonomous weapons systems did not exist then, but they do today, and they raise important questions about the character of war now and into the future.
The second problem is an overemphasis on force intended to kill, with only minimal attention to the intent to hurt or damage. The power to hurt, à la Thomas Schelling, is clearly coercive, but war is typically understood as the realm of brute force. This unwillingness to recognize as war the ‘gray zone’ between physical, tangible force and harming, damaging, or killing is a gap our adversaries will continue to exploit until we adjust our thinking and therefore our approach. For example, all three of the United States’ major military competitors have artificial intelligence (AI) development strategies and plans. China’s 2017 New Generation Artificial Intelligence Development Plan and its 2015 Made in China 2025 initiative both outline China’s AI development strategy. Chinese defense executives openly acknowledge that autonomous weapon systems will rule the future battlefield, perhaps as early as 2025, and China’s defense manufacturers already produce autonomous armed drones. Russian President Vladimir Putin similarly views AI as “humanity’s future,” and Russia’s 2019-21 AI roadmap projected $719 million for AI research and development (R&D). Iran ranks 16th in the world in AI R&D, and Iranian universities began teaching AI 16 years ago. That country’s developments include three variants of military robots for mine-sweeping, rescue, and other missions, along with add-on weapons capabilities. The question is: will these lethal, fully autonomous weapon systems (armed robots), detached from direct human control, rule the future battlefield?
The possibility that the answer is ‘yes’ has spurred action from senior U.S. military leaders. The U.S. Department of Defense (DoD) recently unveiled its Joint Enterprise Defense Infrastructure (JEDI) strategy to guide its efforts to potentially weaponize AI. In June 2018, DoD established the Joint Artificial Intelligence Center (JAIC) to manage nearly 600 AI projects currently in development at a cost of $1.7 billion. The Defense Advanced Research Projects Agency (DARPA) plans to invest an additional $2 billion in AI research over the next five years. However, the absence of established international norms or regulations governing the use of AI systems in war creates a slippery slope toward unintentional escalation into war.
Fully autonomous weapons and cyber capabilities increase the likelihood that competition may escalate to war without direct human action, and with or without the loss of human life. The U.S. military views conflict as a continuum from cooperation to competition and, ultimately, to war. What happens when robots acting in the ‘gray zone’ between competition and war get violent? After the 2010 Stuxnet attack on Iran’s Natanz uranium enrichment plant, this question seems less farfetched. Fully autonomous weapons and cyberattacks are inevitable characteristics of future battlefields. How thin runs the line between competition and war when there are no battlefield deaths to tally?
If one accepts that fully autonomous weapons and cyberattacks may be used in armed conflict, how might such conflict escalate to war? Here we must be clear about what constitutes war, and this definitional work is contested; the different definitions matter. For example, some scholars have suggested using battle-related deaths as the parameter: 25 deaths per year marks an armed conflict, and more than 1,000 marks a war. But what are the consequences of this definition in a world where deaths are less frequent, whether by design or because of improvements in medical care and treatment? Would a cyberattack that takes down the Texas power grid at the peak of summer be an act of war? What if a competitor’s fully autonomous unmanned aerial vehicle (UAV) misidentifies a fleet of commercial ships as an enemy military fleet and directs a swarm attack on the inbound merchant vessels? Would the ensuing and significant disruption to the U.S. economy be war? The United States must grapple with such questions now, before events pose them.
British Army officer Charles E. Callwell wrote that “theory cannot be accepted as conclusive when practice points the other way.” One must accept the enduring nature of war not because practice points the other way, but because practice illuminates its full range. Violence in war has both a human and a non-human element. Wars have historically cost human lives, but future wars may inflict only bloodless injuries or catastrophic property damage. Nations already battle in the cyber realm, and the future battlefield will include fully autonomous weapon systems. The country that emerges as the world leader in militarized AI and cyber capabilities will seize an edge on that battlefield. Now that the gray zone is a modern reality consistent with the nature of war, will leaders keep trying to ‘bend the spoon,’ or will they bend their understanding toward war’s true nature?
Terrie Peterkin, PhD, is a colonel in the U.S. Army and a graduate of the U.S. Army War College resident class of 2019. The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, the U.S. Army, or the Department of Defense.
Photo Credit: Anton Holoborodko (Антон Голобородько) via Wikimedia Commons under the Creative Commons “Share Alike 3.0 Unported” license
Photo: Military base at Perevalne during the 2014 Crimean crisis