November 23, 2024

Fear of an artificial intelligence disadvantage is pushing the U.S. and other powers into another arms race

Strategic leaders have wrestled with the development and employment of new weapons throughout history. The crossbow, cannons, snipers, landmines, submarines, bombers, and many other weapons provoked episodes of moral reflection and angst. The U.S. developed nuclear weapons despite the significant moral concerns those weapons raised, and then employed them – a decision that remains controversial to this day.

Early in the Cold War, General Omar Bradley, then Chief of Staff of the Army, delivered an Armistice Day speech in Boston, Massachusetts, on November 10, 1948. In his address, he drew attention to the tension inherent in developing and possessing the devastating weapons that had brought World War II to a close:

With the monstrous weapons man already has, humanity is in danger of being trapped in this world by its moral adolescents. Our knowledge of science has clearly outstripped our capacity to control it. We have too many men of science; too few men of God. We have grasped the mystery of the atom and rejected the Sermon on the Mount… The world has achieved brilliance without wisdom, power without conscience. Ours is a world of nuclear giants and ethical infants.

Nuclear weapons challenged the ethical constructs of politicians and military leaders as no weapons had before. Reflecting on that era, a 2017 report from Harvard University’s Belfer Center, Artificial Intelligence and National Security, noted, “the transformative implications of nuclear weapons technology, combined with the Cold War context, led the U.S. government to consider some extraordinary policy measures.”

Now, a new technology may prove every bit as challenging to the ethical frameworks we have developed to guide our choices in war and peace. The race to become the world leader in artificial intelligence (AI) is underway. AI’s rapid development and its applications in autonomous weapons and other enabling systems place the world in an analogous situation today. Leading nations are seeking to balance advancing technological development, ethical principles, and strategy formulation.

Although lethal autonomous weapons systems (LAWS) have obvious appeal, they also evoke fears of killer robots. However eloquently argued, the prevailing focus on the ethics of such machines and on the need to prevent their development – epitomized by the thoughtful work of experts such as Paul Scharre – runs against everything we know of the history of technology in war. From the first man who used a weapon (instead of his fists) to kill another, to the explosion of Little Boy 2,000 feet above Hiroshima, war and peace have witnessed a continual struggle between three forces: technological progress, strategic control, and the moral and ethical governance of conflict.

Think of these forces as a triangle of triple constraint, one that describes the decision space for strategic leaders. The triple constraint, also known as an “iron triangle” of painful tradeoffs, comes from the project management triangle of scope, time, and cost: an actor cannot optimize any one of the three without trading away something from the others. The three factors compete for priority, creating a tension that cannot be fully resolved; no actor can achieve the optimum value for all three constraints at once.

The “iron triangle” of lethal autonomous weapons likewise includes three factors. First, technological progress describes the resources and pressures driving research and development of weapons technologies. Second, ethical acceptability describes the degree to which an actor can accept the use of a weapon in light of moral and ethical norms. Third, strategic coherence describes how well an actor aligns a weapon with the ends, ways, and means of military strategy. Strategic decision-makers operate in the space defined by the limits of morality, by available and emerging technology, and by the strategy those same decision-makers formulate to achieve specified goals.

Yes, LAWS raise multiple ethical questions, many of which cannot be resolved to an acceptable level of satisfaction. Yet this is not a new problem, and – just as in the past – these concerns will not be sufficient to restrain development. Right now, the U.S. approach to lethal autonomous weapons needs a dose of pragmatism.

A strategy to guide the development, procurement, and employment of autonomous systems must anticipate the eventual employment of lethal autonomous weapons, weighing the capabilities these technologies provide, within the bounds of moral principles, against the threats to U.S. national interests.

Weaponized AI is a specific, immediate concern for the national security establishment – more urgent than concerns about a dystopian future. As of mid-2018, 26 countries had called for the prohibition of LAWS. However, there is no consensus on proposals for a ban among the U.S. and its partners and allies, while adversaries are developing AI that could be employed in combat. China is typically “ambiguous” with regard to its military intentions for AI, but its leaders have vowed to make China the world leader in AI by 2030. Russia is likewise uninterested in international prohibitions on lethal autonomous weapons, and has employed semi-autonomous systems in combat in Syria. Indeed, fully autonomous systems may have already been employed in combat. The U.S. already faces substantial competition in the development of AI and LAWS. Fear of an AI disadvantage is thus pushing the U.S. and other powers into another arms race, with its accompanying pressures for technological progress.

This pressure is provoking much concern about the ethics of killing machines. In such discussions, we tend to focus on the degree of human control necessary to justly prosecute the violence of war. The use of autonomous systems appears to introduce moral friction into the targeting process. Commanders may delegate to a machine the decision to employ lethal means against human targets. On its face, this seems frightening. Yet what is the distinction between human control and human supervision?

In war, humans are prepared to kill, if they must, in service to others. Even if killing is the best decision, given the circumstances, it is not without cost. The U.S. has regularly sought to navigate prudently between the ways and means it employs in pursuit of national interests and the moral tension those choices create. Ultimately, a commander who employs a weapon system of any type to make “kill decisions” is accountable for its effects. At the same time, people instinctively seem to understand that “war is far more than a mere targeting drill.”

Emerging technology continues to redefine what is possible – ethics and strategy must keep pace

Currently, the DoD does not authorize the use of LAWS against human targets. DoD Directive 3000.09 specifies that autonomous and semi-autonomous weapons systems “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Fully autonomous weapons may be employed only in very limited circumstances. Additionally, the DoD differentiates between “human control” (“in-the-loop”) and “human supervision” (“on-the-loop”) of semi-autonomous and autonomous weapons systems. But how long will these policies endure in the face of more capable adversaries and the quickening pace of combat? What might cause them to change?

Notwithstanding the understandable emotional response to the idea of machines killing humans, this author views LAWS as weapons systems ultimately employed by human actors. If a machine independently identifies and engages a target, killing a human, the basis for the act is still a human decision to employ the LAWS. Machine learning may change the conversation, but only by degree. It remains a human decision to create and employ the machines of war. Lethal autonomous weapons, whether or not they employ machine learning, do not introduce a fundamentally new dilemma into the moral tension inherent in war. If one does not perceive a new threat to the dignity of human life (or believe that “killer robots” will usher in a dystopian future where robots rule the world), then the accepted norms of jus in bello apply with no new peculiarities.

There is a key distinction between LAWS and other voluntarily withheld weapons technologies (e.g., chemical weapons, napalm, flamethrowers). Those weapons were limited because they inflict cruel or undue suffering on their targets; lethal autonomous weapons would be limited or withheld despite the fact that they might in some cases cause less suffering, fewer unintended non-combatant deaths, and less collateral damage. This tension defines the new decision space occupied by strategic leaders in the defense enterprise.

While I argued above that the development of lethal autonomous weapons will be governed by the triple constraints of technological progress, ethical acceptability, and strategic coherence, the analogy has limits. In geometry, the sum of a triangle’s three interior angles is constant; the LAWS “iron triangle” is not a perfectly bounded system or a zero-sum game. Although the model describes a tendency toward trade-offs between the constraints, an actor can expand its decision space – for example, by maintaining an acceptable ethical position while continuing to expand available technology, all the while refining the ways and means of its strategy. Nor is strategy necessarily pitted against ethics. The goals (or ends) of any strategy are never detached from the values, beliefs, and ethics of the society, and these may change.

None of the constraints can be considered untethered from the others, lest the strategy violate the values that shape the very national interests it seeks to promote. This is the basic tension involved in any principled ethical framework. The DoD’s AI Strategy echoes the National Defense Strategy’s aim to deter war, but win if war proves necessary. Effective deterrence has three components: (1) calibrated means to attack an adversary, if provoked; (2) the will to employ those means against an adversary; and (3) the clear communication of that will to a prospective opponent. For a deterrence strategy to control LAWS, it must include those elements.

Emerging technology continues to redefine what is possible – ethics and strategy must keep pace. Writing in the early 1960s, Bernard and Fawn Brodie observed, “Today we are confronted with a special situation. The choice of strategies and weapons systems is not only immensely more difficult than it has ever been before, but also involves questions that are deeply and essentially baffling, even to the ablest minds.”

The objections of well-intentioned, thoughtful people have not slowed the development of increasingly lethal weapons of war. The most recent Nuclear Posture Review shows how we continue to live with this tension: the U.S. invests heavily in modernizing its nuclear arsenal even as the stated goal remains global nuclear disarmament. The report laments, “We must look reality in the eye and see the world as it is, not as we wish it to be.”

Paradoxically, the aversion to LAWS must remain even as the future battlefield would seem to necessitate that the U.S. consider their employment. We remain stuck in the race that General Bradley described in 1948: our ability to control weapons struggles desperately to keep pace with their advance. Those who take on the mantle of responsibility for national security operate in an environment in which the problems are complex and there are no easy answers. Yet some answers are better than others.

The U.S. must endeavor to keep pace morally and strategically with the technological development of AI and weaponized autonomous systems. The best approach accepts an inherently dissatisfying balance among technology, ethics, and strategy, and recognizes the essential role each plays in ensuring that the U.S. remains both a strong and a righteous actor in world affairs.


Jacob Scott is a U.S. Army Chaplain serving in the Oregon National Guard. The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, U.S. Army, or Department of Defense.

Photo: The Modular Advanced Armed Robotic System, a product of QinetiQ North America, was one of four unmanned ground vehicles that demonstrated lethal applications Thursday during a live fire at Red Cloud Range.

Photo Credit: Army photo by Patrick A. Albright, public domain

1 thought on “THE IRON TRIANGLE: TECHNOLOGY, STRATEGY, ETHICS, AND THE FUTURE OF KILLING MACHINES”

  1. Hello Chaplain Scott,

    Excellent thought-provoking article!

    In 1957, after the launch of Sputnik, the U.S. increased its academic emphasis on science and math, and in 1969 it became the first nation to put a human on the moon (an initial victory in the space race). IBM has been a leader in Artificial Intelligence (which I like to call “Automated” Intelligence), with wins in chess (Deep Blue) and Jeopardy (Watson). The U.S. lead in weaponized AI, as you point out, may be shifting to China. China, with leading company Huawei, seems to be taking the Fifth Generation (5G) global lead along with the AI lead. Ironically, as China shifts focus from being the number one factory culture in the world to being a leader in Information Technology, the U.S. seems to be longing for a return of factory dominance rather than maintaining and moving its focus forward with the Information Age.

    The lethal autonomous weapons systems (LAWS) technology lead garners a dubious distinction. The Geneva Conventions are gradually becoming dated and, as you point out, Russia is apparently “uninterested in international prohibitions.” With the current White House administration, the U.S. seems to agree with Russia regarding “international prohibitions.” Perhaps LAWS should also be refashioned to mean “lethal ‘automated’ weapons systems.” Creating “artificial” or “autonomous” intelligence creates an ethical and practical conundrum.

    The iron triangle of technological progress, ethical perspective, and strategic coherence presents a trifecta of challenges. Eisenhower’s military-industrial complex needs to keep moving ahead rather than risk falling behind. Ethics involves the triple bottom line of people, profits, and planet (profits do not drive the military, but profitable industry helps to drive and modernize the military). Strategic coherence, as you point out, involves a balancing of ends, ways, and means. In addition to STEM (science, technology, engineering, and math), the U.S. military needs to go full STEAM ahead (adding Arts to the STEM equation). Strategy, diplomacy, tactics, ethics, and leadership are a few of the Arts we must maintain and move forward (lest we fall behind).
