November 21, 2024

It is time to make the threat posed by artificial intelligence a permanent part of national security and military strategy.

The “robot apocalypse,” in which a malicious artificial intelligence (AI) threatens humanity, is a common science fiction trope. The enduring appeal of AI in popular culture — from Isaac Asimov’s I, Robot stories, to HAL 9000 in 2001: A Space Odyssey, to the machine overlords of the Terminator and Matrix franchises — indicates a deep-seated human recognition that technology has the potential to threaten human supremacy.

The technological changes of recent decades are moving this threat out of the realm of fiction and closer to reality. A 2008 survey of the Oxford Global Catastrophic Risk Conference estimated that super-intelligent AI was one of the most likely causes of human extinction over the next century, ranked above an engineered pandemic or nuclear war. Although the survey was informal and the true probabilities of these threats are impossible to know, the respondents were experts. That AI was so prominent in their threat awareness (in 2008, no less!) is highly instructive. It is time to make the threat posed by artificial intelligence a permanent part of national security and military strategy.

AI challenges conventional approaches to security threat assessment. First, the AI threat lies outside the realm of our experience and is therefore easy to dismiss: humanity has never dealt with a sentient machine possessing both lethal capabilities and a motivation to destroy humans. Second, traditional sources of military power are not the focal point for advanced AI development. Given the widely distributed character of AI research and development, spanning corporations, universities, startups, and individual coders, national governments and militaries are unlikely to fully understand current AI capabilities, much less the potential of such systems. The remainder of this essay explores the implications of these two characteristics of AI and proposes five premises that should guide national security approaches to artificial intelligence. These premises are captured in the following statement: Super-intelligent artificial intelligence is realistic, and likely to be developed without limits, by entities outside of the military-industrial complex with key interests that are not aligned with those of the United States, in a manner that poses an existential threat not only to the nation, but also to humanity as a whole.

Premise 1: Super-intelligent AI is realistic

The “black swan” character of advanced AI makes reasoning about its security implications difficult. It may be easy to dismiss the threat of AI as fiction, yet the current state of science and technology suggests that such disregard is a bad idea. Many measures of technological change continue to track exponential growth, as represented by Moore’s Law and similar observations. If computing power continues to grow exponentially, then in less than thirty years a single desktop computer will probably have more processing power than all human brains in history combined.
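
To see why sustained exponential growth is so startling, consider a back-of-the-envelope calculation. The sketch below is illustrative only: the 18-month doubling period and the 30-year horizon are assumptions chosen to mirror the Moore’s Law framing above, not forecasts.

```python
# Back-of-the-envelope illustration of sustained exponential growth.
# The doubling period and horizon are illustrative assumptions, not forecasts.

DOUBLING_PERIOD_YEARS = 1.5   # assumed: processing power doubles every 18 months
HORIZON_YEARS = 30            # assumed planning horizon

doublings = HORIZON_YEARS / DOUBLING_PERIOD_YEARS  # 20 doublings
growth_factor = 2 ** doublings                      # roughly one million-fold

print(f"Doublings over {HORIZON_YEARS} years: {doublings:.0f}")
print(f"Growth in processing power: about {growth_factor:,.0f}x")
```

Twenty doublings compound to roughly a million-fold increase in processing power, which is the kind of growth that makes comparisons between desktop computers and aggregate human brainpower worth taking seriously.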

Although a super-powerful computer will not be ipso facto super-intelligent, it is not difficult to imagine that extreme processing power could yield some degree of sentience or its mathematical equivalent. Theorists such as John von Neumann, Stephen Hawking, and Ray Kurzweil have described this development as the “technological singularity,” the point at which an artificial intelligence becomes sufficiently self-aware to improve itself recursively (i.e., bootstrapping its own code), thereby spawning an accelerating chain reaction until a super-intelligence emerges.
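
A deliberately simplified sketch can make the “accelerating chain reaction” intuition concrete. In the toy model below, every improvement cycle increases the system’s capability in proportion to its current capability; the starting value, improvement rate, and threshold are arbitrary assumptions chosen only to illustrate the compounding dynamic, not to model any real system.

```python
# Toy model of recursive self-improvement: each cycle, the system improves
# itself in proportion to its current capability. Every number here is an
# arbitrary illustrative assumption; only the compounding dynamic matters.

capability = 1.0             # assumed starting point (call human-level 1.0)
improvement_rate = 0.05      # assumed: each cycle yields a 5% self-improvement
threshold = 1000.0           # assumed, arbitrary stand-in for "far surpasses human"

cycles = 0
while capability < threshold:
    capability *= 1 + improvement_rate  # each gain feeds the next
    cycles += 1

print(f"Toy model crosses the threshold after {cycles} improvement cycles")
```

Because each gain feeds the next, growth is exponential in the number of cycles; if self-improvement also shortens each cycle, growth in calendar time accelerates further, which is the essence of the singularity argument.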

Some observers question whether technological change will continue to grow exponentially. Others point out that technological retrogression is a recurring feature of history and could well occur again. History also suggests that human agency can overcome the deterministic effects of technology. However, super-intelligent AI and the risks implied by its development are sufficiently credible that they have become the subject of serious scientific and philosophical inquiry. Additionally, the United Nations has shown interest in collective action to restrict destructive AI-related technologies.

Perhaps the strongest argument in favor of assuming that super-intelligent AI will emerge is the risk of being wrong if we assume that it will not. By the very mechanism of its development, such a super-intelligence would far surpass human intelligence, since biological evolution is constrained by genetic selection processes that, in humans, occur over vast timescales. The human condition in a world after the technological singularity is unpleasant to contemplate. A human race that has not prepared itself against the possibility of such a world will probably have no place in it.

Premise 2: AI will be developed without limits

More optimistic theorists suggest that super-intelligent AI can somehow be controlled; unfortunately, this rests on a faulty assumption. The technological singularity is, by definition, an unintended event. Even if we could control the technological singularity and its results, doing so would require a goal set that remains invariant under all future conditions. That entails godlike omniscience, both in defining the goals and in predicting the AI’s behavior, which is beyond human capability. Another form of a controllable singularity is the proverbial “AI in a box,” in which the singularity and the resulting super-intelligence are somehow physically separated from humanity and its technological networks. The 2015 film Ex Machina dramatically explores this problem, which also figures prominently in the HBO series Westworld. Both works argue that human emotional and psychological weaknesses make us susceptible to manipulation by machines.

Accordingly, we should assume that super-intelligence can improve itself in ways that are incomprehensible to humans, and expect that it will find ways to bypass any human-imposed physical or digital barriers. In the parlance of military threat assessment, the most likely course of action vis-à-vis the technological singularity is also the most dangerous.

Premise 3: Advanced AI will be developed outside of the Military-Industrial Complex

We should also expect super-intelligent AI to arise first from a private sector entity in a commercial application. Although the U.S. government’s classified AI development activities are doubtless substantial, the aggregate AI capabilities of the private sector likely dwarf those of the U.S. government and of other nations. Former President Obama’s own Committee on Technology conceded this point, stating in a recent report that “the private sector will be the main engine of progress on AI” moving forward.

Three factors suggest that advanced AI will likely emerge in a non-governmental, non-military setting. First, private entities have developmental scale that governments cannot match. Tech giants are investing heavily in artificial intelligence. IBM developed both Deep Blue, the chess-playing computer that beat Russian chess grandmaster Garry Kasparov, and Watson, the question-answering system that won the game show Jeopardy!, and has bet its future on AI. Moreover, Google’s parent company Alphabet has an experimental concepts subsidiary wholly dedicated to AI research, and expects machine learning to affect how all technology services will be created and delivered in the future.

Second, the private sector can combine organizational capabilities in ways that cannot be mimicked by the government. The U.S. software industry has built much of its success on preserving the rights of entrepreneurs to take their ideas from one competitive environment and apply them in another. The relatively free movement of ideas (supported by robust capital markets) is a tremendous driver of innovation, and this dynamic will support advanced AI development.

Finally, current-generation AI improves with use; deep-learning systems built on neural networks refine themselves as they process more data. The more people use such a system, the better it gets, and the more likely it is to be applied in adjacent applications. As Google CEO Sundar Pichai said in late 2015:

“Machine learning is a core, transformative way by which we’re rethinking how we’re doing everything…We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play…We’re in the early days, but you’ll see us in a systematic way think about how we can apply machine learning to all these areas.”

Google, Apple, Facebook, Microsoft, and other firms provide products that huge numbers of Americans (and people all over the world) use constantly. The developmental capacity of these user platforms is enormous, making private sector AI research and development both efficient and effective.
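
The “improves with use” dynamic can be illustrated with a minimal sketch: a simple model fit on progressively more data generally predicts better, which is the feedback loop these user platforms exploit at enormous scale. The simulated “user interactions,” the underlying relationship, the noise level, and the sample sizes below are all arbitrary assumptions for illustration.

```python
# Minimal illustration of "improves with use": a least-squares model fit on
# progressively more simulated user data tends to predict better. The true
# relationship, noise, and sample sizes are all illustrative assumptions.
import random

random.seed(0)

def observe(n):
    """Simulate n noisy user interactions around an assumed linear relationship."""
    data = []
    for _ in range(n):
        x = random.uniform(0, 10)
        y = 3.0 * x + 2.0 + random.gauss(0, 2.0)  # assumed signal plus noise
        data.append((x, y))
    return data

def fit_and_test(train, test):
    """Fit y = a*x + b by ordinary least squares; return mean absolute test error."""
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return sum(abs(y - (a * x + b)) for x, y in test) / len(test)

test_set = observe(500)
for users in (10, 100, 1000, 10000):
    error = fit_and_test(observe(users), test_set)
    print(f"{users:>6} simulated users -> mean absolute test error {error:.3f}")
```

The point is not the toy model itself but the feedback loop: more users generate more data, more data makes the product better, and a better product attracts more users, which is why platform scale matters so much for AI development.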

Premise 4: Key interests of AI developers are not aligned with those of the state

Private sector AI research and development is also parochial (in the business sense), not patriotic. This is not to question the motivations of tech companies, but rather to provide an objective assessment of their place in the international system. The largest tech companies that are most heavily invested in AI development are transnational; their growing influence in the global economic order, combined with the worldwide distribution of their assets, gives them geopolitical relevance. Additionally, tech products such as smartphones and social media platforms provide individuals with informational and organizational power once held only by nation-states. Furthermore, “big data” is anticipated to become a commodity as valuable as oil, and Google and Amazon in particular already dominate this emerging market.

The rapid pace of technological change is redistributing nation-state power, and tech companies are best positioned to benefit as the most significant non-state actors moving forward. As non-state actors, tech companies will have non-state interests, including profit growth and shareholder value, the pursuit of which could challenge the neutrality of these companies’ products and public policy positions. Washington and Silicon Valley were already in an “uneasy truce” after the Snowden revelations, and the relationship has only worsened over the most recent national election cycle. Silicon Valley’s political independence will continue to increase as its collective power grows. As such, it is highly unlikely that the United States could suppress foundational AI technologies, or otherwise direct the course of research and development away from super-intelligent AI.

Premise 5: AI development will occur in a way that poses an existential threat

Super-intelligent AI poses an existential threat to humanity. The Oxford survey cited at the beginning of this paper was not unduly alarmist — human extinction is a possibility, if not yet a probability. Notwithstanding the “killer robots” trope, human extinction by a super-intelligent AI does not require malice; in fact, a sentient machine’s lack of moral agency provides at least two pathways to human extinction.

The first involves unintended resource competition. Even if a super-intelligent AI has a seemingly benign and simple goal, the way it interprets that goal and pursues its attainment could imperil humanity. Entrepreneur and inventor Elon Musk humorously described a super-intelligent strawberry-picking robot as being motivated to create (with credit to The Beatles’ song) “strawberry fields forever.” Literally forever, which could spell doom for the AI’s human (or other) competitors in a shared ecosystem.

The second pathway involves self-protection, which is inherent to any sentient system, whether organic or not. Just as the imposition of an invariant goal set would require godlike omniscience, so too would controlling a super-intelligent AI’s interpretation of every interaction it has with any human. If even a single interaction were interpreted by a sentient machine as evidence that humanity posed a risk to its survival, then theoretically it would be “game on” for the worst machine-war scenarios imagined in science fiction. In this context, no matter how unsettling a super-intelligent AI may be, it would probably be best not to try to unplug it.

Perhaps this is ridiculous speculation. But the price of misplaced complacency is very high, just as it is for being wrong about nuclear deterrence, containing epidemics, or mitigating climate change. These latter topics are significant elements of current approaches to national security strategy. Artificial intelligence needs to take on a similar significance.

The role of strategy is to leverage ways and means in the context of national instruments of power to achieve ends that are framed by national interests. When faced with an existential threat, these interests are simple — defend American life, liberty, and property. To protect ourselves from a super-intelligent AI and its agenda, we need a strategy that: defines the security risks of AI development; shapes and controls AI applications as much as possible; and includes effective hedges against the emergence of uncontrolled, hostile AI. This is only the beginning of a crucially important conversation.

Patrick Sullivan is a lieutenant colonel in the U.S. Army and a member of the U.S. Army War College resident class of 2017. The views in this article are the author’s and do not necessarily reflect those of the U.S. Army or U.S. Government.

Photo credit: BEN STANSALL/AFP/Getty Images

1 thought on “A.I. & THE ART OF (MACHINE) WAR”

  1. Fascinating article!

    Of your five premises, we would most likely acknowledge some overlap, but perhaps we might also identify at least one gap in the selection of premises (the generation gap). A sixth premise might be: The Net Generation puts the iWorld into AI.

    According to Don Tapscott (2009), in his book Grown up Digital: How the Net Generation is Changing Your World, “For the first time in history, children are more comfortable, knowledgeable, and literate than their parents with an innovation central to society” (p. 2). Innovation has become part of daily life for the net generation (also called the Millennials).

    Fortunately, the Army is not being left behind. Cadet Eugene Alvey, West Point Class of 2017, recently won the Innovation for Soldiers Award for his “Alvey Combat Ball” – not precisely an AI system, but a 3-D camera ball that provides a 360-degree view for clearing a room. Like a “smart grenade,” Soldiers throw the ball into a room and receive a live video feed on their mobile devices. As the saying goes, this is not your grandfather’s Army!

    Proving Tapscott’s “innovation central” point, 18 teachers and 63 students (including one of my granddaughters) assembled at the White House last year to receive the annual President’s Environmental Youth Award (PEYA) for 12 amazing regional environmental innovations. These innovations were not necessarily “artificial” intelligence, but I personally prefer the synonym “expert system.” Not every device demonstrates “technological singularity,” but the way young students save sea turtles from plastic ocean litter or vacuum up waste in rivers and streams is perhaps even more impressive than winning at chess or Jeopardy! (at least solving physical combat or environmental problems is a useful, life-saving venture).

    The Google search engine is not technically an AI system, but unlike earlier search engines, the Google algorithm learns as it searches. Combat robots (“robats”) can examine a potential improvised explosive device (IED) and keep a soldier (male or female) a bit more remote from harm’s way. Hooah for that!
