[The U.S.] must suppress its appetite to rely on Cold War-era hardware
The Division Commander, Major General (MG) Smith, was pleased. His lead armor brigade was making steady progress in the center and his second brigade had positioned itself on the flank of the Chinese People’s Liberation Army (PLA) armored division opposite his. MG Smith’s division was in regular contact, but the Chinese tanks were no match for the American M1 Abrams with support from a battalion of AH-64 Apache attack helicopters. Unknown to MG Smith, the PLA had integrated a command system that allowed its control center to observe the U.S. formation, analyze its disposition, and execute synchronized effects at a speed previously unseen.
It all happened at once. Ballistic missiles struck the airfield where MG Smith’s Apaches were located and surface-to-air missiles destroyed four of the six remaining Apaches that were airborne. Barrage rocket fire struck his main effort with thermobaric explosives, turning it into a parking lot of smoking hulks. His second brigade drove into a perfectly placed minefield. Finally, a raid with Chinese CAIC Z-10 attack helicopters found a seam in the division’s air defense and was ripping apart his division headquarters.
MG Smith reported the disaster to his corps commander: “It happened so fast! It was as if they had intelligence on all our priority systems and baited us into a perfect ambush. We are combat ineffective.”
This fictional account portrays the fruition of the vast investment the PLA is currently making in AI. The outcome is avoidable, but only if the United States overcomes the "Skynet" effect, referring to the AI system that took over the world in the 1984 movie The Terminator and which has scared the U.S. military out of aggressively funding anything resembling autonomous AI. Yet the speed at which a computer can observe, orient, decide, and act (known in military circles as the OODA loop) far outmatches the capability of the human mind. Keeping a person in the middle of decision-making—a "human in the loop"—slows the process enough to give adversaries a potentially decisive advantage.
China, on the other hand, is not hesitating. In March 2016, China had its "Sputnik moment" and realized the future potential of AI. At that time, the AI program AlphaGo defeated the 18-time world champion Lee Sedol in four out of five games in the ancient Chinese board game Go. When AlphaGo defeated Lee Sedol, it calculated probabilities for hundreds of courses of action—50 moves out. China embraced AI and began to build the foundation for its AI-enabled empire, with its military as a priority recipient. In 2017, China released its AI development plan, which detailed its intention to spend the equivalent of $150 billion on AI by 2020 and set a goal to be the world leader in AI by 2030. It may just get there: communist China's government is both willing to invest and able to direct efforts efficiently. Its commercial and military sectors work hand-in-hand on this project. At Chinese universities, commercial and military AI developers share the same labs, ideas, and funding. China is putting its money and its best minds toward developing AI capabilities for its military.
Compare this to the U.S., where some companies refuse to work with the military, and the government funds large, long-term acquisition programs far more readily than emerging innovative solutions. The Fiscal Year 2019 National Defense Authorization Act adds $1.2 billion for science and technology to advance hypersonic weapons, artificial intelligence, space constellation efforts, cybersecurity, and directed energy, among several more "high priority emerging technologies." This act is the archetype of the adage, "if everything is a priority, then nothing is a priority." Do the math and it is clear that funding for AI research and development still flounders in the low millions. Compare this to what the U.S. plans to spend on conventional systems: M1 Abrams tanks receive $1.52 billion for 135 vehicles, Virginia-class submarines get $7.4 billion for just two boats, and the Navy still argues that it needs more aircraft carriers at $13 billion apiece. Of course, when a crisis arises in Iran or Korea, an aircraft carrier moving to the region sends a message. Yet the Joint Force must both fight today's wars and prepare for the future, and its budget is not structured to meet tomorrow's challenges. Perhaps a look at U.S. history offers the warning we need to get serious about AI.
Historical examples inform what the U.S. should and should not do in integrating AI. A good parallel to the mental and cultural closed-mindedness toward AI is the debate over unrestricted submarine warfare during the interwar period. After the horrors of subs striking and sinking passenger liners in World War I, members of the 1921 Washington Naval Conference attempted to ban submarines entirely. Their efforts failed, but the anti-sub contingent succeeded at establishing restrictive—and impractical—rules. Submarines had to surface, then board and search civilian liners to confirm the presence of weapons before sinking a ship. Of course, a submarine that surfaces and reveals its location is as good as dead in the water. In 1941, as the potential for war increased, the national command granted U.S. Navy commanders approval to announce trade exclusion zones, meaning subs could attack and sink any vessel in an announced area without warning. Better, but still impractical, because publicly announced zones were easy to avoid. Finally, in the hours after the Japanese attacked Pearl Harbor, the U.S. government authorized unrestricted submarine warfare.
Because of these restrictions, the U.S. Navy had not adequately tested its newly developed torpedo technology. For the first year and a half of the war in the Pacific, U.S. torpedoes continually bounced off the sides of Japanese vessels without exploding, making for a very bad day for the crew of the sub that fired them. Fortunately, the U.S. had time—provided by the expansive Pacific Ocean—for the Navy to perfect the technology necessary to defeat Japan. It is unlikely that the U.S. will have the luxury of time in the future.
[T]he 1984 movie The Terminator … scared the U.S. military out of aggressively funding anything resembling autonomous AI
In order to keep up with its competition, the U.S. must overcome its fear of, and increase its investments in, AI research on military capabilities. It must suppress its appetite to rely on Cold War-era hardware, and start analyzing how it will fight the next major war, which will likely be an artificial-intelligence-enabled conflict. It must explore future AI systems, and then test them in training. Fortunately, there is historical precedent for this too.
While the U.S. failure to develop submarine technologies before WWII helps us understand what not to do, the Navy's use of wargaming during the interwar period provides an example of what to do. Between the World Wars, the U.S. Navy wargamed how to fight with technologies it anticipated would exist in the future. The Naval War College in Newport, Rhode Island, led wargames that employed not-yet-developed capabilities, which the Navy later followed with realistic exercises. For example, in its games the Navy increased the range of battleship guns, the size of aircraft carriers, and the range of planes flying off carrier decks. In the mid-1920s, it exercised its simulated attack patterns against the Panama Canal from a carrier too far off the coast to range the target with the shipborne aircraft of that day, and with more planes than the small carriers in its inventory could carry. Instead of flying off the carrier, the Navy flew attack aircraft from nearby bases in California. These wargames and exercises trained commanders on scenarios with the future capabilities they would likely have. After World War II, Fleet Admiral Chester Nimitz gave great credit to the role of wargames in the Navy's preparation, claiming the Navy had fought these battles over 300 times and as a result, "nothing that happened in the Pacific was strange or unexpected."
While the pre-WWII wargames succeeded, we can learn from their shortfalls as well. The simulations conducted at the Naval War College were correct to assume increases in the ranges of ships, naval gunfire, and aircraft. Yet Navy culture, rooted in studies of great battles from history—Jutland in particular—placed unbreakable faith in the value of the battleship, even in the face of strong evidence that the aircraft carrier would rule the next conflict. Leaders set wargame parameters that exaggerated the capabilities of the battleship's guns and air defenses and underestimated the lethality of torpedoes and planes. The Navy did not overcome its bias toward the battleship until after Pearl Harbor, at which point it rushed to build aircraft carriers and torpedoes.
The U.S. military today should be aggressively wargaming future AI capabilities, then following up with realistic training. These exercises should assume capabilities that experts have evidence will exist, including AI for command and control that, like AlphaGo, can explore hundreds of different courses of action and execute decisions based upon probabilities of success. Leaders overseeing wargames must be aware of the biases formed by their experiences and study, and focus on futuristic capabilities that will change how they fight. Ironically, the Navy may find that the aircraft carrier will not be its most important platform in the next conflict because of advances in long-range precision-guided missiles. Likewise, commanders may need to delegate many decisions to artificially intelligent computers to match their adversaries' speed of decision-making. There are, of course, moral implications to delegating some of these combat decisions to a computer, but testing them in wargames will help the military better frame these ethical dilemmas. Further, simulating future technologies and capturing the results could build the body of evidence necessary to convince policymakers to fund new capabilities, and help prepare American leaders for the realities of future war.
Unlike warfare in the mid-20th century, the U.S. will not have time to catch up if it ignores the indicators of how technologies are changing the character of war. Its adversaries are marshalling their resources and aggressively experimenting with artificial intelligence. Let’s not let an attack by an adversary’s AI become another “Sputnik moment” for America.
Tom Spahr is a U.S. Army officer and a graduate of the U.S. Army War College resident class of 2019. The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, the U.S. Army, or the Department of Defense.
Photo: A 1,000-foot wall of fire explodes below the F-22 Raptor during a high-speed pass maneuver at the “Mission Over Malmstrom” open house event in Great Falls, Montana in July 2019. The pyrotechnics are used to simulate live ordnance and the air-to-ground capabilities of the Raptor. The two-day event featured performances by aerial demonstration teams, flyovers and static displays.
Photo Credit: U.S. Air Force photo by 2nd Lt. Samuel Eckholm