December 21, 2024


During maneuvers, the 5-20 Infantry battalion staff begins planning an attack on Objective MURRAY, a complex piece of terrain surrounded by forests on three sides, with an expected enemy company in the area. The approach to the objective is equally complex, with rolling hills, patches of forest, and a fordable stream. The enemy company could be anywhere along the approach or on the objective, or behind the objective ready to counterattack. The intelligence officer, CPT Tran, looks over the terrain on a map and opens the Enemy Course of Action Analysis Tool (ECAAT), artificially intelligent software that uses a wide variety of variables to develop potential enemy actions and estimate their likelihood. CPT Tran zooms in on the program’s map, uses his mouse to define the battalion area of operations, inputs the expected enemy force, and selects “defend” as the enemy mission. The AI instantly arrays the enemy in bright red silhouettes in its most likely course of action, with dimmer red silhouettes representing less likely arrays of forces. Tran reviews the analysis carefully, exploring the three most likely AI-generated enemy courses of action.

In the most likely case, ECAAT put three enemy tanks in a position with fields of fire of only 1,200 meters, rather than in another position that would give them 2,100 meters. Why? CPT Tran examined the terrain again and then noticed the weather forecast called for dense fog. That was the answer. The tanks would not be able to engage beyond 1,000 meters regardless of the terrain, making ECAAT’s template the more likely of the two possibilities. Tran did not always agree with the AI, but every AI prediction was worth considering. It was always logical, and it always helped him better understand the situation. As planning continued, he would update ECAAT with any new intelligence on enemy locations, and ECAAT would immediately revise its enemy course of action. Once the battalion was on the attack, the AI was indispensable. Intelligence and contact reports would come in fast, and the AI’s ability to analyze the enemy instantly was invaluable. After the action was complete, CPT Tran would send information on the engagement to the ECAAT global database for inclusion in the AI’s training data. The more ECAAT was used, the better it got. CPT Tran could not believe that just a few years ago the staff had done all this work manually, with only maps and manuals.

The Problem

The preceding vignette is fiction. The U.S. Army does not have an automated enemy analysis tool at the tactical level. When maneuver battalion staffs plan operations, they manually analyze terrain and weather to predict enemy courses of action, considering how an enemy commander could most effectively fight. Staffs plan their own friendly course of action against this analysis. The process works much the same as it did 30 years ago. Staffs today have more intelligence products (imagery, UAS video, etc.), and computers help display enemy and friendly courses of action, but no tool analyzes terrain, weather, and enemy weapons to generate an optimal enemy course of action. Given recent breakthroughs in artificial intelligence, specifically the ability to win at abstract strategy games, this type of tool is now feasible. The Army needs it.

Computers mastered chess in the late 1990s, but it was not until 2016 that AlphaGo, a program developed by Google DeepMind, beat a world champion at the board game Go. Go offers vastly more possible moves at each turn than chess (chess averages about 35, while Go averages about 250). AlphaGo’s victory over Lee Sedol was a remarkable breakthrough. Go is a complex game that requires intuition and strategic thinking. Programmers designed AlphaGo to learn from tens of millions of board positions drawn from past Go matches, a technique known as machine learning.

The next challenge for researchers is to design an AI that can defeat world champions at computer strategy games. Such games feature an enormous number of possible moves, and the environment changes constantly. In board games like Go and chess, players can see the entire board and take turns. Computer strategy games, by contrast, typically run in real time, demand long-term strategy, and offer countless more possible moves than Go. Right now, AI programs struggle to beat capable human adversaries, but it is only a matter of time—and probably not much time—before they gain mastery. Researchers point to the need for more widely available training data to improve AI capabilities in these games, and they have already made substantial progress. The research company OpenAI developed an AI that defeated humans at Dota 2, a popular computer strategy game, and DeepMind is developing a similar tool to play StarCraft II.

Because of this complexity, developing an AI enemy analysis tool will be more like mastering computer strategy games than mastering Go. The AI would have to understand how to best employ weapons in all types of terrain, how weather affects operations, how to synchronize effects, how to use deception, how to resupply, and more. On top of these complicated concepts, the battlefield environment changes constantly. Nevertheless, it is only a matter of time before AI can do all of this effectively.

The goal of an AI enemy course of action tool would be to determine the range of most likely enemy courses of action from a given set of inputs, assigning each course of action a probability that is continuously updated as new data streams in.
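
To make this concrete, the mathematical core of such a tool could be as simple as Bayes’ rule: treat each candidate enemy course of action as a hypothesis and revise its probability as each new report arrives. The sketch below is purely illustrative; the course-of-action names and likelihood numbers are invented, not the output of any fielded system.

```python
# Minimal sketch: updating enemy course-of-action probabilities with Bayes' rule.
# All names and numbers below are hypothetical illustrations.

def update(priors, likelihoods):
    """P(COA | report) is proportional to P(report | COA) * P(COA)."""
    posterior = {coa: priors[coa] * likelihoods[coa] for coa in priors}
    total = sum(posterior.values())
    return {coa: p / total for coa, p in posterior.items()}

# Initial estimate from terrain, weather, and doctrine analysis.
coa_probs = {"defend_forward": 0.5, "defend_reverse_slope": 0.3, "counterattack": 0.2}

# A scout reports tanks on the reverse slope. How likely is that report
# under each hypothesis? (Invented values for illustration.)
report_fit = {"defend_forward": 0.1, "defend_reverse_slope": 0.7, "counterattack": 0.3}

coa_probs = update(coa_probs, report_fit)
print(coa_probs)  # the reverse-slope defense is now the most likely course of action
```

Each new intelligence or contact report would trigger another update, so the tool’s estimate sharpens continuously as the fight develops.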

There are two basic ways to develop such a tool. The first, like AlphaGo, trains an AI on historical data sets to predict enemy courses of action. The second approach is reinforcement learning, which is how AlphaGo Zero, a more powerful AI than AlphaGo, mastered Go in days. Reinforcement learning requires no historical data sets: the AI is programmed with the rules of the game and then plays itself millions of times until it masters it. For reinforcement learning to work, the AI requires a game or a model.
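
A minimal sketch of that self-play idea, using tabular Q-learning, appears below. The environment interface (`reset`, `legal_actions`, `step`) is hypothetical, and real systems such as AlphaGo Zero use deep neural networks and tree search rather than a simple lookup table; the point is only to show how an agent can improve with no historical data at all.

```python
import random
from collections import defaultdict

def self_play_training(env, episodes=1_000_000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: learn action values purely from self-generated games."""
    q = defaultdict(float)  # maps (state, action) to an estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.legal_actions(state)
            if random.random() < epsilon:          # explore a random move
                action = random.choice(actions)
            else:                                  # exploit what it has learned so far
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Nudge the value estimate toward reward plus discounted future value.
            future = 0.0 if done else max(q[(next_state, a)]
                                          for a in env.legal_actions(next_state))
            q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
            state = next_state
    return q
```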

Various efforts are underway in the U.S. Army Training and Doctrine Command (TRADOC) to begin development of an enemy course of action tool.

Historical Data Approaches

The National Training Center (NTC) collects and archives an enormous amount of data during training rotations. The data captures the basic elements of the exercise: when vehicles move and shoot, when they are hit, and how notional elements (such as aircraft and artillery) affect the battle.

Through machine learning, a computer could learn to fight effectively at the NTC. It could learn where vehicles are destroyed (in open terrain at low elevation, for example), where they most effectively destroy enemy vehicles (perhaps behind cover at higher elevation), and where they are most susceptible to enemy artillery.
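
As a rough illustration of what that learning might look like, the sketch below trains an off-the-shelf classifier to predict vehicle loss from a handful of engagement features. The features and records are invented stand-ins; real NTC data would be far richer and far larger.

```python
from sklearn.ensemble import RandomForestClassifier

# Each record: [terrain_openness 0-1, elevation_m, behind_cover 0/1, range_to_enemy_m]
# Hypothetical examples standing in for archived NTC engagement data.
engagements = [
    [0.9, 610, 0, 1800],   # open terrain, low ground, exposed
    [0.2, 745, 1, 2400],   # wooded, high ground, behind cover
    [0.8, 590, 0, 1500],
    [0.3, 720, 1, 2600],
]
destroyed = [1, 0, 1, 0]   # label: was the vehicle destroyed in the engagement?

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(engagements, destroyed)

# Estimate the risk of a proposed fighting position.
risk = model.predict_proba([[0.7, 630, 0, 1700]])[0][1]
print(f"probability of loss: {risk:.2f}")
```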

There are challenges with using NTC data to train an AI. Only vehicle data is captured; the NTC does not track dismounted soldiers. The data records when vehicles are hit by dismounted enemy weapon systems, but not where the fire came from. There also may not be enough data for a computer to learn from. TRADOC has approximately fifty training rotations’ worth of data, but programmers usually need far more for a computer to learn effectively. One solution is to train the AI at a lower level first: the fifty brigade-size rotations contain many more company-level engagements.

One Semi-Automated Forces (OneSAF) is a constructive simulation with high-fidelity terrain that models units from fire team to battalion level. Using archived iterations as training sets, a machine could learn how to win at OneSAF. If successful, the machine could become the opposing force for simulations or be used as an enemy analysis tool. For example, a battalion in the field could load OneSAF on a computer and use it to produce enemy courses of action. OneSAF has limitations, too. It is an old simulation, and it may not be cost effective to create a sophisticated AI for an aging system. OneSAF is also notoriously difficult to use—users require significant training to operate it.


WARSIM is a constructive simulation fielded in 2004 and designed for brigade through echelons above corps. Army simulation centers use WARSIM extensively. Unlike OneSAF, WARSIM does not use high-fidelity terrain and does not model individual weapon systems—its focus is unit-on-unit combat. As an enemy course of action tool, WARSIM could only be used above the brigade level. Like OneSAF, it is an old system, and it might not be cost effective to develop an AI for it.

Reinforcement Learning Approaches

To enable reinforcement learning, programmers first have to create an AI environment: a modified version of the game in which the data streams directly to the AI player. A human plays a game by watching a screen and typing instructions or using a mouse; an AI takes its input directly from the game in the form of data—it does not see a screen. Once the AI environment exists, programmers develop the AI player.
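
The sketch below shows the shape such an environment wrapper could take, following the observation-action-reward convention popularized by toolkits like OpenAI Gym. The `sim_client` interface is entirely hypothetical; OneSAF exposes no such API today, so a real wrapper would have to be built against whatever interface the simulation actually provides.

```python
class EnemyAnalysisEnv:
    """Exposes a combat simulation to an AI as data, not as pixels on a screen."""

    def __init__(self, sim_client):
        self.sim = sim_client  # hypothetical bridge into the simulation

    def reset(self):
        """Start a fresh scenario and return the initial observation."""
        self.sim.load_scenario()
        return self.sim.get_state()       # unit positions, terrain, weather as data

    def step(self, action):
        """Apply one AI decision (e.g., movement or engagement orders), advance time."""
        self.sim.issue_orders(action)
        self.sim.advance(seconds=10)
        observation = self.sim.get_state()
        reward = self.sim.score_delta()   # requires a hand-built reward, discussed below
        done = self.sim.scenario_over()
        return observation, reward, done
```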

Using reinforcement learning similar to that used by OpenAI and AlphaGo Zero, an AI could learn to play OneSAF. Like the civilian programs, it would play OneSAF millions of times, learning through trial and error what works and what does not. Because the AI generates its own experience by playing itself, this approach requires far fewer training data sets.

There are several challenges to this. First, designing an AI player is cutting-edge technology still being developed by the most prominent high-tech companies. A OneSAF AI cannot simply be purchased on the market—it would have to be developed by researchers. It is also more difficult for an AI to master OneSAF than Go, because the game has no rules with clear rewards and punishments—these would have to be added creatively, for example by rewarding the computer for destroying opposing forces or seizing key terrain. Moreover, playing OneSAF is very difficult. It is not an intuitive, easy-to-use simulation; experienced soldiers normally require 40 hours of training to learn to operate it.
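
What would such creatively added rewards look like? A minimal sketch appears below; the objective names and point values are invented for illustration and would need extensive experimentation to tune.

```python
KEY_TERRAIN = {"hill_781", "bridge_crossing"}  # hypothetical objectives

def reward(before, after):
    """Score the change between two simulation states (dicts of counts and sets)."""
    r = 0.0
    r += 1.0 * (before["enemy_vehicles"] - after["enemy_vehicles"])  # attrit the enemy
    r -= 1.0 * (before["own_vehicles"] - after["own_vehicles"])      # preserve the force
    gained = (after["terrain_held"] - before["terrain_held"]) & KEY_TERRAIN
    lost = (before["terrain_held"] - after["terrain_held"]) & KEY_TERRAIN
    return r + 5.0 * len(gained) - 5.0 * len(lost)
```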

Despite the difficulties, this technology would be a worthy investment. In addition to being an enemy analysis tool, an AI OneSAF player could help develop friendly courses of action and replace the expensive contractors who play the opposing force during simulations. In the meantime, other options are available and should be considered.

For example, Flashpoint Campaigns is a civilian game that simulates modern ground combat. Dr. Benjamin Jensen at U.S. Marine Corps University is developing a decision support tool using the game as a model, and an AI player could be developed for it. The game is new compared to OneSAF and WARSIM and replicates ground combat well enough, though it has not been officially approved for Army simulations. Even so, creating an AI for it with reinforcement learning would be a useful proof of principle. The Division Exercise Training and Review System (DXTRS) is the official Army counterpart to Flashpoint Campaigns: an easy-to-use, low-overhead simulation meant for use outside of simulation centers. An AI player could be developed for this game as well.

The Way Ahead

The Army must use existing data and simulated environments to create tools that integrate AI more thoroughly into the development of command decisions. Although the U.S. Army has made significant progress with robotic and autonomous systems, AI-enabled planning and control systems lag far behind. To some extent, this may reflect deeply held assumptions about the role of humans in warfare. Military professionals, like the members of any field of expertise threatened by increasingly capable AI, wish to preserve their own supremacy. Another reason may be the difficulty of developing cutting-edge technology through the military’s lethargic acquisition process. Yet the future is coming whether we like it or not. AI-based enemy analysis tools are possible now, and they represent a logical starting point for learning how to work with and exploit AI systems in commanding and controlling both human and robotic forces in war. Used across the Army, such systems would foster the development of AI in a wide variety of applications. AI will be integral to warfare in a future that is not far off. To be a leader in that future, the Army must act with urgency now.


Edward Ballanco is a graduate of the U.S. Army War College class of 2018. The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, U.S. Army, or Department of Defense.

Photo: South Korea’s Lee Sedol, the world’s top Go player, bows during a news conference ahead of a five-game match against Google’s artificial intelligence program AlphaGo, in Seoul, South Korea, March 8, 2016. AlphaGo would win the series four games to one.

Photo Credit: REUTERS/Kim Hong-Ji
