Andrew Hill and Steve Gerras return to the studio with host Tom Spahr to further explore the role of artificial intelligence (AI) in national defense. This episode delves into the critical question of human oversight in AI-assisted lethal force decisions. Join the debate as they dissect the potential consequences of over-reliance on human intuition and the bottlenecks that human intervention can create in the process. They emphasize the need for transparency and open dialogue about AI's role in warfare. Steve and Andrew urge listeners to confront their own assumptions and engage in this crucial conversation. It's a great wrap-up to the topic and a companion to their compelling three-part article.
If we slow the AI to human speed, we’re going to lose.
Andrew Hill is Professor of Strategic Management in the Department of Command, Leadership, and Management (DCLM) at the U.S. Army War College. Prior to rejoining the War College in 2023, Dr. Hill was the inaugural director of Lehigh Ventures Lab, a startup incubator and accelerator at Lehigh University. From 2011-2019, Dr. Hill was a member of the faculty at the U.S. Army War College. In 2017, he was appointed as the inaugural U.S. Army War College Foundation Chair of Strategic Leadership. Dr. Hill is also the founder and former Director of the Carlisle Scholars Program, and the founder and former Editor-in-Chief of WAR ROOM.
Stephen Gerras is Professor of Behavioral Science at the U.S. Army War College. Colonel (Retired) Gerras served in the Army for over 25 years, including commanding a light infantry company and a transportation battalion, teaching leadership at West Point, and serving as the Chief of Operations and Agreements for the Office of Defense Cooperation in Ankara, Turkey during Operations Enduring and Iraqi Freedom. He holds a B.S. from the U.S. Military Academy and an M.S. and Ph.D. in Industrial and Organizational Psychology from Penn State University.
Thomas W. Spahr is the DeSerio Chair of Theater and Strategic Intelligence at the U.S. Army War College. He is a retired colonel in the U.S. Army and holds a Ph.D. in History from The Ohio State University. He teaches courses at the Army War College on Military Campaigning and Intelligence.
The views expressed in this presentation are those of the speakers and do not necessarily reflect those of the U.S. Army War College, U.S. Army, or Department of Defense.
Photo Credit: Gemini
While "speed" is important to military (and other) decision-making, are not the quality of the decision, and the outcome achieved by that decision, of equal or even greater importance than "speed"?
Thus the question: Can an AI beat a human not only in the speed of decision-making, but also in the quality of the outcomes achieved by that decision-making?
Possibly stated another way, regarding decision-making:
a. While "if we slow the AI to human speed, we're going to lose,"
b. Are we not likewise likely to lose if we choose speed of decision-making over the quality of decision-making outcomes?
(Herein, can we "war-game" or otherwise test this, and get answers to the questions immediately above, by using cases such as the Cuban Missile Crisis?)
Speed is a measure of performance, not of effectiveness. Speed and quality do not go hand in hand; one does not guarantee the other.
So let's attempt to use "Lucy and the Chocolate Factory," and "an AI and the Chocolate Factory," to address some of the matters which may, or may not, have been adequately discussed in this series of podcasts. Here goes:
1. In this first scenario, we have both (a) a "general knowledge" Lucy, who is neither trained nor experienced in wrapping chocolates on a conveyor belt, nor in doing that job correctly as the belt begins to move ever faster, and (b) a "general knowledge" AI which, likewise, is neither trained nor experienced in wrapping chocolates on a conveyor belt, nor in doing that job correctly as the belt begins to move ever faster.
Question, as relates to this first scenario:
Who do we think will have the better chance of "improvising, adapting, and overcoming," so as to deal adequately with these problems, or with other problems which might become manifest in this chocolate factory: Lucy, or the AI?
2. In the second scenario, both Lucy and the AI are (a) exceptionally well trained and experienced in wrapping chocolates on a conveyor belt and (b) exceptionally well trained and experienced in doing that job correctly, even as the belt begins to move ever faster.
Question, as relates to this second scenario:
Who do we think will have the better chance of "improvising, adapting, and overcoming," so as to deal adequately with these problems, or with other problems which might become manifest in this chocolate factory: Lucy, or the AI?
(I am sure I did not do this well enough. In this regard, I hope that others will take the time to improve on my model and my attempt here, or, better yet, present their own questions and comparison models.)
As relates to the title and subject of this series of podcasts, to wit: "BEYOND INTUITION: AI'S ROLE IN STRATEGIC DECISION-MAKING," let me make some observations and ask some questions relating mainly to those terms and concepts:
My Item No. 1: First, regarding the term and concept of "strategic decision-making." Suppose it is defined, for example, as the process of identifying the best way to achieve long-term goals and objectives, and that it entails, for example: (a) clearly defining the problem, the strategic goal, or the opportunity to be exploited; (b) gathering relevant and reliable information to guide the decision on how to achieve those objectives; (c) generating a wide range of realistic and feasible options; and (d) selecting the option that best meets the needs of achieving those long-term goals and objectives.
If that is indeed a proper definition of "strategic decision-making," would I be correct to point out that such decision-making (a) is unlikely to need, or want, to be made quickly; (b) is unlikely to rely on intuition rather than reasoning; and (c) is unlikely to be dominated or determined by AIs rather than by humans?
(As to my suggestion above, might we consider, for example, the following as a proper definition of "intuition": the ability to understand something immediately, without the need for conscious reasoning?)
Bottom Line Thought — Based on My Item No. 1 Above:
Thus, if we are talking about "strategic decision-making" here, then it would seem that "intuition" really would not apply, really would not be useful or relevant to this process, and thus really is not something that we would need, or want, AI's "speedy" help to achieve?
My Item No. 2: Regarding such (tactical?) things as "kill chains" and "kill decisions," in which AI's "speedy" capabilities might prove useful or even crucial, a question: Is an AI really making any decisions on its own here? Or is the AI simply executing decisions (a) made previously by humans (for example: if enemy personnel enter here, kill them) and (b) bounded by parameters which humans previously set in the AI's "brain"?
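To make that distinction concrete, here is a minimal, purely notional sketch (every name, parameter, and rule below is hypothetical and invented for illustration, not drawn from any real system): the humans author the criteria in advance, and the machine's only contribution is to evaluate those pre-set criteria very quickly.

```python
from dataclasses import dataclass

@dataclass
class EngagementCriteria:
    """Parameters authored by humans in advance; this is where the 'decision' actually lives."""
    authorized_zone: tuple   # (min_x, min_y, max_x, max_y) box approved by humans
    hostile_classes: set     # target classes humans pre-approved
    min_confidence: float    # classification confidence humans required

def within_zone(position, zone):
    """Check whether a reported position falls inside the human-approved box."""
    x, y = position
    min_x, min_y, max_x, max_y = zone
    return min_x <= x <= max_x and min_y <= y <= max_y

def machine_evaluates(track, criteria):
    """The machine's 'speedy' step: it only applies the human-set parameters to incoming data."""
    return (
        track["class"] in criteria.hostile_classes
        and track["confidence"] >= criteria.min_confidence
        and within_zone(track["position"], criteria.authorized_zone)
    )

# Humans decide in advance; the machine merely applies that decision at speed.
criteria = EngagementCriteria(
    authorized_zone=(0, 0, 10, 10),
    hostile_classes={"armored_vehicle"},
    min_confidence=0.95,
)
track = {"class": "armored_vehicle", "confidence": 0.97, "position": (3, 4)}
print(machine_evaluates(track, criteria))  # True: the human-set criteria are met
```

On this reading, the "decision" resides entirely in the human-authored criteria; the machine's advantage lies only in how quickly it can apply that pre-made decision to each new piece of data.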
(As you can see — as to certain central terms and concepts of this series of podcasts — I need some real help here!)
Addendum:
Given my definition of "strategic decision-making" in my comment immediately above, perhaps the "Bottom Line Thought" to my Item No. 1 should have suggested that NEITHER a human's "speedy" intuition NOR an AI's "speedy" capabilities are needed, relevant, or useful to the "strategic decision-making" process?
(This suggests that something more like "tactical decision-making" might be the better place to consider a contest between, and questions relating to, human intuitive "speediness" versus AI "speediness"?)