January 5, 2025
In September 2024, Andrew Hill and Steve Gerras authored a compelling three-part series that explored the evolving role of artificial intelligence (AI) in national defense. They provocatively argued that the conventional wisdom is wrong: human intuition is not indispensable, even though we might all wish it were. Recognizing the significance of this debate, War Room invited the authors to join host Tom Spahr for a deeper dive into their perspective. This episode, the first of a two-part discussion, dissects the complexities of human intuition, examining its inherent limitations and the potential for AI to surpass human capabilities in an increasingly automated battlespace.

I think our thesis, if I were to break it down, is that biology has no inherent advantage over technology. Breaking that up a little bit, we believe the supposed indispensable superiority of human intuition is a dangerous assumption, one that is unjustified by the facts.

Andrew Hill is Professor of Strategic Management in the Department of Command, Leadership, and Management (DCLM) at the U.S. Army War College. Prior to rejoining the War College in 2023, Dr. Hill was the inaugural director of Lehigh Ventures Lab, a startup incubator and accelerator at Lehigh University. From 2011-2019, Dr. Hill was a member of the faculty at the U.S. Army War College. In 2017, he was appointed as the inaugural U.S. Army War College Foundation Chair of Strategic Leadership. Dr. Hill is also the founder and former Director of the Carlisle Scholars Program, and the founder and former Editor-in-Chief of WAR ROOM.

Stephen Gerras is Professor of Behavioral Science at the U.S. Army War College. Colonel (Retired) Gerras served in the Army for over 25 years, including commanding a light infantry company and a transportation battalion, teaching leadership at West Point, and serving as the Chief of Operations and Agreements for the Office of Defense Cooperation in Ankara, Turkey during Operations Enduring and Iraqi Freedom. He holds a B.S. from the U.S. Military Academy and an M.S. and Ph.D. in Industrial and Organizational Psychology from Penn State University.

Thomas W. Spahr is the DeSerio Chair of Theater and Strategic Intelligence at the U.S. Army War College. He is a retired colonel in the U.S. Army and holds a Ph.D. in History from The Ohio State University. He teaches courses at the Army War College on Military Campaigning and Intelligence.

The views expressed in this presentation are those of the speakers and do not necessarily reflect those of the U.S. Army War College, U.S. Army, or Department of Defense.

Photo Credit: Gemini

3 thoughts on “BEYOND INTUITION: AI’S ROLE IN STRATEGIC DECISION-MAKING (PART 1)”

  1. With regard to the problem of correctly anticipating the unpredictable, and often even illogical, decisions of various commanders in various scenarios: (a) who has the “edge,” AI or human intuition, and (b) why?

    1. A different, but possibly related, or at least possibly relevant, question:

      If an AI, rather than a human, is “running the show” and making the decisions, does this give the entity using the AI an advantage or a disadvantage, given that:

      a. The decisions made by AIs in various scenarios might be much more predictable, because the AI’s decisions are based more on logic, data, patterns, etc., whereas:

      b. The decisions made by humans in various scenarios might be much less predictable, because a human’s decisions may be based on illogic, intuition, a “gut feeling,” etc.?

      1. Re: my question in the comment immediately above, would it be proper to suggest that:

        a. While the decisions made and/or recommended by the AI might be more predictable, because they are more likely to be based on such things as historical examples and the “tried and proven,”

        b. The decisions made and/or recommended by the human elements might be less predictable, because they are more likely to be based on the “here and now” and, thus, more likely to include such things as experimentation?
