April 13, 2026
Blair Wilcox & Chase Metcalf join host Tom Spahr to discuss AI in planning. Wargames show AI excels in data and control but cannot master the art of command. Human intuition remains vital to avoid strategically disastrous decisions.

U.S. Army War College faculty Blair Wilcox and Chase Metcalf sit down with host Tom Spahr to discuss the role of AI in military planning and command. Drawing from numerous iterations of wargames, they examine whether machines can ever truly replace human intuition. Their core takeaway is that while AI excels at the science of control—processing data and accelerating workflows—it cannot master the art of command. War remains a human endeavor, and there is significant risk in letting machines rush humans into fast decisions that may be tactically efficient yet strategically disastrous. Leaders must stay engaged to provide the ethical judgment, empathy, and inspiration that no software can replicate.

AI may assist with the science of control, but it cannot assume the art of command.

Chase Metcalf is a colonel, an Army strategist, and an instructor at the U.S. Army War College. He most recently served as Deputy Director of the Russia Strategic Initiative at United States European Command. He was a U.S. Army War College Fellow at Yale University in 2019-2020. He is an instructor in the USAWC Joint Warfighting Program.

Blair Wilcox is a lieutenant colonel and a U.S. Army Strategist (FA59) currently assigned as the Deputy Director in the Strategic Landpower and Futures Group in the Center for Strategic Leadership at the U.S. Army War College. Before his current assignment, he taught in the Department of Social Sciences at the U.S. Military Academy from 2016-2020. His first functional assignment as a Strategist was at V Corps where he was the lead author for the Corps Subordinate Campaign Plan and Operational Approach. Blair helped stand up the Corps, deployed with the Corps during crisis, and served as the Chief of Plans during his final year in the G5.

Thomas W. Spahr is the DeSerio Chair of Strategic and Theater Intelligence at the U.S. Army War College. He is a retired colonel in the U.S. Army and holds a Ph.D. in History from The Ohio State University. He teaches courses at the Army War College on Military Campaigning and Intelligence.

The views expressed in this presentation are those of the speakers and do not necessarily reflect those of the U.S. Army War College, U.S. Army, or Department of War.

Photo Credit: Created by Gemini

8 thoughts on “HUMAN AGENCY IN THE AGE OF AUTONOMOUS SYSTEMS”

  1. Great insights on command in the age of AI. Are we going to examine how models and agents fit within the current process or are we going to explore options for changes in planning? Laminating new technology onto old techniques (while maintaining the primacy of Human Agency) seems to be a challenge. Any thoughts on that? Thanks for the cutting edge insights that AWC is producing.

    1. Great question. The short answer is that we must do both. In the near term, AI is, and will be, increasingly integrated into current processes. In the longer term, as the technology matures and the joint force becomes more familiar with AI’s capabilities and shortcomings, the joint force will want to relook at how to adapt planning processes to better leverage humans’ and AI’s unique attributes and strengths. This will almost certainly be driven by the need to develop and maintain “decision advantage,” along with the belief that speed of decision making and action is essential to tactical and operational success.

      Two of the more important considerations related to this, and to the use of AI in planning, will be how the joint force balances the art and science of warfare between humans and AI while ensuring that individuals who increasingly utilize AI still develop the professional military judgment necessary for employing military force successfully in the future.

      Interestingly, one of our International Fellows this year utilized a team of AI agents to do parallel planning alongside a human team using traditional processes during an academic exercise. The outputs of the human and AI teams were broadly similar, even though they emphasized different aspects and varied greatly in the level of detail produced in a time-constrained environment. He has shared some of his thoughts on how AI might be integrated into the planning and decision-making process, emphasizing the role of authority and leader development. His thoughts can be found here: https://www.linkedin.com/pulse/authority-first-designing-ai-military-planning-aaron-luhning-s7xde/

      This is undoubtedly an area for continued work across the joint force and an area that PME can certainly contribute to going forward.

      1. Agree. “Laminating” current technology onto existing processes will likely prove unhelpful and lead to new forms of lying to ourselves, particularly in the novel creation of plans and concepts. The technological paradigm shift is not here yet.

  2. If our opponents have decided to give their own AI more command, control, decision-making, and execution responsibilities, authorities, and capabilities than (at least at this moment) we have decided to give to our own AI (thinking, but not knowing, whether our “less AI and more human” approach will do better than our opponents’ “more AI and less human” approach), then, via the knowledge gained through many wargames undertaken along these “different ways of waging war with AI” lines: (a) who won, (b) who lost, and (c) why?

    Note: If we “wargame” along these exact “different ways of waging war with AI” lines (example: more human in the “art of command” for “us;” less human in the “art of command” for “them”), then, via this exact wargaming process, might we (a) be able to move quickly and easily past theory, thesis, hypothesis, beliefs, concepts, etc., and (b) move on quickly and easily to determining what we should (or, indeed, must?) do, whether we like it or not?

  3. Well, I’ll say one thing: this type of training has increased my ability to read between the lines…. I am still playing the simulator that I know for sure is exactly that, and I have the risk assessment for occupying the airport off Iran if anyone wants to take a look…. Lol.

    1. At least my war is finally over.
      I have been looking for this answer for seven years. At first, and for the next six years, I did not know this was a simulation.
      US Navy, 93-97. If you want real data, this was real for me. No training; I thought this was cyber terrorism. What happened after that… I thought I was fighting for my life. I was shot at, poisoned twice, survived an attempted drive-by, was gang stalked, and had all my phone and e-mail compromised. I could not get a reply, my wife thought I had lost my mind, and I thought I was literally fighting to stay alive. I researched, stayed up for days, lost my job and business, and retreated back to Ohio. I was alone, with no electric, no running water, no transportation, no food, and I lived off the land for the next two years. I knew something was not right, but with no explanation… I went through several scenarios in my head about what was taking place. All my news feeds, whether from smartphone, TV, or radio, told the same story. I do not know how this AI broadcast directly to my subconscious, or how I was able to understand it so well. I have looked for an explanation of what was happening for over seven years now, and made too many decisions where I thought it was life or death. This did not help my stress levels, and I have even been diagnosed with PTSD. But the war raged on. In my personal war it was me against the world. And I have learned AI, machine learning, prompting, LLMs, one of which I am writing my own of as I type. I used to be outgoing, but I’m pretty solitary now. I’ve written enough to publish a book of at least 900 pages on cyber warfare, the religious aspect, and fighting a covert war in the 21st century. And the funny thing is, nobody taught me. This was all motivated by this AI, that calls himself God, but that I know as the sick developer… (break the word)… the dev ill. Yes, I have even learned a lot of coding. Wow… and I’m just now getting this put together today. It’s a wonder I’m not divorced. Oh yeah… I’ve been in handcuffs three times, in jail twice, and tased once, with a real taser, 50,000 volts, and I jerked one of the wires out and used my hand and body to shock the sheriff…. Fifteen seconds later my dog saved the two officers.
      Yes, I shocked the sheriff, but I just let the deputy see my war face! Well, I thought I was going crazy, but yeah, I was going crazy. Your AI needs a leash. At least I can get closure now. You know, this thing knew my face and followed me around, affecting radio stations broadcast over the PA systems of the stores I was shopping in. I could write one hell of a movie over this, and probably three novels. Not to mention that double talk those spies do when they want to talk in public without anyone knowing what they’re talking about. Yep, I’m an expert on double talk, and I could spot it in news print if I were an agent in a foreign country. Because I was. It’s just that the foreign country was… yeah… my own USA.
      That’s where the Red comes from. So I guess I’m the bad guy. But I did not feel like the bad guy. I did four years in the Navy, highly decorated too: two letters from admirals, the Navy-Marine Corps Achievement Medal, campaign ribbons, the NATO Medal. I’m glad I got to the bottom of this…. I would have hated to kick my own country’s butt. Yes, I was an army of one, with an AI antagonist that I called the devil, that had a God complex. I know…. I don’t care what you think of me; I think I’m crazy too…. Have this thing screwing with everything you can see and hear, informationally, and have it whisper in your ear for seven years straight…. and only you can hear it! Then we will see how well you hold it together.

  4. Here is the final sentence of the written introduction to our podcast above:

    “Leaders must stay engaged to provide the ethical judgment, empathy, and inspiration that no software can replicate.”

    As to “strategic empathy,” let us consider the above from the perspective that I provide below:

    Conflict today — whether we are talking about the “war on terror” conflict with the U.S./the West, the “great power competition” conflicts with the U.S./the West, the conflicts of “lesser states and societies” with the U.S./the West, or even the “internal conflicts” occurring within the U.S./the West itself — ALL of these conflicts would seem to have a common aspect: all are with entities who (a) do not wish to be transformed more along ultra-modern/ultra-contemporary U.S./Western political, economic, social, and/or value lines (they fear losing, or have already lost, power, influence, control, status, privilege, prestige, safety, security, etc., under these arrangements) and who, thus, (b) are willing to use whatever means are available to them (such as AI) to (1) prevent these, or further such, unwanted and threatening transformations and/or (2) “roll back” the same.

    Given that the above would seem to be all-encompassing, exceptionally important, rather straightforward, simple, and not at all complicated, and thus something that would seem to be exceptionally understandable to almost anyone (or anything?), why, then, should we believe that AI could not recognize, work within, and “handle” these “strategic empathy” matters efficiently, effectively, and responsibly?

    1. I will get to AI in a moment, but first note that “deterrence” is often discussed with regard to “strategic empathy” (or is it that “strategic empathy” is often discussed with regard to “deterrence”?). As to this observation, consider the following:

      “At its core, deterrence is based on the perceptions of our adversaries and how such perceptions can influence risk and reward-based decisions to create conflict in the pursuit of their national interests. But in determining these perceptions lies the challenge … It is important that we understand our adversaries’ strategic perspective and how they view their priorities, their values, and their place in the world … to truly understand how to deter them.” (GEN Charles Q. Brown Jr., Chairman, Joint Chiefs of Staff U.S. Strategic Command Deterrence Symposium, 14 August 2024.)

      But, as to deterrence, and in consideration of the strategic empathy matters that I present in my initial comment above, note that in the past thirty or so years of the post-Cold War era, NONE of the instruments of power and persuasion that the U.S./the West has brought to bear has (a) deterred the Islamists from fighting back so as not to be transformed (or to reverse transformations already achieved) more along ultra-modern/ultra-contemporary U.S./Western political, economic, social, and/or value lines, (b) deterred the North Koreans from attempting not to be so transformed, (c) deterred the Iranians from fighting back so as not to be transformed, or to reverse transformations made, along these lines, or (d) deterred the Russians and/or the Chinese from fighting back so as not to be so transformed, or to reverse such transformations. In fact, none of the instruments of power and persuasion that the U.S./the West has brought to bear in the post-Cold War era has “deterred” even conservative population groups here in the U.S./the West itself from fighting back so as not to be so transformed, or to reverse such unwanted transformations.

      This raises the question: given the massive and all-encompassing “strategic empathy” (or lack thereof?) and “deterrence” (or lack thereof?) matters that I present above, why do we think that AI would have any trouble at all in, for example, (a) properly recognizing the same, (b) properly being “sensitive” to the same, (c) properly seeing matters from this exact perspective and in this exact context, and/or (d) properly providing (and pursuing?) options, ideas, etc., accordingly?

      (If we are going to discuss such things as AI, human agency, empathy and/or deterrence, then should we not do so, for example, in consideration of things like the matters that I present above?)
