“OMG! Did you see Kim Jong Un’s speech about the fragility of democracy? You didn’t. Oh, I’ll send you the link.”
Except, of course, the video isn’t real. But that does not mean it won’t spread like wildfire and be believed by those who want to believe it. Artificial-intelligence-enhanced deep-fake audiovisuals will be a game changer for information operations, but getting there before our enemies requires a sense of urgency.
Per military doctrine, information operations focus on degrading the adversary’s decision-making ability, and nothing would do this better than a deep fake that gives orders, tells troops to stand down, or shows a leader collaborating with the enemy. Even the mere accusation that a video is a deep fake can cause political unrest, as was seen in Gabon.
Within the information environment, the human-centric cognitive dimension remains most important. The U.S. joint force can destroy enemy combatants, but a war will not end so long as the enemy’s will to fight survives. The U.S. joint force saw this in Iraq, where insurgents went so far as to publish a magazine in PDF format to promote their alleged victories.
In Iraq, the U.S. joint force never lost a direct battle to the terrorists and insurgents, but that did not stop these groups from fighting, largely because their propaganda inspired combatants to keep up the fight. Indeed, ISIS’s former leader, Abu Bakr al-Baghdadi, periodically released videos calling on his supporters to continue fighting.
In the age of hyper-realistic deep fakes, this could become a common theme even after a target is neutralized. After all, nothing impacts the cognitive dimension more than a manipulated, hyper-realistic audiovisual designed to motivate, whether it features a charismatic leader or propaganda that enrages viewers. If digital artists can bring Salvador Dali or Tupac Shakur back to “life,” it is only a matter of time before Usama Bin Laden or another deceased terrorist continues to motivate followers from the great beyond.
Visuals are a powerful marketing tool that can influence consumer behavior, but relatively few studies have examined visuals as they relate to a combat zone. In the private sector, one research study concluded that, after three days, users retained almost 65% of visual information versus a fraction of that for written or spoken communication. If the same holds true on the battlefield, a hyper-realistic deep-fake audiovisual would be a superior means of disrupting an adversary’s decision-making ability.
At the moment, deep-fake technology is rudimentary and still in development; its main use today is pornography. That will change in the coming years with advances in the artificial-intelligence capability known as the generative adversarial network (GAN). A GAN can produce a realistic visual from a single image in seconds, though the accompanying audio is typically deficient. These synthetic visuals can be used either in support of U.S. objectives or against them.
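The adversarial loop at the heart of a GAN can be sketched in a few lines. The toy below is a minimal illustration only, not any real deep-fake system; every name and number in it is invented for the example. A one-dimensional generator learns to mimic a “real” data distribution because a discriminator keeps penalizing samples that look fake; scaled up to neural networks over pixels and frames, the same dynamic is what produces realistic fake imagery.

```python
import numpy as np

# Toy GAN: "real" data are scalars drawn from N(4, 1). The generator must
# learn to produce samples that the discriminator cannot tell apart from them.
rng = np.random.default_rng(0)

# Generator: g(z) = a*z + b, with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the probability that x is "real"
w, c = 1.0, 0.0

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
lr, batch = 0.03, 64

for _ in range(4000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake), i.e. make fakes look real
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's offset b should have drifted toward the
# real mean of 4.0, purely because the discriminator punished anything else.
print(f"generator offset b = {b:.2f} (real mean is 4.0)")
```

The generator never sees the real data directly; it only sees the discriminator’s verdicts, which is what makes the technique so effective at producing convincing forgeries.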
Near-peer competitors already use GAN models. Russia’s Internet Research Agency, the same agency that meddled in the 2016 U.S. election, created Facebook pages with GAN-generated profile photos to lend apparent authenticity to a fake news site known as Peace Data, which even went so far as to hire U.S. journalists to write stories. These efforts were an apparent, if marginal, attempt to influence the 2020 U.S. election. The People’s Republic of China (PRC) uses GAN models to support 24/7 news reporting.
This is just the beginning of a deep-fake arms race. Deep-fake technology is in the nascent stages of development, but the U.S. joint force can expect to see the weaponization of ever-harder-to-detect deep fakes, up to the notorious “perfect” or hyper-realistic deep fake. Digital fingerprinting, a set of techniques for detecting deep fakes, may keep pace for now, but detection will fall behind generation if further resources are not dedicated to it.
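Digital fingerprinting can take several forms; one of the simplest is cryptographic hashing, sketched below. This is an illustrative example of the general idea, not DARPA’s method, and the byte strings are made up. Authentic footage is hashed at capture, and any later alteration, even a single flipped byte, changes the fingerprint and flags the copy as suspect.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that serves as a digital fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for an authentic video file captured at the source.
original = b"\x00\x01\x02 authentic video frames \x03\x04"
trusted_print = fingerprint(original)  # stored alongside the authentic copy

# An adversary flips a single byte deep inside the file.
tampered = bytearray(original)
tampered[5] ^= 0xFF
tampered = bytes(tampered)

# The fingerprints no longer match, so the altered copy is flagged.
print(fingerprint(tampered) == trusted_print)  # prints False
```

The catch, and the reason detection needs sustained investment, is that hashing only proves a file differs from a known original; it cannot, on its own, identify a wholly fabricated video that never had a trusted fingerprint in the first place.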
The U.S. government is attempting to rein in deep-fake technology, and the U.S. joint force should ensure that it understands how to detect deep fakes. On that front, the Defense Advanced Research Projects Agency (DARPA) is developing a number of programs that will make detection far easier for military members by looking at basic physics and mismatched accessories.
To ensure a continued competitive advantage, the U.S. joint force must closely examine deep-fake technology in the near future and incorporate it in military operations to support its own objectives.
Don’t think it matters? Video teleconferences now often stand in for face-to-face communication. If a call is hacked and a deep-fake representation of a commander issues orders in his or her place, any mission could be seriously jeopardized.
Potential scenarios are almost limitless. Hackers could penetrate the networks of the Democratic People’s Republic of Korea and use a deep fake to order a nuclear strike. Likewise, a leadership decapitation strike could be rendered ineffective if the leader is continuously resurrected by a hyper-realistic deep-fake model inspiring followers to attack coalition forces.
These scenarios may seem far-fetched, but they are the types of challenges that deep-fake technology may soon pose to the U.S. joint force. Because this technology is already shaping the operating environment, it is imperative that U.S. service members at the lowest levels understand how to identify deep fakes and mitigate their influence.
Deep fakes could be a game changer for U.S. military operations as well. Psychological operators currently focus primarily on leaflet drops and the dissemination of products online through sock-puppet accounts. While these are important functions, the information environment has evolved: thanks to Photoshop and similar tools, edited images are so pervasive (and so cheap to produce) that almost anyone can create them. To truly influence an enemy within the information environment, hyper-realistic deep fakes with near-perfect audiovisual fidelity could stop enemy operations in their tracks.
Photoshopped images of malign actors in compromising positions may be useful, but delivering a hyper-realistic video portrayal of a high-value target takes psychological warfare to an entirely new level. Released through the right avenue, hyper-realistic audiovisuals can sow self-defeating insecurity within an enemy unit, and hyper-realistic deep fakes could be a game changer at every level of war: strategic, operational, and tactical.
The U.S. joint force needs to do everything in its power to harness this technology. Deep-fake technology must be taught within professional military education and incorporated into the military planning process.
Moreover, the U.S. joint force cannot afford to have critical detection technology such as digital fingerprinting bottlenecked at places like DARPA. The U.S. joint force needs this technology now. It must be pushed down to the lowest levels of military elements to ensure that deep fakes can be identified and flagged before they impact the information environment.
Clearly, GAN models are the ultimate military deception tool, and they need to be incorporated into all military plans. Otherwise, adversaries will begin to use them against U.S. assets at an alarming rate, and the examples shown here are just the tip of the iceberg.
Maj. Matthew Fecteau is a graduate of the Harvard Kennedy School of Government and an information operations officer with the U.S. Army. Follow him on Twitter @matthewfecteau. He can be reached at email@example.com. The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, the U.S. Army, or the Department of Defense.
Photo Description: Screen capture from the DeepTomCruise series. Left: Chris Ume, VFX and AI Artist and creator of the series. Right: Digitally created deep fake of actor Tom Cruise.