December 12, 2024
At the end of 2019 the Office of the Under Secretary of Defense for Intelligence (OUSDI), in cooperation with WAR ROOM, announced an essay contest to generate new ideas and elevate thinking about insider threats and how we respond to and counter them. There was a fantastic response, and we were thrilled to see what everyone had to write on the topic. Ultimately, after two rounds of competitive judging, two essays rose to the top. And so it is with great pleasure that we present the runner-up’s submission. Stay tuned to WAR ROOM next week as we present the winning submission.


Human beings—people—are built to be active agents, not passive objects whose behavior is the value of a variable or the result of a functional equation.

In the counter-insider threat (CIT) world of the United States Department of Defense (DoD), the concept of a “behavioral indicator” is a primary basis for deciding whether an individual ought to be considered a threat to DoD resources. Examples of behaviors that are taken to indicate a potential threat range from hostility in the workplace, to being in debt, to breaking rules. Generally, behaviors are taken to be discrete data points, more or less self-evident, and separable from the person who is doing the behaving. Given this, understanding what people are doing and why is presented as a data problem requiring analysis that yields an objective result, rather than a problem of meaning requiring an interpretation that yields a judgment. The behavioral indicator concept justifies the idea that countering insider threats is about gathering seemingly disparate, discrete data and assembling it to reveal the truth about what an individual is really doing.

Unfortunately, the behavioral indicator concept is scientifically indefensible. As applied to people, the phrase rests on the idea that people behave identically to machines because they are machines, an idea that has a long history and enjoys widespread support but is demonstrably wrong. Analytical methods based on this idea are equally wrong, and this calls into question not only what behaviors allegedly indicate, but also attempts to use such claims to justify organizational reactions to ‘problematic’ personnel. Consequently, the continued use of the concept to counter insider threats is self-defeating for the DoD. Why is this the case?

The logic of the idea is this: the behavior of machines is a function of their internal mechanisms; thus, observable behaviors reveal the objective, internal state of the machine (much like the rotation of a clock’s hands is taken to reveal the objective, internal state of the clock). Since people are machines, their behaviors reveal the inner state—the functioning—of the biopsychological systems causing the behavior. In observing behavior, we are therefore justified in making claims about the true nature of the individual.

Applied to the CIT world, any behavior that the community deems strange, problematic, maladaptive, or threatening can be understood as prima facie evidence that there is a problem with the individual’s biopsychological mechanisms. The logic not only promotes concentration on negative aspects of the individual, but also taints conclusions about what sort of problem is indicated by the behavior, whether the problem is serious or not, and even whether the behavior indicates a threat.

This is the view that currently dominates the DoD CIT community, largely because American military and political leaders have, for nearly a century, turned to disciplines built on the idea that people are machines. A core component of the DoD’s educational effort on detecting insider threats instantiates a version of this idea: Shaw and Sellers’s (2015) “Critical-Path Method.” The method is presented as effective in analyzing behavior—data sets—for insider threats because the authors rely on an understanding of behavior as mechanical: “Personal predispositions” (e.g., a psychiatric disorder) are triggered by external “stressors” (e.g., a poor performance review), producing “concerning behaviors.” This viewpoint explains behavior in terms of stimulus-response and cause-effect. Because these processes are mechanical and impersonal—they happen to people, not because of them—the actual people doing the behaving are presented as unintelligent, passive objects. Persons are presented as riders on their own body’s behavior train, unable to steer, hop off, or avert looming disaster. Stopping or diverting these processes therefore requires others to intervene.

Scientifically, however, people are not machines. While some parts of human beings are indeed automated (e.g., the operation of the liver, contractions of the heart muscle), to claim that stimulus-response/cause-effect relationships explain human social action requires that we believe that the human voluntary nervous system does not exist. Human beings—people—are built to be active agents, not passive objects whose behavior is the value of a variable or the result of a functional equation.

Since our voluntary nervous system allows us to use language independently of the functioning of our brains, for instance, our social and cultural actions are just that—conceptually based, value-oriented actions—not chemical-electrical behaviors. In short, while we need a brain, chemicals, and electricity, these physical entities and processes do not tell us what to say or do. Evidence is all around us that we can and do act in ways other than what we were just doing, not because we are forced to, but because we want to. This is illustrated even in the most stressful of human activities, such as combat, where a Marine can decide that his target is not an enemy and so stop pulling a trigger he is already in the process of pulling, as USMC Sergeant Samuel J. Stevens (2008) reveals he did in his Marine Corps Gazette article, “Psychology of the Good Guys.”

If this is true, then human social life—one example of which is threatening an organization using one’s status as a trusted insider—can only be realistically understood in terms of ethics, values, choices, decisions, interpretations, understandings, and so on. So, any particular ‘behavior’—“shaking” for instance—can be said to be characteristic of a person in the midst of stealing secrets, but it cannot be said to be indicative. The latter term applies to real patients: people who are suffering from biological malfunction, Parkinson’s Disease for example. Shaking because one suffers from Parkinson’s Disease is indeed ‘behavior,’ but shaking because one is stealing secrets is a personal, embodied judgment about the moral risk of the action being undertaken.

To think that shaking is the same kind of thing in both cases—despite both relying on the same nervous systems—is to think there is no difference between an epileptic seizure and a DoD leader signing a new policy statement…which is simply absurd. Such absurdity would require us to take the phrase “He’s got a screw loose” literally. Authorities sent Las Vegas shooter Stephen Paddock’s brain to a neuropathologist to check for such a “loose screw” after being frustrated by the lack of a behavioral indicator that supported any of the usual trigger-mechanism-behavior explanations for mass murder.

The call in this essay to replace “behavioral indicator” with “understanding action as ethical decision-making” is designed to avert self-defeat in the CIT effort. What kind of self-defeat? If people are machines, and the source of problematic behavior is inside of them, unknown to them, and unchangeable by them, then it is only ever the individual that appears as the target of organizational action. This amounts to the elimination of the reality that people act in order to be (in Prince Hamlet’s terms) a certain kind of person, usually one who counts among a community of others who recognize him or her as such. A good example of this is the CIT educational “placemat” that lists all the behaviors that allegedly indicate the “inherent” trustworthiness (or not) of an individual. No complementary version addressing the “inherent” decency and respectfulness (or not) of DoD managers, leaders, and commanders is anywhere to be found…because there is no such thing as interactive, social life according to the machine model.

This exposes an “us and them” socio-political distinction built into current DoD CIT efforts. Commonly heard calls to “do the right thing” and to “see something [and so] say something” place the burden of CIT on individuals—predominantly junior members of the organization—to act rightly and report what they see. Yet the DoD conveys a lack of trust in them by, for instance, basing CIT efforts on a model of “them” as unintelligent machines who need to be watched because, at any point, given any of a range of behavioral indicators, they may need to be forced to do the right thing. Is it any wonder that such calls are more often met by silence than by reporting? What sort of community cohesion could be expected to emerge in an organization where anyone’s loyalty can (and must) be questioned, given that the machines do not know what they are doing? And if we take a moment to consider the problematic track record of DoD components in acting appropriately, consistently, fairly, and justly in response to reports of problematic behavior—a reality that is simply missing as a contextual consideration for how to make CIT efforts effective—organizational calls to do the right thing likely sound to community members like institutional double-speak.

Former Deputy Director of the NSA Chris Inglis called out shared values—a sense of community oriented on shared ethical commitments—as a key quality missing from the NSA, an absence that permitted, if not invited, Edward Snowden to leak classified information. To continue to treat people as objects, directly or indirectly, obviously or subtly, through the use of the concept of behavioral indicators is a recipe for self-defeat. The lesson is plain: drop behavioral indicators as a primary basis for CIT and focus on ethics, because the values that people—organizational managers and leaders included—live, promote, suggest, demand, and so on are the source of both being, and not being, an insider threat.

Frank Tortorello, Jr. has supported the United States Marine Corps and the National Reconnaissance Office as a contracted social scientist. The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army War College, the U.S. Army, or the Department of Defense.

Photo Description: Man Wearing White Hooded Jacket

Photo Credit: Photo by sebastiaan stam from Pexels

3 thoughts on “‘BEHAVIORAL INDICATORS’: AVOIDING SELF-DEFEAT IN COUNTERING INSIDER THREATS”

  1. I agree with this author in part. A former U.S. Air Force enlisted man told of being stationed in Puerto Rico, where there was a beach for the enlisted personnel to enjoy on their time off. He was assigned as driver for a visiting general. They drove down to the beach and the general saw all the enlisted personnel there. He called an officer to order all the enlisted personnel off the beach. After that was done, the general spent two hours on the beach by himself.

    Part of the problem is that when senior officers and generals view the service as their private, personal country club, this behavior can spread to the lower ranks.

    The Honest Truth About Dishonesty: How We Lie to Everyone – Especially Ourselves by Dan Ariely discusses how bad behavior spreads.

    Of course, one should look at what other contacts and friendships one has. Bad behavior can spread. See

    How Behavior Spreads: The Science of Complex Contagions by Damon Centola

    https://www.the-american-interest.com/2018/08/03/plucking-out-the-heart/

    Of course, the same mechanisms that spread bad behaviors could be used to spread better behaviors. One problem: does the DoD have a handle on the non-obvious relationships that can influence behavior?

    Finally, any system used should be tested empirically. As Tetlock and Gardner point out, forecasters should be challenged to attach a probability to each forecast an indicator generates; one can then calculate the accuracy of those forecasts. This might provide a method for sorting out the methodologies (a minimal sketch of such scoring follows the reference below).

    Superforecasting: The Art and Science of Prediction by Philip E. Tetlock and Dan Gardner
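
    Such scoring is straightforward to compute. Below is a minimal Python sketch of the Brier-style scoring Tetlock and Gardner describe, assuming purely hypothetical forecast probabilities and outcomes (the two “methodologies” and all the numbers are illustrative inventions, not drawn from any real CIT program or data):

        # Brier score: mean squared difference between a forecast probability
        # and the outcome (1 = event occurred, 0 = it did not). Lower is
        # better; always guessing 50% earns 0.25.
        def brier_score(forecasts, outcomes):
            """Average of (p - o)^2 over all forecast/outcome pairs."""
            return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

        # Hypothetical example: two methodologies assign probabilities that a
        # flagged individual is an actual threat; `outcomes` is ground truth.
        outcomes = [0, 0, 1, 0, 1]
        method_a = [0.9, 0.8, 0.7, 0.6, 0.8]  # flags nearly everyone
        method_b = [0.2, 0.1, 0.8, 0.3, 0.7]  # more discriminating

        print(f"Method A Brier score: {brier_score(method_a, outcomes):.3f}")  # 0.388
        print(f"Method B Brier score: {brier_score(method_b, outcomes):.3f}")  # 0.054

    The methodology with the consistently lower score is the better calibrated one, which is exactly the kind of empirical sorting suggested above.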

  2. A great and clarifying read. An innovative approach that integrates ethics into DoD personnel development to reduce insider threats does make sense. I was surprised to realize how much the behaviorist approach is used to educate people as machines, like Pavlov’s dogs. A highly individualistic behaviorist approach, with only weak ties to the ethics of social organizations, makes a poor foundation for state security. The essay points out a serious weakness that explains why individuals working for the DoD may lean toward becoming insider threats. Moving forward with the author’s plain lesson would help develop a better, more coherent society: a moral society that walks its founding principles.

  3. John passed away on Tuesday. Just want you to know. GIM, DAIN, FODDER. He is on your friend list in the steam of it all. I read this and I believe you get OUT what went in. Some people are built a certain way, others think they are built the same way. They are wrong. Love ya cousin.
