It is useful to remind ourselves regularly of the capacity of human beings to persist in stupid beliefs in the face of significant, contradictory evidence.
On 22 June 1941*, the Third Reich launched Operation Barbarossa, a massive invasion of the Soviet Union. The assault, named after the red-bearded (“barba rossa” in Italian) German crusader and emperor Frederick I, involved over 3.5 million Axis troops, killed millions, and almost destroyed the Soviet Union. Although the attack is sometimes called a “surprise,” this is misleading. It is more accurate to say that Barbarossa surprised the one person who could not afford to be surprised: Josef Stalin. How could a military operation involving about 150 divisions have found its political target so unprepared?
Life is full of the unexpected. The term “black swan event” describes surprises of an especially momentous and nasty type. It was popularized by Nassim Nicholas Taleb in his 2007 book of the same name, in which Taleb argued that black swan events have three characteristics: “rarity, extreme impact, and retrospective (though not prospective) predictability.” In recent years, the concept of black swan events has gained currency in political, military, and financial contexts.
The black swan has a venerable history as an illustration of the ancient epistemological problem of induction: simply stated, no number of observations of a given relationship is sufficient to prove that a different relationship cannot occur. No amount of white swan sightings can guarantee that a different-colored swan is not out there waiting to be seen. The discovery of black swans by European explorers in Australia has proven an irresistible metaphor for the problem of induction.
When writers invoke the black swan today, they usually refer to Taleb’s meaning, not the older, philosophical sense. This is unfortunate, because in emphasizing the importance of anticipation, Taleb’s concept of the black swan ignores key facts about history and how it is understood by those who live it. I highlight two problems of note.
First, the list of things that can happen but have not happened yet is long. It is, in fact, infinitely long. For each thing that exists (e.g., cats) we can come up with more variations that do not, to our knowledge, exist (flying cats, cats with gills, six-legged cats, and so on). It is fun to think about all the cataclysmic, history-altering events that might happen, but thinking about those things in a way that appropriately organizes them and informs strategy is extremely hard.
Second, events that we think of in hindsight as tremendous shocks that came without warning have often taken months, years, or even longer to unfold. In the time between the weak signals of change and the onset of a deeper crisis, there are often opportunities to prepare and adapt. In considering how we often collapse time when thinking about the past, I am reminded of a hilarious exchange from the movie Fletch:
Doctor: You know, it’s a shame about Ed.
Fletch: Oh, it was. Yeah, it was really a shame. To go so suddenly like that.
Doctor: He was dying for years.
Fletch: Sure, but… the end was very… very sudden.
Doctor: He was in intensive care for eight weeks.
Fletch: Yeah, but I mean the very end, when he actually died. That was extremely sudden.
The events of the past can seem much more sudden in hindsight.
The real challenge with black swan events is not accurate anticipation, but timely recognition. While it can be useful to imagine what might happen, we should focus more on recognizing what is happening as quickly as possible, and limiting the damage through timely learning.
The Black Death took half a decade to advance from Sicily to the Baltic states. More recently, the 2008 financial crisis is already remembered as a “shock” event that surprised global finance. The truth, however, is more nuanced, and more depressing. Notable observers of the system (including Dr. Taleb) recognized serious problems long before the fall of Lehman Brothers in September 2008 and the onset of a full-blown banking crisis. Yet this was mostly recognition, not prediction. The clearest early signal of big trouble in the mortgage market came in the March–April 2007 collapse of New Century Financial, an originator of risky mortgages, almost a year and a half before Lehman’s end, and a year before Bear Stearns was rolled up. What happened in the meantime? In All the Devils Are Here, Bethany McLean and Joe Nocera describe two embattled Bear Stearns asset managers who provide a microcosm of the wishful thinking that made the crisis much worse than it needed to be. In the face of mounting evidence that their investment strategy was failing, “the two men simply couldn’t bring themselves to believe that the picture was as dire as the model suggested.”
Returning to Barbarossa, the German military spent months building up its forces prior to the attack. Stalin received two conflicting explanations for this. The first explained the accumulation as part of a planned German invasion, and came from elements of Stalin’s own intelligence apparatus. Much of this intelligence provided minute and accurate details about the coming attack. For example, in The Road to Stalingrad, John Erickson writes that at the beginning of June, Richard Sorge, a Soviet spy in Japan, “sent Moscow an astonishing compilation of data about ‘Barbarossa’: the objectives, the ‘strategic concepts,’ the strength of the German troops to be committed, and the opening date for the attack on the Soviet Union.”
The second explanation — offered by (surprise!) the Germans themselves, and substantiated by the more sycophantic parts of Stalin’s intelligence system — argued that the German buildup in the East was simply part of preparations for an assault on the United Kingdom. According to this view, the Germans were moving forces out of reach of British bombers and attempting to deceive British leadership as to their true intentions in what Erickson calls “a huge feint.” Conveniently, this second theory for the German military buildup undermined the first explanation as the work of (again, in Erickson’s words), “agents provocateurs aiming to embroil Germany and the Soviet Union in war.”
Stalin went with the second explanation, largely because it was what he wanted to believe. He created a system of intelligence categorization that made him more and more certain of his rightness as the attack grew closer – sources of intelligence that contradicted what Stalin wanted to believe were classed as “unreliable,” while sources that confirmed his views were “reliable.” We now know the magnitude of his error. There is an enduring lesson here.
In Frank Tashlin’s classic children’s book, The Bear That Wasn’t, a bear awakes from hibernation and, exiting his cave, finds himself in a huge factory that has been built over his forest home. Encountering a foreman, the bear is told to get back to work, to which the bear replies, “I don’t work here. I’m a bear.” Incredulous, the foreman says, “You’re not a bear. You are just a silly man who needs a shave and wears a fur coat.”
Aside from its entertainment value, The Bear That Wasn’t provides a humorous example of a profound philosophical problem: When the facts do not match our strong theories for how the world works, we prefer to change the facts. How can we more quickly recognize the unexpected for what it really is? The foreman (along with various executives that the bear meets) has a simple belief: No bears are in factories. (Stalin’s version: No German forces are preparing to attack us.)
If you remember your logic from school, you might recognize this statement as resembling a “major premise” of a syllogism, a deductive argument used to derive conclusions that necessarily follow from two premises. Major premises generally take the form of universal statements, e.g., “All men are mortal,” or, “No pigs can fly.”
If we have a theory of factories that says (among other things), “No bears are in factories,” the theory is based on our experiences observing who is actually in a factory (i.e., human workers). It is an inductive theory: every observation to date has been of human workers. We could not arrive at such a theory independent of our accumulated experience. In addition, the more workers we see, the more certain we become (in terms of probability) that all workers are human (and none are bears), but we will never, ever observe every possibility.
Although the leap from a probabilistic statement grounded in induction to a universal statement treated as deductively certain is unjustified, we often make it anyway. Our beliefs then shape how we treat the evidence. For example, prior to seeing a non-white swan, we develop the following syllogism:
- Major Premise: All swans are white.
- Minor Premise: That bird is a swan.
- Conclusion: That bird is white.
When we see a black swan, if we are unemotional, Spock-like empiricists, we will immediately recognize that “if swan, then white” is false. That is, we will know that our conclusion, “that bird is white,” is false based on the observation that the bird is both black and a swan. Believing our minor premise to be true while our conclusion is false, we must conclude that the major premise is false, and reject it.
Here is where human experience departs from the clean abstractions of logic. We are not Spock. We have emotional attachments to our beliefs. When we observe a non-white bird that in every respect but color looks like a swan, we can conclude two different things. First, we can accept the notion that it is a swan (even though it is not white), and conclude that our original theory of swans was false. But if the whiteness of swans has become an article of faith for us, we are far more likely to conclude that the bird is not a swan, despite its other similarities. After all, “If not white, then not a swan,” is a legitimate derivation of our original belief that “all swans are white.”
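This fork in the road can be made concrete in a few lines of Python. This is a toy sketch, not a model of real belief revision; the function names and belief strings are invented for illustration.

```python
# Two ways to resolve a contradiction between a universal belief and
# an observation. All names here are illustrative.

def empiricist_update(beliefs, observation):
    """Spock-like response: if an observation contradicts a universal
    belief, reject the belief (modus tollens on the major premise)."""
    revised = set(beliefs)
    if "all swans are white" in revised and observation == ("swan", "black"):
        revised.discard("all swans are white")
        revised.add("some swans are not white")
    return revised

def motivated_update(beliefs, observation):
    """Emotionally attached response: keep the belief intact and
    reclassify the evidence instead ('if not white, then not a swan')."""
    if "all swans are white" in beliefs and observation == ("swan", "black"):
        observation = ("not-a-swan", "black")  # the facts get 'changed'
    return set(beliefs), observation

beliefs = {"all swans are white"}
sighting = ("swan", "black")

print(empiricist_update(beliefs, sighting))
# The motivated reasoner's beliefs survive untouched; the bird does not.
print(motivated_update(beliefs, sighting))
```

The two functions face the same contradiction; only the empiricist pays the price in revised beliefs, which is exactly why the motivated path is so tempting.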
I suggest three maxims to help avoid dangerous failures of recognition, and speed learning when unexpected things happen.
First, everything we believe about the world is provisional – “serving for the time being.” Adding the words “so far” to assertions about reality reminds us of this.
Second, unjustified certainty is very costly. The greater your certainty that you are right when you are wrong, the longer it will take you to recognize and incorporate new data into your system of belief, and to change your mind. General Douglas MacArthur was a confident man, and this confidence usually served him well, such as when he undertook the risky landings at Incheon in the Korean War. Yet MacArthur’s confidence betrayed him when China entered the war. He was certain that this would not happen, and his certainty delayed his recognition of a key change, exposing forces under his command to terrible risk. Confidence in your beliefs is valuable only insofar as it changes your choices (e.g., leading you to choose A rather than B). Beyond that point, additional confidence has increasing costs. (Philip Tetlock has produced outstanding work on this topic.)
Third, pay special attention to data that is unlikely in light of your current beliefs; it has much more information per unit, all else equal. In this sense, information content is measured as the potential to change how you think about the world. Information that is probable in light of your beliefs will have minimal effects on your understanding. Improbable information, if incorporated, will change it.
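One way to make “more information per unit” concrete is Shannon’s notion of surprisal (−log₂ p) combined with Bayes’ rule: evidence that is improbable under your current beliefs carries more bits and, if incorporated, moves your beliefs further. The probabilities below are arbitrary numbers chosen for illustration.

```python
import math

def surprisal_bits(p):
    """Shannon surprisal of an event with probability p, in bits."""
    return -math.log2(p)

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A coin flip carries 1 bit; a 1-in-1024 event carries 10 bits.
print(surprisal_bits(0.5))       # → 1.0
print(surprisal_bits(1 / 1024))  # → 10.0

# Evidence you expected barely moves a strong prior...
print(round(bayes_update(0.95, 0.9, 0.8), 3))   # → ~0.955
# ...while evidence that is improbable under that prior moves it sharply.
print(round(bayes_update(0.95, 0.05, 0.9), 3))  # → ~0.514
```

The asymmetry is the point: the observation you least expected is precisely the one with the greatest potential to change your mind, provided you let it in.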
It is doubtless correct that many awful things that have not happened before will yet happen. Foresight regarding such events would be nice. It would, however, be nicer still if we could recognize more quickly what is happening right in front of us.
*Coincidentally, 76 years ago on this day.
Andrew A. Hill is the Chair of Strategic Leadership at the U.S. Army War College, and the Editor-in-Chief of WAR ROOM. The views expressed in this article are those of the author and do not necessarily reflect those of the U.S. Army, or U.S. Government.
Photo Credit: Max Alpert / RIA Novosti