Interpreting a reconnaissance photo during the Cuban missile crisis, 1962

In January 2022, US President Joe Biden declassified an extraordinary trove of intelligence to warn that Russia was about to invade Ukraine. But Ukrainian President Volodymyr Zelensky didn’t believe him. Zelensky complained to reporters that the US was hyping the threat and hurting his economy. “I’m the president of Ukraine, I’m based here and I think I know the details deeper than any other president,” he said. Within a month, 100,000 Russian troops attacked on three fronts.

Zelensky has since become a heroic figure of courage and leadership. But he’s also human – and his inability to see the invasion coming offers a powerful reminder of the challenges of intelligence analysis. Everyone knows, but we often forget, that predicting the future is hard. Experts loaded with data are often wrong. Doctors offer incorrect diagnoses and investors back the wrong companies. Intelligence is even harder, because data is scarce and adversaries are working hard to deceive. In foreign policy, the world out there is difficult to anticipate.

But that’s only part of the story. The other part – which often gets overlooked – is us: human brains are not wired to assess the future well. Psychology research finds that people rely on mental shortcuts to process the information bombarding them each day. These shortcuts are called cognitive biases. They enable faster thinking and efficient decision-making, like skimming Yelp reviews to pick a restaurant or voting a party-line ticket rather than carefully assessing each candidate for every local position. But cognitive biases are also prone to error, tricking our brains into thinking likely events are unlikely and vice versa.

Certain biases have played an outsized role in intelligence, from China’s entry into the Korean War and the Cuban missile crisis to weapons of mass destruction (WMD) in Iraq and Ukraine’s unwillingness to believe Russia would invade. Events seemed like surprises, even when they shouldn’t have been.

Optimism bias and probabilities

The most likely explanation for Zelensky’s dismissal of US intelligence warnings was that he didn’t want them to be true. He suffered from optimism bias, or a tendency to view information through the lens of wishful thinking. He’s got plenty of company. Optimism bias is everywhere. Researchers have found that people believe good events will happen to themselves more than to others, that their own investments will outperform the average, and that their favourite sports team has a higher chance of winning than it actually does. Optimism distorts the assessment of information.

Politicians can be highly susceptible to proposals claiming favourable estimates of success, whether it’s humanitarian intervention in Somalia, war in Libya or renewed sanctions against Iran. The leaders of nearly every country involved in the First World War predicted a swift victory. “Give me 700,000 men and I will conquer Europe,” said General Noël de Castelnau, chief of staff for the French army at the outbreak of the war. Castelnau got 8.4 million men – more than ten times the number he said he needed – but Europe wasn’t conquered. It was decimated.

Humans are not objective, no matter how hard we try to be. Researchers have found that people are wedded to their initial beliefs and will cling to them even in the face of contrary evidence. In Roman times, the imperial physician Galen was so confident in his remedies that no evidence could prove him wrong. He wrote, “All who drink of this treatment recover in a short time, except those whom it does not help, who all die. It is obvious, therefore, that it fails only in incurable cases.” Francis Bacon warned of the dangers of confirmation bias in 1620, writing, “The human understanding when it has once adopted an opinion . . . draws all things else to support and agree with it.”

We are also terrible with probabilities. Ever wonder why people are more afraid of dying in a shark attack than a car crash, even though fatal car accidents are about 60,000 times more likely? It’s not because people are stupid. The problem is availability bias: people naturally assume something they can easily recall is more likely to occur. Vivid, horrifying events (like shark attacks in the news) are generally easier to remember than everyday occurrences (like car accidents) even though they’re statistically less likely.

And intelligence analysts are human, like everyone else. We can see the powerful grip of recent experience in how different intelligence analysts viewed information about whether “the Pacer” walking around a suspicious Pakistani compound in 2011 was Osama bin Laden. Estimates about whether they had found the al-Qaeda leader ranged from 40 to 95 per cent. Those who had lived through the intelligence failure of overestimating Saddam’s WMD programs were more sceptical of the intelligence and issued lower probability estimates that the Pacer was bin Laden, while those coming off recent counterterrorism successes issued more optimistic assessments. What you predict depends on what you’ve experienced. It was only after US Navy SEALs raided the compound that the evidence was in: they had found bin Laden after all.

Thinking like the enemy

Good intelligence analysis also involves understanding how others think. Here, too, psychology research reveals why it’s so hard to get inside the minds of others. One of the most challenging problems is called the fundamental attribution error, which is the tendency to believe that others behave badly because of their personality, while we behave badly because of factors beyond our control. Or put more simply, people often jump to blaming others while letting themselves off the hook. Drivers think that someone cutting them off must be a jerk, rather than wondering if there’s an emergency requiring them to drive that way.

In foreign policy crises, one country often overestimates the hostile motives of another, or underestimates how threatening its own actions could appear, or both. In the First Gulf War and the Iraq War, the US and Iraq each largely misunderstood the other’s view. Iraq underestimated the United States’ risk tolerance following the Cold War and 9/11, and the US failed to see that Saddam was far more concerned about the Iranian threat than the American one. In both 1990 and 2003, an incomplete picture of the other side’s intentions and motives led to costly conflict.

Trying to think like your adversary can backfire too, thanks to a process called mirror-imaging, which is the tendency to estimate how someone else will behave by projecting what you would do in the same situation. Framing your assessment with “If I were Xi Jinping,” or “If I were a Russian intelligence officer” is one of the greatest pitfalls in intelligence analysis and foreign policy decision making.

The Cuban missile crisis of 1962 saw one of the most dangerous instances of mutual mirror-imaging, leading the world to the nuclear brink. American analysts failed to weigh the costs and benefits of a nuclear missile deployment to Cuba through Soviet eyes. And the Soviets appeared to make the same mistake when estimating how the US would react to its secret gambit. Historian Arthur Schlesinger Jr. called the Cuban missile crisis “the most dangerous moment in human history”. Why? Because each side believed the other thought like they did.

Improving analysis to prevent nuclear catastrophe isn’t just a matter of history. Great power competition is back. Russia and China are trying to rewrite the international order along authoritarian lines. Their nuclear arsenals are expanding and modernising at alarming rates. And war is raging once again in the heart of Europe. Today, Nato is grappling with how it can help Ukraine defeat Russia without provoking World War III. The stakes are high, the policy challenges are complex, and intelligence analysis is pivotal to success. What have we learned about how humans think and how we can do better?

After the Iraq WMD intelligence failure, the US Intelligence Community (IC) launched several initiatives to improve analysis. One was commissioning a study by academics from different fields to examine ideas in the social sciences and recommend ways for the IC to adopt them. I was one of the experts on the study, and I jumped at the chance. One of our recommendations was that the Intelligence Community needed to test its analytic methods in a rigorous way to see whether, how and why some approaches worked better than others. There are about 20,000 analysts in US intelligence agencies making thousands of estimates about geopolitical developments every year. And yet these agencies had never assessed which forecasts were right, which weren’t, and why.

Inside the mind of a superforecaster

Several intelligence officials ran with the idea of testing. They approached one of our panel members, Phil Tetlock, a leading scholar on expert judgment. Tetlock had recently finished a 20-year study examining how well experts forecast the future – the first large-scale scientific experiment on expert judgment ever conducted. In it, Tetlock asked a group of nearly 300 experts to make predictions about economic and political issues of the day: nearly 82,000 predictions in all. His main finding was depressing. The average expert was about as accurate as a dart-throwing chimpanzee – no better than random guessing. The chimpanzee discovery made headlines. Less noticed was Tetlock’s other finding: predicting the future wasn’t just lucky guessing. Some people did much, much better than average. He called them superforecasters.

In 2011, the IC launched a forecasting tournament to explore what makes some people better forecasters than others. Over the next four years, five teams competed, answering questions each morning that ranged from the price of oil at a given date to the prospects of conflict between China and Japan in the East China Sea. There were nearly 500 questions in all. The winner would be the team with the best accuracy – assigning high probabilities to things that happened and low probabilities to things that did not. Tetlock and his colleague Barbara Mellers put together a team of thousands of volunteers called the Good Judgment Project. The University of Michigan and MIT each fielded a team. The Intelligence Community did, too. The fifth team was a control group. “By varying the experimental conditions, we could gauge which factors improved foresight, by how much, over which time frames,” Tetlock later reflected.
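Accuracy of this two-sided kind is typically graded with a Brier score: the squared distance between a probability forecast and what actually happened, so a confident forecast is rewarded when right and penalised heavily when wrong. Here is a minimal sketch in Python; the function names and sample numbers are illustrative, not data from the tournament itself:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast (0-1) and a 0/1 outcome.
    Scores events that happen AND events that do not: saying 0.9 is
    penalised heavily if the event fails to occur."""
    return (forecast - outcome) ** 2

def mean_brier(forecasts: list[float], outcomes: list[int]) -> float:
    """Average Brier score across many questions; lower is better, 0 is perfect."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative example: a sharp, well-calibrated forecaster
# versus a hedger who always answers 50/50.
outcomes = [1, 0, 1, 1, 0]
sharp = [0.9, 0.1, 0.8, 0.7, 0.2]
hedger = [0.5] * 5

print(mean_brier(sharp, outcomes))   # lower score: rewarded for confidence when right
print(mean_brier(hedger, outcomes))  # always 0.25: hedging is never punished, never rewarded
```

The design point the tournament exploited: a scoring rule like this makes perpetual hedging a mediocre strategy, so winning requires genuine, calibrated confidence about both what will and what won’t happen.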

By the end of the second year, the results were staggering: Tetlock’s volunteer team was winning by a lot. It outperformed the university teams from Michigan and MIT by such large margins (30-70 per cent) that both schools were dropped from the competition. Tetlock’s team also beat professional intelligence analysts with access to classified data.

The best of the best on Tetlock’s team, the superforecasters, weren’t geniuses with multiple PhDs. They were artists and scientists, students and retirees, “Wall Streeters and Main Streeters”, as Tetlock put it. Tetlock found that the secret to success wasn’t sky-high brilliance or a Jeopardy!-like mastery of news and facts. It wasn’t what superforecasters knew. It was how they thought. Superforecasters are far more open-minded, curious, careful and self-critical than the average person. They believe that reality is complex, nothing is certain or inevitable, and that ideas should always be subjected to evidence, testing and updating. They have what Tetlock calls “Dragonfly eyes” – the ability to see a problem with the 30,000 lenses of a dragonfly’s eye. While high-functioning groups harness the wisdom of different perspectives that each member brings, superforecasters can harness the wisdom of different perspectives from within, all by themselves.

Perhaps most encouraging, Tetlock and his colleagues have since found in their experiments that superforecasting skills can be learned through training, practice, measurement and feedback. One basic 60-minute tutorial improved accuracy by ten per cent throughout a tournament year. That may not seem like a lot, but imagine having a ten per cent better chance of promotion than every other candidate, or a financial adviser who could guarantee a ten per cent better return on your investments. When it comes to forecasting important things, ten per cent can be larger than you think.

The future of intelligence

Humans are also getting help from machines. Advances in artificial intelligence offer potentially game-changing capabilities to process volumes of data at radically new speeds and scales, identify patterns across data sooner and more easily, and empower human analysts in new ways.

In 2020-21, I served on an expert task force that examined future intelligence challenges and opportunities arising from emerging technologies. The task force’s research included dozens of interviews and deep-dive sessions with technology leaders and intelligence experts across the US government, private sector and academia. We found that artificial intelligence (AI) and its associated technologies (such as cloud computing, advanced sensors and big data analytics) could transform every core intelligence mission.

We also found that the greatest promise of AI comes from augmenting human analysts, not replacing them. By automating mundane tasks that absorb tremendous amounts of time, AI has the potential to free human analysts to concentrate on higher-level thinking that only humans can do well – like assessing an adversary’s intentions, considering broader context, examining competing hypotheses and identifying what evidence may be missing.

Harnessing the power of AI to improve intelligence analysis won’t be easy. Even the most advanced AI algorithms have severe limitations, including the inability to explain how they reach results and the tendency to fail unexpectedly in the face of seemingly small changes – like mistaking a picture of a sloth for a racing car when imperceptible changes are made to the image. AI also raises important ethical concerns. And adoption requires overcoming technical, bureaucratic and cultural hurdles.

The future of intelligence analysis can be better than the past. Our greater knowledge of cognitive biases, combined with help from AI and training in superforecasting, offers tremendous promise. Notably, US intelligence agencies correctly anticipated Russia’s invasion of Ukraine and issued public warnings, declassifying and sharing intelligence in real time to an unprecedented degree. The US Intelligence Community stuck by its assessment, even when Ukraine’s president doubted it.

Physicist Richard Feynman once said that analysis is how we try not to fool ourselves. He’s right. Estimating the future will always be a hazardous occupation, especially in intelligence. The world outside is hard to predict, and the world inside our own minds is prone to error.

But failure is not inevitable. Trying not to fool ourselves is a worthy endeavour, and evidence suggests we are likely to get better at it over time. We must continue to learn more about the processes and practices that make some individuals and groups more accurate than others, and embrace new technologies that enhance insight.

This is an edited extract from Amy Zegart’s “Spies, Lies and Algorithms” (Princeton University Press).

This piece is from the New Humanist autumn 2022 edition.