When Michael Chertoff took over 11 months ago as the secretary of the Department of Homeland Security, he vowed that the department would adopt a risk-based approach. No one really argued. It made sense to focus resources on the country's biggest risks.
But as the post-Katrina flood waters rose and then fell, it became clear just how daunting a task that would be. Why, wondered an enraged public, was the country so ill-equipped to deal with the situation in New Orleans, when the flooding of that city had long been one of the biggest risks identified by the Federal Emergency Management Agency? And how can citizens be sure that the country isn't neglecting preparations for whatever man-made or natural disasters might be next on the horizon?
CSO set out to explore those risks and get a read on what the government and private sector are doing to address them. In talking with the nation's top experts about the country's risk terrain, post-Katrina, what we found was not encouraging. Not only is there nothing approaching agreement about what those risks are, but experts have not even decided on some basic definitions.
"You don't have agreement as to 'What is risk?'" notes Randall Yim, former director of the Homeland Security Institute, a federally funded research center in Arlington, Va. "And if you asked 10 different people about the country's biggest risks, they would probably rank 10 different priorities, even within a region."
More confounding, those people's answers might be on completely different planes: While one person might rank an avian flu pandemic as a top risk, another might speak more philosophically about the possibility that in an effort to improve security, the United States will destroy the very liberty it is trying to protect.
In light of this lack of consensus, we decided to pick apart the three components that make up a standard risk equation—scenario, probability and consequence—and talk about how they each apply to the nation's risks. What we found is that the first component, scenario, challenges the imagination; and the second, probability, defies knowledge. But the third component of risk—consequence—is the outcome of the first two and the most important place to focus one's energy. Here's why.
Scenario: Why planners need to overcome the availability bias
In risk management, as in the real world, a scenario is simply what might happen: Terrorists might hijack commercial airliners and fly them into buildings. The power might go out in a huge swath of the United States. A Category 4 hurricane might hit a major metropolitan area. When experts talk about the country's biggest risks, like these, they're actually talking about low-probability, high-consequence scenarios—things that aren't likely to happen in any given year, but that, if they did, would be extremely damaging. The problem is, the human mind just isn't equipped to deal with this type of risk.
"If we handled low probability and high consequence well, nobody would buy a lottery ticket," says John R. Harrald, director of the Institute for Crisis, Disaster and Risk Management at The George Washington University.
Risk perception experts call this the availability bias. "Psychologically, if something hasn't happened, we focus on the low probability and say we aren't going to worry about it," Harrald says. "Once it has happened, we focus on the consequence. We plan for things we can either remember or imagine." For instance, when Harrald asks people in Maryland who live east of the Chesapeake Bay whether their house is in danger of flooding, the answer is often: "'It didn't flood in [Hurricane] Isabel.' That's their mark. It's a very natural reaction," he says.
This is why the National Oceanic & Atmospheric Administration (NOAA) says the United States has a "hurricane problem"—not because it gets hit by hurricanes, but because 80 percent to 90 percent of Americans who live in hurricane-prone areas have never experienced the core of a severe hurricane. "Many of these people have been through weaker storms," according to NOAA.gov. "The result is a false impression of a hurricane's damage potential. This often leads to complacency and delayed actions, which could result in the loss of many lives."
This availability bias is also why the country spent the past four years focusing on scenarios involving terrorism, after the so-called failure of imagination that preceded 9/11. What have politicians and citizens done for the past four years if not imagine terrorism?
And it's why many observers are now questioning whether the country should have spent that time planning not for terrorism but instead for other potential catastrophes. Like a deadly pandemic. Or a major earthquake. Or hurricanes.
"One of the key dangers is that people are always focusing on the last catastrophe," says Robert Muir-Wood, the London-based chief research officer for Risk Management Solutions, which does economic risk modeling for the insurance industry. "It's a big challenge to keep everything in perspective and not be biased by what has last happened."
A true risk-based approach means that, when all else is equal, one must override the availability bias and focus on the most likely future scenarios. Unfortunately, figuring out the probability of any given scenario raises its own set of complexities.
Probability: Why it works better for natural disasters than for terrorism
The probability component of risk is simply how likely it is that a scenario will come to pass. From this standpoint, Hurricane Katrina wasn't just predictable; it was almost inevitable. The Gulf of Mexico has been producing hurricanes since long before New Orleans was settled, and it was only a matter of time before a Category 4 storm hit the city. (Indeed, it's only a matter of time before a Category 5 storm hits the city directly; Katrina was downgraded shortly before landfall.)
When it comes to hurricanes, predictions abound—tangible, science-based predictions. Based on activity between 1944 and 1999, for instance, NOAA data shows that New Orleans has a 40 percent chance of getting hit by a hurricane or tropical storm in any given year. For Miami and Cape Hatteras, N.C., two of the riskiest locations in the United States, the probability is 48 percent. If travelers want to know the probability of a hurricane striking during the week of their Florida time-share, a NOAA FAQ will help them do the math.
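The kind of math that FAQ walks travelers through can be sketched quickly. This is a toy calculation of our own, assuming strikes are independent and spread evenly over a 26-week season; NOAA's actual tables use week-by-week climatology, so real numbers vary through the season.

```python
# Toy conversion of an annual hurricane-strike probability into a rough
# one-week probability. Assumes strikes are independent and the season's
# risk is spread evenly over 26 weeks -- a simplification for illustration.

def weekly_strike_probability(p_annual: float, weeks_in_season: int = 26) -> float:
    """Chance of at least one strike during any single week of the season."""
    # Spread the season-long chance of *no* strike evenly across the weeks:
    p_no_strike_in_week = (1.0 - p_annual) ** (1.0 / weeks_in_season)
    return 1.0 - p_no_strike_in_week

# Using the 48 percent annual figure cited for Miami and Cape Hatteras:
p = weekly_strike_probability(0.48)
print(f"~{p:.1%} chance in any given week")
```

Under these assumptions the answer works out to roughly a 2.5 percent chance in any one week—small enough to feel ignorable, which is exactly the availability-bias trap described above.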
Floods, too, are predictable. That's why the Army Corps of Engineers creates detailed flood maps that delineate areas likely to flood every 100 years or every 500 years. (Although even those predictions, as Muir-Wood notes in "Three Not-to-Miss Risks," Page 33, may be called into question.) Flood modeling is why the New Orleans levees were built to certain heights. It's also why property owners in some areas have to purchase flood insurance to obtain financing.
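The flood-map figures compound in a way many property owners underestimate: a "100-year" floodplain means a 1 percent chance in any given year, not one flood per century. A short sketch of our own shows how that accumulates over a 30-year mortgage:

```python
# A "100-year flood" is a 1 percent annual probability, not a once-a-century
# event. Over the life of a mortgage, the cumulative odds add up -- one
# reason lenders in mapped floodplains require flood insurance.

def chance_of_flood(annual_probability: float, years: int) -> float:
    """Probability of at least one flood over the given number of years."""
    return 1.0 - (1.0 - annual_probability) ** years

# 100-year floodplain, 30-year mortgage:
print(f"{chance_of_flood(0.01, 30):.0%}")
```

The result is about a 26 percent chance of at least one flood over those 30 years—better than one in four.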
Likewise, the spread of any given disease is relatively predictable. If experts know how transmissible a disease is and how people move around, they can model very effectively how quickly it will spread. If they also know the fatality rate of the disease, they can model the number of fatalities likely to occur across age groups. This is the kind of disease modeling that has scientists so alarmed about a scenario in which H5N1 avian influenza mutates and spreads easily from human to human.
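For illustration, the textbook SIR (susceptible-infected-recovered) model captures the mechanics the experts describe: transmissibility and recovery time drive the curve, and a fatality rate converts cases into deaths. Every number below is a placeholder of ours, not a real H5N1 estimate.

```python
# Minimal SIR epidemic sketch. R0 (transmissibility), recovery time and
# fatality rate are illustrative placeholders, not real disease parameters.

def sir_epidemic(population: int, r0: float, recovery_days: float,
                 fatality_rate: float, days: int) -> float:
    """Crude daily-step SIR run; returns an estimate of cumulative deaths."""
    beta = r0 / recovery_days      # new infections per infected person per day
    gamma = 1.0 / recovery_days    # fraction of the infected who recover each day
    s, i, r = population - 1.0, 1.0, 0.0   # one initial case
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return r * fatality_rate

# Invented numbers: a city of 1 million, R0 of 2, 5-day illness, 2% fatality.
deaths = sir_epidemic(population=1_000_000, r0=2.0, recovery_days=5,
                      fatality_rate=0.02, days=120)
print(f"~{deaths:,.0f} deaths in this toy scenario")
```

Even this crude sketch shows why modelers watch transmissibility so closely: with an R0 of 2, the toy outbreak eventually reaches most of the population, so the fatality rate does almost all the work in setting the death toll.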
It seems logical to presume that the Department of Homeland Security can and should plan mathematically for events of this type. But here's the rub: DHS has to simultaneously deal with natural disasters and domestic terrorism. And terrorism is an entirely different story. It's the old apples to oranges analogy. In fact, it's more like apples to, oh, snow tires.
"With things of a human origin, it's harder to objectively figure out the probabilities," says Baruch Fischhoff, professor of social and decision sciences at Carnegie Mellon University and current president of the Society for Risk Analysis. "To the best of my knowledge, people are not doing credible analysis on the risks facing this country, and if they were, who's to know that those are static probabilities?"
Terrorists learn and adapt. They can improve the probability that they will launch a successful attack in the United States through research and practice. Similarly, the United States can decrease the probability that a terrorist attack will occur—by shutting down air travel in the days after 9/11, for instance, or by creating a "no-fly" list that prohibits certain people from commercial flights. Given the range of human behavior, trying to pin down the resulting probability is next to impossible.
Figuring out probabilities is also intensely political. Just consider the debate about the role of risk in divvying up federal DHS funds. Uproar over the funding formula started when an early budget proposal would have given landlocked Wyoming seven times as much funding per capita as New York State, and it hasn't stopped yet.
Estimating terrorism probabilities in particular collides with politics. Retired Adm. John Poindexter's controversial FutureMAP proposal, part of the disbanded Total Information Awareness program, would have established a futures exchange where terrorism experts could "bet" on national security scenarios, thus yielding probabilities about which were considered the most likely. Critics railed against the program as a "terrorism betting parlor," and the project was canned.
This inability to figure probabilities opens the door for spending on terrorism to be driven not by logic, but by mainstream media, hysteria, local economics—and, of course, politics.
The good news, if you can call it that, is that pinning down such minuscule probabilities may not be worth the time anyway. "You have something called ALE: average loss expectancy," says security pundit Bruce Schneier, CTO of Counterpane Internet Security. "You multiply the probability of an event happening with the amount of damage you'll incur, and that'll tell you how much to spend on security. When you deal with events that have a very, very high damage [amount], and a very, very low probability of occurrence, you multiply infinity by zero and get whatever you want."
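Schneier's ALE arithmetic is easy to sketch, and so is its failure mode. The dollar figures below are invented for illustration:

```python
# The ALE calculation Schneier describes: annual probability of an event
# times the loss one occurrence would cause. All figures are invented.

def annualized_loss_expectancy(annual_probability: float, single_loss: float) -> float:
    """Expected loss per year -- a common ceiling for security spending."""
    return annual_probability * single_loss

# A mundane risk yields a usable number:
routine = annualized_loss_expectancy(0.10, 500_000)
# A catastrophic, near-zero-probability risk does not: nudge either guess
# by a factor of 10 and the "right" spending level moves by a factor of 10.
catastrophe = annualized_loss_expectancy(1e-9, 1e12)
print(f"routine: ${routine:,.0f}  catastrophe: ${catastrophe:,.0f}")
```

For the routine risk the formula behaves; for the catastrophe, neither input is knowable to within orders of magnitude, so the output justifies any budget—Schneier's "infinity times zero."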
All of which is why the more important question to ask may not be "What's next?"—although that's an enticing question—but "What's the set of potential consequences?" "I believe with great passion that everything is hard to predict," says Peter Bernstein, financial market guru and author of the best-seller Against the Gods: The Remarkable Story of Risk. "We never know what the future holds. When I say that to people, their heads go up and down, but they still act as if they know what the future holds.
"We don't know what's going to happen," Bernstein continues, "but there's a range of outcomes out there. The ones that may make a difference are the ones you really have to make preparations for. If someone is walking around my house with a lit match, I have to worry about it. It doesn't mean my house is going to burn down, but if it does, it's going to be a disaster."
Tim Williams, CSO of Nortel Networks, says he simply wouldn't want to discard any risk as a low-probability one. "In this day and age, it's hard to determine what's a low-probability event, given what we've seen over the past years," he says. "When you see all the issues that have occurred, such as war and natural disasters, the tsunami and all the rest—those were all low-probability, but they happened. I think our whole concept of recognizing what are low-probability and high-impact events has substantially changed. The universe of what can happen is much larger. We've had our minds opened."
Consequence: When probability fails, focus on universal recovery planning
Consequences were in the headlines for weeks after Hurricane Katrina. Yes, Katrina was a natural disaster, one that broke trees like twigs and tossed cars like coins. But everything that happened after the winds let up was man-made. The levees failed—a consequence of the way they were built and maintained. Eighty percent of the city flooded—a consequence of having positioned homes and businesses below sea level on land that relied on the levees to stay dry. Basic infrastructure failed—sometimes because critical systems or backup generators were placed at ground level. As many as 60,000 people were left stranded at the Superdome for days—a consequence of critical personnel leaving the city to care for their own families, and of confusion over where state and local responsibilities ended and federal ones began.
No one could have stopped the storm, of course, but the country could have better controlled the consequences. That's why experts say that instead of playing a game of pin-the-probability-on-the-scenario, a more helpful approach is to mitigate the consequences of whatever happens, through good preparation.
"It's fun to think about low-probability, high-impact things," says Dave Kent, CSO of Genzyme (speaking like a true CSO). But ultimately, he says, it doesn't really matter which specific event punches out your data center, keeps your employees from getting to work, disrupts communications or electricity, or causes a pandemic.