


CSO Disclosure Series | Reporter’s Notebook: The United States of TMI

Feb 25, 2008 | 22 mins
CSO and CISO | Data and Information Security | Physical Security

Lead paint in toys. Brain-eating amoeba. Identity theft. Drowning in sand. We know more than ever about the risks all around us. Do we know what disclosing them all is doing to us?

I’D LIKE TO SAY that the writing that had the most profound effect on me this year was some classic novel I picked up in my spare time, but in fact it was an Associated Press article. Last June, AP Medical Writer Mike Stobbe wrote a fascinating, harrowing story about large holes dug in beach sand that can collapse “horrifyingly fast” and cause a person in the hole to drown. Stobbe described one case when a teenager ran back to catch a football, fell in a hole and disappeared under a cascade of sand. When his friends approached to help, more sand caved over him. He was buried for at least fifteen minutes and eventually suffocated. Stobbe discloses in the article that, while they’re virtually unheard of, collapsing sand holes are actually more common than “splashier threats” like shark attacks.

Unfortunately, I read the story right before going on vacation with my family, to the beach. Sometimes, the story trespassed on my mind. I found myself scanning the beach for holes left behind by beachgoers who didn’t know about the monster that lived in the sable but unstable sand. I wondered why I would voluntarily give my kids shovels and pails–the very tools of their demise. I was actually worried about the beach–the beach!–swallowing up my kids.

And that’s not all I’m worried about. After a summer of sand terror, and tracking mosquitoes with Triple-E and dead birds with West Nile Virus, I fretted to see a constant stream of headlines like these: Brain-eating amoeba kills 6 this year; Drugmakers recall infant cough/cold medicine; ConAgra shuts down pot pie plant because of salmonella concerns; Listeria precaution prompts recall of chicken & pasta dish.

Then came the Great Lead Recalls of 2007, when parents learned that everything from toys to tires was laden with the toxic heavy metal. Oh, and my toothpaste might have a chemical in it that’s usually found in antifreeze and brake fluid.

Also, MRSA, the so-called superbug that resists antibiotics, is “more deadly than AIDS,” and a new strain of adenovirus means that now the common cold can kill me. Also, my Christmas lights have lead in them. Finally, I found a Web page called “Tainted Food, Tainted Products” where I could track all of the products that were potentially deadly to me, everything from mushrooms containing illegal pesticides to lead-bearing charity bracelets. Charity bracelets!

It’s enough to make you want to hide from the world in your basement–provided of course you’ve tested it for excessive levels of radon.

photo of bucket of sand

IN MANY WAYS, 2007 was The Year of Disclosure.

When this idea first came to me, I wasn’t thinking about the sand. I was thinking about information security, as I was writing a reasonably disheartening story about serious malware threats while also researching dozens of the thousands of data breach disclosure letters that were issued this year, now that 38 states have disclosure laws.

But then, throughout the fall, I started to notice that risk disclosure was becoming one of those news phenomena that eventually earns its own graphic and theme music on cable news. It earned landing pages on Web sites with provocative names like “Tainted Food, Tainted Products.”

It feels like there’s more risk disclosure than ever before–an endless stream of letters about identity theft, disclaimers in drug commercials, warnings on product labels, recalls and, of course, news stories.

But it’s not just the volume of disclosure but also its changing nature that’s wearing me down. Disclosure is more pre-emptive than ever. We know about risks before they’re even significant. Many of the state data breach disclosure laws, for example, mandate notification at the mere possibility your private information has been compromised.

Even more bizarre and stressful, disclosure is becoming presumptive. The cough medicine recall, for example, involved a product that a consumer advocate said was safe when used as directed. (ConAgra’s pot pie shutdown also involved a product that company officials declared posed no health risk if cooked as directed.) The risk that forced cough syrup off the shelves was that if you give a child too much medicine, it could lead to an overdose, which seems reflexively obvious. Essentially the disclosures amounted to: Not following directions is dangerous.

Perhaps the most insidious change is with the rare but spectacular risks. The sensational tales of brain-eaters and sand killers. Such stories have always existed, of course, but something is different now, and that’s the Internet. Ubiquitous access combined with a bazaar of potential publishers means the freakiest event can be shared by millions of people. Anyone can read about it, blog about it, link to it, forward it in e-mail, and post it as a Flash video, but there’s no impetus for them to disclose the risk responsibly or reasonably. Their agenda may even call for them to twist the truth, make the risk seem more or less serious than it is.

Here’s the paradox that rises from all of this: As an individual and consumer, I like disclosure. I want every corporate and civic entity I place trust in to be accountable. I want journalists and scientists to unearth the risks I’m not being told about. At the same time, while any one disclosure of a threat may be tolerable, or even desirable, the cumulative effect of so much disclosure is, frankly, freaking me out.

So I started to wonder, at what point does information become too much information? Is more disclosure better, or is it just making us confused and anxious? Does it enable us to make better decisions, or does it paralyze us? What do the constant reminders of the ways we’re in danger do to our physical and mental health?

To answer these questions, I sought out two leading experts on risk perception and communication: Baruch Fischoff and Paul Slovic, both former presidents of the Society for Risk Analysis. I told them that I wanted to better understand risk perception and communication, the effect of ubiquitous access to risk information, and what we could do about this disclosure paradox.

But really I was hoping for some salve. Some way to stop worrying about sand holes at the beach.

“IT’S A REALLY DIFFICULT topic,” says Baruch Fischoff. “On the one hand you want disclosure, because it affirms that someone is watching out for these things and that the system is catching risks. But on the other hand, there’s so much to disclose that it’s easy to get the sense the world is out of control.”

Little research exists on the physical health effects of any risk disclosure, never mind the cumulative effects, although media saturation is being blamed for increased anxiety, stress and insomnia–gateways to obesity, high blood pressure, depression and other maladies. But the mental health effects of so much disclosure are reasonably well understood. Research suggests that it’s not only unproductive, but possibly counterproductive.

To understand how, I was sent to look up research from the late 1960s, when some psychologists put three dogs in harnesses and shocked them. Dog A was alone and was given a lever to escape the shocks. Dogs B and C were yoked together; Dog B had access to the lever, but Dog C did not. Both Dog A and Dog B learned to press the lever and escape the shocks. Dog C escaped with Dog B, but he didn’t really understand why. To Dog C the shocks were random, out of his control. Afterward, the dogs were shocked again, but this time they were alone and each was given the lever. Dog A and Dog B both escaped again, but Dog C did not. In fact, Dog C curled up on the floor and whimpered.

After that, the researchers tested the idea with positive reinforcement, using babies in cribs. Baby A was given a pillow that controlled a mobile above him. Baby B was given no such pillow. When both babies were subsequently placed in cribs with a pillow that controlled the mobile, Baby A happily triggered it; Baby B didn’t even try to learn how.

Psychologists call this behavior “learned helplessness”–convincing ourselves that we have no control over a situation even when we do. The experiments arose from research on depression, and the concept has also been applied with regard to torture. It also applies to risk perception. Think of the risks we learn about every day as little shocks. If we’re not given levers that reliably let us escape those shocks (in the form of putting the risk in perspective or giving people information or tools to offset the risk, or in the best case, a way to simply opt out of the risk), then we become Dog C. We learn, as Fischoff said, that the world is out of control. More specifically, it is out of our control. What’s more, sociologists believe that the learned helplessness concept transfers to social action. It not only explains how individuals react to risk, but also how groups do.

MY FAVORITE LEARNED HELPLESSNESS experiment is this one: People were asked to perform a task in the presence of a loud radio. For some, the radio included a volume knob, while for others no volume knob was available. Researchers discovered that the group that could control the volume performed the task measurably better, even if they didn’t turn the volume down. That is, just the idea that they controlled the volume made them less distracted, less helpless and, in turn, more productive.

Control is the thing, both Fischoff and Slovic say. It’s the countervailing force to all of this risk disclosure and the learned helplessness it fosters.

We have many ways of creating a sense of control. One is lying to ourselves. “We’re pretty good at explaining risks away,” says Slovic. “We throw up illusory barriers in our mind. For example, I live in Oregon. Suppose there’s a disease outbreak in British Columbia. That’s close to me, but I can tell myself, ’that’s not too close’ or ’that’s another country.’ We find ways to create control, even if it’s imagined.” And the more control–real and imagined–that we can manufacture, Slovic says, the more we downplay the chances a risk will affect us.

Conversely, when we can’t create a sense of control over a risk, we exaggerate the chances that it’ll get us. For example, in a column, Brookings scholar Gregg Easterbrook mentions that parents have been taking kids off of school buses and driving them to school instead. Part of this is because buses don’t have seat belts, which seems unsafe. Also, bus accidents provoke sensational, prurient interest; they make the news far more often than car accidents, making them seem more common than they are.

Yet, buses are actually the safest form of passenger transportation on the road. In fact, children are about one-eighth as likely to die on a bus as they are in a car, according to research by the National Highway Traffic Safety Administration (NHTSA). That means parents put their kids at more risk by driving them to school rather than letting them take the bus.

Faced with those statistics, why would parents still willingly choose to drive their kids to school? Because they’re stupid? Absolutely not. It’s because they’re human. They dread the idea of something out of their control, a bus accident. Meanwhile, they tend to think they themselves won’t get in a car accident; they’re driving.

photo of school bus

DREAD IS A POWERFUL force. The problem with dread is that it leads to terrible decision-making.

Slovic says all of this results from how our brains process risk, which is in two ways. The first is intuitive, emotional and experience based. Not only do we fear more what we can’t control, but we also fear more what we can imagine or what we experience. This seems to be an evolutionary survival mechanism. In the presence of uncertainty, fear is a valuable defense. Our brains react emotionally, generate anxiety and tell us, “Remember the news report that showed what happened when those other kids took the bus? Don’t put your kids on the bus.”

The second way we process risk is analytical: we use probability and statistics to override, or at least prioritize, our dread. That is, our brain plays devil’s advocate with its initial intuitive reaction, and tries to say, “I know it seems scary, but eight times as many people die in cars as they do on buses. In fact, only one person dies on a bus for every 500 million miles buses travel. Buses are safer than cars.”

Unfortunately for us, that’s often not the voice that wins. Intuitive risk processors can easily overwhelm analytical ones, especially in the presence of those etched-in images, sounds and experiences. Intuition is so strong, in fact, that if you presented someone who had experienced a bus accident with factual risk analysis about the relative safety of buses over cars, it’s highly possible that they’d still choose to drive their kids to school, because their brain washes them in those dreadful images and reminds them that they control a car but don’t control a bus. A car just feels safer. “We have to work real hard in the presence of images to get the analytical part of risk response to work in our brains,” says Slovic. “It’s not easy at all.”

And we’re making it harder by disclosing more risks than ever to more people than ever. Not only does all of this disclosure make us feel helpless, but it also gives us ever more of those images and experiences that trigger the intuitive response without analytical rigor to override the fear. Slovic points to several recent cases where reason has lost to fear: the sniper who terrorized Washington, D.C.; pathogenic threats like MRSA and brain-eating amoeba. Even the widely publicized drunk-driving death of a baseball player this year led to decisions that, from a risk perspective, were irrational.

THE BEST EXAMPLE of the intuitive brain fostering bad decision-making is terrorism, which produces the most existential nausea of all. On a group scale, it can be argued that decisions following 9/11 were poor, emotional and failed to address the risks at hand. Not only that, those decisions took necessary but limited resources away from other risks more likely to affect us than terrorism. Like hurricanes.

The effect is identical with individuals. Ask 100 people which is a bigger danger to them, getting five sunburns or getting attacked by terrorists, and many will cite the latter.

That’s intuitive. Terrorism is, well, terrifying. But it’s also exceedingly rare. In an excellent paper, University of Wisconsin Professor Emeritus Michael Rothschild deigns to conjure the awful to make an important point. He shows that if terrorists were able to hijack and destroy one plane per week, and you also took one trip by plane per month in that same time, your odds of being affected by those terrorist attacks would still be minuscule: one in 135,000.

Even if that implausible scenario played out, you would still be about 4.5 times more likely to die from skin cancer next year (one in 30,000) and 900 times more likely to get skin cancer if you’ve had five sunburns in your life (one in 150).
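Rothschild’s comparison comes down to ratios of “one in N” odds. For anyone who wants to check the arithmetic, a few lines of Python (using only the odds figures quoted above) confirm the multipliers:

```python
# Relative odds from Rothschild's hypothetical, as quoted in the text.
# Each figure is "1 in N" -- a larger N means a rarer event.
odds_terror = 135_000            # dying in the weekly-hijacking scenario
odds_cancer_death = 30_000       # dying from skin cancer next year
odds_cancer_five_burns = 150     # getting skin cancer after five sunburns

# How many times likelier is each outcome than the terror scenario?
print(odds_terror / odds_cancer_death)       # 4.5
print(odds_terror / odds_cancer_five_burns)  # 900.0
```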

But that doesn’t matter. For sunburns, we have all kinds of ways to exert control and make us feel less helpless: hats, SPF 50 lotions, perceived favorable genetic histories and self-delusion–“My sunburn isn’t as bad as that guy’s.” We have volume knobs for that radio.

For terrorism, we have no volume knobs.

ONE WAY TO CREATE them is to kindle that analytical part of our brains–to provide as much factual context about the risk we’re disclosing as possible. The more we can provide, the more we can override our dread.

Unfortunately, the lack of control over who discloses what and how they do it has actually fostered less context and fewer useful statistics, not more. So toys in my house have lead in them. What does that mean? Moreover, what are the chances someone in my house will be affected by that lead, and how?

Without context, we are anxious. Risks appear both more random and more likely than they really are. We lack control. We curl up and whimper.

Why don’t we include context with our risk disclosure? For some, it would run counter to their agenda for disclosing the risk. “Brain-eating amoeba kills 6” gets me to click. “Rare bacteria with 50 million to one chance of affecting you”–not so much.

Fischoff believes that another factor at play is what he calls the myth of innumeracy. “There’s an idea that people can’t handle numbers, but actually there’s no evidence to support that,” he says. “I believe there’s a market for better context with risks.”

Slovic is less certain. He says numeracy varies widely, both because of education and predisposition. “Some people look at fractions and proportions and simply don’t draw meaning from them,” he says. To that end, Slovic thinks we need to create images and experiences that help us emotionally understand the risk analysis, that make us feel just how rare certain risks are. Make the analytical seem intuitive.

It’s difficult to do this well. For example, Slovic remembers once when he was trying to explain chemical risks that occurred in the parts-per-billion range. People can’t relate to what a part-per-billion means, “so we said one part per billion is like one crouton in a thousand-ton salad,” he says. He thought people would get that. Unfortunately the analogy backfired. It made people exaggerate the threat, not downplay it. Why? “Because I can imagine a crouton,” says Slovic. “I can hold it in my hand, and eat it. I can’t really picture what a thousand-ton salad looks like. So I focus on the risk, the crouton, because I can’t compare it to a salad that I can’t even imagine.”

Another common mistake when putting risk in context is to use the wrong numbers, specifically focusing on multipliers instead of base rates, says Fischoff. For example, cases of the brain-eating amoeba killing people tripled in the past year. Yikes! That’s scary, and good for a news story. But the base rate of brain-eating amoeba cases, even after it tripled, is six deaths–one in 50 million people. That’s less scary and also less interesting from the prurient newsman’s perspective.
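Fischoff’s point is easy to see with numbers. A quick sketch, assuming the deaths tripled from two to six and using a round 300 million for the U.S. population (both figures are my assumptions, for illustration only):

```python
# Multiplier vs. base rate: a tripling sounds alarming,
# but the base rate stays vanishingly small.
deaths_last_year = 2             # implied by "tripled" to six; an assumption
deaths_this_year = 6
population = 300_000_000         # rough U.S. population, circa 2007

multiplier = deaths_this_year / deaths_last_year  # the scary headline number
base_rate = deaths_this_year / population         # the number that matters

print(multiplier)                     # 3.0
print(f"1 in {1 / base_rate:,.0f}")   # 1 in 50,000,000
```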

But even that “one in 50 million” characterization is problematic. It still causes people to exaggerate the risk in their minds, a phenomenon called “imaging the numerator.” In one experiment that showed the dramatic effect of imaging the numerator, Slovic notes, psychiatrists were given the responsibility of choosing whether or not to release a hypothetical patient with a violent history. Half the doctors were told the patient had a “20 percent chance” of being violent again. The other half were told the patient had a “one in five” chance of being violent again.

Startlingly, the doctors in the “one in five” group were far more likely not to release the patient. “They lined up five people in their minds and looked at one of them and saw a violent person.” They imaged the numerator. On the other hand, 20 percent is an abstract statistic that hardly seems capable of violence.

It sounds illogical, but our minds think that “one in five” is riskier than “20 percent.”

ESPECIALLY WITH PARENTS, one of those illusory controls we use to manage so much risk disclosure is to look at our own experiences, or lack of experiences. To wit, “I had lead toys as a kid and, well, I turned out okay.” In fact this is precisely what I told myself when I scanned the beach for killer sand holes. Look, I had spent parts of every summer of my life–maybe a year total–at the beach, and, well, I turned out okay. In fact, I had never witnessed, never even heard of the collapsing sand phenomenon.

But I knew after talking to Fischoff and Slovic that this was mostly self-delusion, and not nearly enough for the analytical part of my brain to override the intuitive part. I’d need more to stop worrying about the beach eating my kids. So I decided to try to put math up against the myth. To come up with some rigorous analysis, and relatable images that would, finally, scotch the dread.

The sand story quotes a Harvard doctor who witnessed a collapsing sand event and who has been making awareness of it his personal crusade ever since (the intuitive response to risk, like parents pulling their kids off the bus). The doctor even published a letter in the New England Journal of Medicine to raise awareness of “the safety risk associated with leisure activities in open-sand environments.” In the letter, he documents 31 sand suffocation deaths and another 21 incidents in which victims lived, occurring mostly over the past 10 years. He believes these numbers are low because he based them only on news reports that he could find.

Using rough math, let’s assume he’s right that there were more cases, and double his figures, guesstimating 60 deaths and another 40 cases where victims lived over the past decade. That’s 100 cases total, or about ten per year: six deaths and four incidents where the victim lived. Of course, not everyone on the planet goes to the beach, so let’s say that the beachgoer population comprised only one percent of the world population in the last year, about 60 million people. (In fact it’s probably higher than this, but assuming a smaller number of people at the beach to begin with actually makes the risk per person higher.) With this conservative estimate of beachgoers, there would be six deaths by sand per every 60 million beachgoers. One in 10 million beachgoers is likely to die in a sand hole.
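The back-of-envelope math is easy to reproduce. A minimal sketch, using the doubled case counts and the one-percent beachgoer assumption from above:

```python
# Rough sand-hole fatality estimate, following the assumptions in the text.
deaths_per_decade = 60                 # documented 31 deaths, doubled and rounded
deaths_per_year = deaths_per_decade / 10

world_population = 6_000_000_000       # circa 2007
beachgoers = 0.01 * world_population   # assume 1% of the world hits the beach

risk = deaths_per_year / beachgoers
print(f"1 in {1 / risk:,.0f} beachgoers")  # 1 in 10,000,000 beachgoers
```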

That seems unlikely, but how unlikely? I’m still imaging the numerator. I’m seeing that one kid die, that teenager playing football who disappeared before his friends’ eyes. I’m thinking of his mother, grieving, and saying desperately in the story, “People have no idea how dangerous this is.”

I need something better, some kind of image. I found this oddly pretty graphic at the website for the National Safety Council (which hasn’t credited the designer):

Graphic from National Safety Council

Here, the size of each circle represents the relative likelihood of dying by the listed cause. One of every five deaths is from heart disease, so that circle is 20 percent of the overall circle (represented by the red arc), which represents chances of death by any cause, one in one. The stroke circle is about five times smaller than the heart disease one, as the risk is about one-fifth as common and just four percent of all deaths. The circles continue to get smaller down through rarer and rarer causes of death. The smallest circle on this particular map is death from fireworks discharge, a fate suffered by one of every 350,000 people–just a few pixels on the screen.

On the above graphic, the circle for suffocation by sand would be almost 30 times smaller than the fireworks circle. Invisible, which is fitting since experts call such risks “vanishingly small.” What’s more, it would be nearly 10,000 times smaller than the circle for “drowning.” In other words, when I’m at the beach, I should be thousands of times more worried about the ocean than the sand–and even then, not too worried about either.

But I’m still not quite convinced. The problem with vanishingly small risks is they’re just that: hard to see, even in your mind. Once the circle disappears, it’s hard to understand how much smaller it’s getting.

So I need an image with which I can relate to all of the proportions. How about an Olympic-size swimming pool? If causes of death were represented by the 660,000 gallons of water in that pool, then heart attacks would take up 132,000 gallons, enough to fill about 15 tanker trucks. Death by fireworks discharge would take up roughly two milk jugs of the pool’s water.

And of all that water in the swimming pool, dying in a sand hole would take up about 17 tablespoons, less than nine ounces.
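Those pool figures fall straight out of the odds. A sketch of the conversion (660,000 gallons for an Olympic-size pool is the figure used above; the ounce and tablespoon factors are standard U.S. units):

```python
# Converting "1 in N" causes of death into shares of an Olympic pool's water.
POOL_GALLONS = 660_000
OZ_PER_GALLON = 128      # U.S. fluid ounces per gallon
TBSP_PER_GALLON = 256    # U.S. tablespoons per gallon

def pool_share(one_in_n):
    """Gallons of pool water representing a 1-in-N cause of death."""
    return POOL_GALLONS / one_in_n

print(pool_share(5))                   # heart disease: 132000.0 gallons
print(pool_share(350_000))             # fireworks: ~1.9 gallons, two milk jugs

sand = pool_share(10_000_000)          # the 1-in-10-million sand-hole estimate
print(sand * TBSP_PER_GALLON)          # ~16.9 tablespoons
print(sand * OZ_PER_GALLON)            # ~8.4 ounces -- less than nine
```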

photo of kid on beach

I CAN’T CONTROL the rapid rise in risk disclosure, or the ubiquitous access to all of it. It’s probably not going to stop, and it’s unlikely that, in general, it will become more analytical. But now I know I can offset the cumulative effect it has on me, at least a little bit. I have that salve now. When dread creeps in, I can calm myself down. I can go to the beach without scanning for holes. I can let my kids play with their pails and shovels knowing they’ll be okay. Knowing that drowning in sand is, almost literally, nothing to worry about.