CSO Disclosure Series | Reporter's Notebook: The United States of TMI

Lead paint in toys. Brain-eating amoeba. Identity theft. Drowning in sand. We know more than ever about the risks all around us. Do we know what disclosing them all is doing to us?

I’D LIKE TO SAY that the writing that had the most profound effect on me this year was some classic novel I picked up in my spare time, but in fact it was an Associated Press article. Last June, AP Medical Writer Mike Stobbe wrote a fascinating, harrowing story about large holes dug in beach sand that can collapse "horrifyingly fast" and cause a person in the hole to drown. Stobbe described one case in which a teenager ran back to catch a football, fell into a hole and disappeared under a cascade of sand. When his friends approached to help, more sand caved in over him. He was buried for at least fifteen minutes and eventually suffocated. Stobbe discloses in the article that, while they’re virtually unheard of, deaths from collapsing sand holes are actually more common than those from "splashier threats" like shark attacks.

Unfortunately, I read the story right before going on vacation with my family, to the beach. Sometimes, the story trespassed on my mind. I found myself scanning the beach for holes left behind by beachgoers who didn’t know about the monster that lived in the sable but unstable sand. I wondered why I would voluntarily give my kids shovels and pails--the very tools of their demise. I was actually worried about the beach--the beach!--swallowing up my kids.

And that’s not all I’m worried about. After a summer of sand terror, and tracking mosquitoes with Triple-E and dead birds with West Nile Virus, I fretted to see a constant stream of headlines like: Brain-eating amoeba kills 6 this year; Drugmakers recall infant cough/cold medicine; ConAgra shuts down pot pie plant because of salmonella concerns; Listeria precaution prompts recall of chicken & pasta dish.

Then came the Great Lead Recalls of 2007, when parents learned that everything from toys to tires is laden with the toxic heavy metal. Oh, and my toothpaste might have a chemical in it that’s usually found in antifreeze and brake fluid.

Also, MRSA, the so-called superbug that resists antibiotics, is "more deadly than AIDS" and a new strain of adenovirus means that now the common cold can kill me. Also, my Christmas lights have lead in them. Finally, I found Boston.com’s page called "Tainted Food, Tainted Products" where I could track all of the products that were potentially deadly to me, including everything from mushrooms containing illegal pesticides to lead-bearing charity bracelets. Charity bracelets!

It’s enough to make you want to hide from the world in your basement--provided of course you’ve tested it for excessive levels of radon.

photo of bucket of sand

IN MANY WAYS, 2007 was The Year of Disclosure.

When this idea first came to me, I wasn’t thinking about the sand. I was thinking about information security. I was writing a reasonably disheartening story about serious malware threats while researching dozens of the thousands of data breach disclosure letters issued this year, now that 38 states have disclosure laws.

But then, throughout the fall, I started to notice that risk disclosure was becoming one of those news phenomena that eventually earns its own graphic and theme music on cable news. It earned landing pages on Web sites with provocative names like “Tainted Food, Tainted Products.”

It feels like there’s more risk disclosure than ever before--an endless stream of letters about identity theft, disclaimers in drug commercials, warnings on product labels, recalls and, of course, news stories.

But it’s not just the volume of disclosure but also its changing nature that’s wearing me down. Disclosure is more pre-emptive than ever. We know about risks before they’re even significant. Many of the state data breach disclosure laws, for example, mandate notification at the mere possibility that your private information has been compromised.

Even more bizarre and stressful, disclosure is becoming presumptive. The cough medicine recall, for example, involved a product that a consumer advocate said was safe when used as directed. (ConAgra’s pot pie shutdown also involved a product that company officials declared posed no health risk if cooked as directed.) The risk that forced cough syrup off the shelves was that giving a child too much medicine could lead to an overdose, which seems obvious on its face. Essentially, the disclosures amounted to: Not following directions is dangerous.

Perhaps the most insidious change is with the rare but spectacular risks. The sensational tales of brain-eaters and sand killers. Such stories have always existed, of course, but something is different now, and that’s the Internet. Ubiquitous access combined with a bazaar of potential publishers means the freakiest event can be shared by millions of people. Anyone can read about it, blog about it, link to it, forward it in e-mail, and post it as a Flash video, but there’s no impetus for them to disclose the risk responsibly or reasonably. Their agenda may even call for them to twist the truth, to make the risk seem more or less serious than it is.

Here’s the paradox that arises from all of this: As an individual and consumer, I like disclosure. I want every corporate and civic entity I place trust in to be accountable. I want journalists and scientists to unearth the risks I’m not being told about. At the same time, while any one disclosure of a threat may be tolerable, or even desirable, the cumulative effect of so much disclosure is, frankly, freaking me out.

So I started to wonder, at what point does information become too much information? Is more disclosure better, or is it just making us confused and anxious? Does it enable us to make better decisions, or does it paralyze us? What do the constant reminders of the ways we’re in danger do to our physical and mental health?

To answer these questions, I sought out two leading experts on risk perception and communication: Baruch Fischhoff and Paul Slovic, both former presidents of the Society for Risk Analysis. I told them that I wanted to better understand risk perception and communication, the effect of ubiquitous access to risk information, and what we could do about this disclosure paradox.

But really I was hoping for some salve. Some way to stop worrying about sand holes at the beach.

"IT’S A REALLY DIFFICULT topic," says Baruch Fischhoff. "On the one hand you want disclosure, because it affirms that someone is watching out for these things and that the system is catching risks. But on the other hand, there’s so much to disclose that it’s easy to get the sense the world is out of control."

Little research exists on the physical health effects of any risk disclosure, never mind the cumulative effects, although media saturation is being blamed for increased anxiety, stress and insomnia--gateways to obesity, high blood pressure, depression and other maladies. But the mental health effects of so much disclosure are reasonably well understood. Research suggests that all this disclosure is not only unproductive, but possibly counterproductive.

To understand how, I was sent to look up research from the late 1960s, when some psychologists put three dogs in harnesses and shocked them. Dog A was alone and was given a lever to escape the shocks. Dogs B and C were yoked together; Dog B had access to the lever, but Dog C did not. Both Dog A and Dog B learned to press the lever and escape the shocks. Dog C escaped with Dog B, but he didn’t really understand why. To Dog C the shocks were random, out of his control. Afterward, the dogs were shocked again, but this time they were alone and each was given the lever. Dog A and Dog B both escaped again, but Dog C did not. In fact, Dog C curled up on the floor and whimpered.

After that, the researchers tested the idea with positive reinforcement, using babies in cribs. Baby A was given a pillow that controlled a mobile above him. Baby B was given no such pillow. When both babies were subsequently placed in cribs with a pillow that controlled the mobile, Baby A happily triggered it; Baby B didn’t even try to learn how.

Psychologists call this behavior "learned helplessness"--convincing ourselves that we have no control over a situation even when we do. The experiments arose from research on depression, and the concept has also been applied with regard to torture. It also applies to risk perception. Think of the risks we learn about every day as little shocks. If we’re not given levers that reliably let us escape those shocks (in the form of putting the risk in perspective, giving people information or tools to offset the risk, or, in the best case, a way to simply opt out of the risk), then we become Dog C. We learn, as Fischhoff said, that the world is out of control. More specifically, it is out of our control. What’s more, sociologists believe that the learned helplessness concept transfers to social action. It explains not only how individuals react to risk, but also how groups do.

MY FAVORITE LEARNED HELPLESSNESS experiment is this one: People were asked to perform a task in the presence of a loud radio. For some, the radio included a volume knob, while for others no volume knob was available. Researchers discovered that the group that could control the volume performed the task measurably better, even if they didn’t turn the volume down. That is, just the idea that they controlled the volume made them less distracted, less helpless and, in turn, more productive.

Control is the thing, both Fischhoff and Slovic say. It’s the countervailing force to all of this risk disclosure and the learned helplessness it fosters.

We have many ways of creating a sense of control. One is lying to ourselves. "We’re pretty good at explaining risks away," says Slovic. "We throw up illusory barriers in our mind. For example, I live in Oregon. Suppose there’s a disease outbreak in British Columbia. That’s close to me, but I can tell myself, 'that’s not too close' or 'that’s another country.' We find ways to create control, even if it’s imagined." And the more control--real and imagined--that we can manufacture, Slovic says, the more we downplay the chances a risk will affect us.

Conversely, when we can’t create a sense of control over a risk, we exaggerate the chances that it’ll get us. For example, in a recent column, Brookings scholar Gregg Easterbrook mentions that parents have been taking kids off school buses and driving them to school instead. Partly that’s because buses don’t have seat belts, which seems unsafe. But bus accidents also provoke sensational, morbid interest; they make the news far more often than car accidents, making them seem more common than they are.

Yet buses are actually the safest form of passenger transportation on the road. In fact, children are eight times less likely to die on a bus than in a car, according to research by the National Highway Traffic Safety Administration (NHTSA). That means parents put their kids at more risk by driving them to school than by letting them take the bus.

Faced with those statistics, why would parents still willingly choose to drive their kids to school? Because they’re stupid? Absolutely not. It’s because they’re human. They dread the idea of something out of their control, a bus accident. Meanwhile, they tend to think they themselves won’t get in a car accident; they’re driving.

photo of school bus

DREAD IS A POWERFUL force. The problem with dread is that it leads to terrible decision-making.

Slovic says all of this results from how our brains process risk, which is in two ways. The first is intuitive, emotional and experience-based. Not only do we fear more what we can’t control, but we also fear more what we can imagine or what we experience. This seems to be an evolutionary survival mechanism. In the presence of uncertainty, fear is a valuable defense. Our brains react emotionally, generate anxiety and tell us, “Remember the news report that showed what happened when those other kids took the bus? Don’t put your kids on the bus.”

The second way we process risk is analytical: we use probability and statistics to override, or at least prioritize, our dread. That is, our brain plays devil’s advocate with its initial intuitive reaction, and tries to say, “I know it seems scary, but eight times as many people die in cars as they do on buses. In fact, only one person dies on a bus for every 500 million miles buses travel. Buses are safer than cars.”

Unfortunately for us, that’s often not the voice that wins. Intuitive risk processing can easily overwhelm analytical processing, especially in the presence of those etched-in images, sounds and experiences. Intuition is so strong, in fact, that if you presented someone who had experienced a bus accident with a factual risk analysis of the relative safety of buses over cars, it’s highly possible they’d still choose to drive their kids to school, because their brain washes them in those dreadful images and reminds them that they control a car but don’t control a bus. A car just feels safer. “We have to work real hard in the presence of images to get the analytical part of risk response to work in our brains,” says Slovic. “It’s not easy at all.”
