Calculating the cost of an information security incident

A veteran CIO of a New York City-based financial services company learned in July 2002 that several vital files had vanished from one of his company's 25 servers. An employee had tried to find some information and failed. That's when IS discovered that there was, in fact, no company information on that particular server at all. Panicked, the CIO and his staff went into emergency mode. They soon discovered that a hacker had found his way through their firewall and wiped out all the production files on the server, leaving chaos and a couple of strangely labeled files in his wake. Two frantic days of recovery work followed.
All told, the CIO (who spoke on condition that his name not be used) reported that the breach cost the company $50,000. But when asked how he came up with that number, he said he honestly couldn't say, because he really wasn't sure.
"We didn't do a line-by-line breakdown of the costs because it didn't seem necessary at the time," he admits. "But consultant costs, loss of production time and overtime for the IT staff were part of it."
Even if CISOs can quantify the cost of a breach, few executives will talk on record about it. Companies have an incentive to downplay, or downright hide, such information. "It's embarrassing to admit that a hacker got through your firewall," says Tina LaCroix, CISO of Aon, an insurance provider. "Most companies won't give out the real information [about breaches]. They don't want you to know they have vulnerabilities because they make the CSO look bad."
"No one wants to be the company on the front page of The New York Times," says Thomas Varney, a director of technology assurance and security, who spoke on the condition that his Fortune 100 company not be named. But ignoring vulnerabilities won't make them go away. Every day (or so it seems), another consultancy reports dire new statistics on the cost of security failures. According to the 2002 Computer Crime and Security Survey from the Computer Security Institute and the FBI, 80 percent of the 503 security practitioners surveyed acknowledged financial losses due to cybersecurity incidents, but only 44 percent were willing (or able) to quantify losses.
While circling the wagons is understandable, it's also counterproductive for the industry as a whole. "The bottom line is that CSOs are doing a pitiful job of tracking breach costs," says Michael Erbschloe, associate senior research analyst at Computer Economics, an IT investment consultancy. "They don't want to go public with the costs or even talk about it internally. The rationale is that, if CSOs don't know the numbers, no one else will either, which cuts down on the likelihood that their company's reputation or stock price will take a hit." But he cautions, "CSOs need to wake up. Start sharing data, or we'll all be more vulnerable than we'd like."
"Every breach is different, and costs will vary from incident to incident. That's why it's incumbent upon the CSO to have an incident-response plan in place prior to a breach."
Creating a methodology for quantifying as many costs associated with a breach as possible is essential. Start by determining the value of your information and assets so that you can more easily find out what you lost. Break the incident down into every conceivable category because, inevitably, it has all been affected.
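As an illustration only, the category-by-category breakdown described above can be sketched as a simple itemized tally. The categories and dollar figures below are hypothetical, not drawn from any incident in this article:

```python
# Hypothetical breach-cost tally, illustrating a line-by-line
# breakdown across common incident cost categories.

def breach_cost(costs: dict[str, float]) -> float:
    """Sum itemized incident costs across all categories."""
    return sum(costs.values())

# Example (invented) figures for one incident.
incident = {
    "consultant_fees": 12000.00,      # outside forensics help
    "it_staff_overtime": 6500.00,     # recovery work after hours
    "lost_production_time": 9000.00,  # employees idled by downtime
    "hardware_replacement": 2500.00,  # rebuilt or replaced systems
}

total = breach_cost(incident)
print(f"Total incident cost: ${total:,.2f}")  # Total incident cost: $30,000.00
```

The point of itemizing rather than guessing a lump sum is that each line can be defended individually, which matters later if the figure has to hold up in court.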
Hard Costs
That's why cyberinsurance is a tough area, says Rich Mogull, research director at GartnerG2 Cross-Industry Research. Companies lack the solid actuarial formulas that enable them to figure out risks over time, so they underprotect.
"The ISP wanted to know why we were making so many SQL calls, so I got suspicious," Woerner recalls. "I asked him to block all our SQL calls to the Internet, since it's not a critical method of connection for us. Then I contacted our administrator for that particular system and confirmed that we were infected. At that point, I alerted our incident-response team, but I only put them on alert. The situation seemed under control, and we didn't want to go overboard with our response. I updated our virus scanner on the infected system, found four files associated with the worm and removed them. We rebooted the server, did a sweep so everything was clean, and made sure our switch was configured to block the SQL port from our box to the Internet to prevent reinfection."
The whole incident took two hours to handle. Since it was a relatively minor attack and Woerner had a detailed incident-response plan in place, he was able to track the breach cost easily. The worm had infected an internal server, and during the downtime necessary to contain the infection, 15 employees were unable to work on their computers. "Average pay for those workers was $25 an hour; they were out for two hours, so I figure it cost about $750," he says.
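Woerner's figure follows directly from a head count, an hourly rate, and the length of the outage. A minimal sketch of that arithmetic:

```python
# Downtime cost as Woerner computed it:
# idle workers x average hourly pay x hours of downtime.

def downtime_cost(workers: int, hourly_rate: float, hours_down: float) -> float:
    """Lost-productivity cost of an outage."""
    return workers * hourly_rate * hours_down

cost = downtime_cost(workers=15, hourly_rate=25.0, hours_down=2.0)
print(f"${cost:,.2f}")  # $750.00
```

Simple as it is, this is the kind of number that survives scrutiny, because every input can be verified against payroll and the incident log.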
The incident's relatively small size doesn't diminish its importance as an example of why adding up the numbers can pay off in the end. Woerner took the $750 figure to his CIO and used it to demonstrate the need for a security budget and the necessity of taking preventive, rather than defensive, action. If the password on the SQL application had been changed from the default, or if the SQL port had been blocked, he points out, prevention would have taken only 10 minutes, instead of the 30 hours of work time the incident took away from employees.
Because no data or system was seriously corrupted, Woerner had to consider only system and worker downtime, two of the most basic considerations when attempting to quantify the cost of a breach. But it can quickly get more complicated (see "Criteria for Determining the Cost of a Breach," this page).
Woerner says he could have padded the breach's cost to underline his argument to the CIO. "But if you inflate the cost, it will come back to bite you," he says.

Legal Eagles

The industry's lack of a consistent model for calculating security losses often results in inaccurate loss estimates, "numbers that never would hold up in a court of law," says Varney, who spent years doing computer forensics with the Department of Defense and the Secret Service. "A company calls up and says, 'We've just been hacked. We've lost $1 million.' They pull a number out of the air," he says. "I ask how they got that number, and it turns out they're just guessing."
Varney says many CSOs don't realize loss estimates are not enough to prosecute security offenders. "If the amount varies from what the prosecution presents, the defense will poke holes all over your case," he says.
Law enforcement has minimum monetary damage requirements for prosecuting a security case. The amount depends on the jurisdiction, Varney says, but it can range from $500 to $500,000. The numbers must be carefully catalogued, and prosecutors must be able to prove them. Otherwise, a lawsuit might not go the way you think it should.
Case in point: In September 2001, a jury found Herbert Pierre-Louis guilty under the Computer Fraud and Abuse Act for launching a virus attack on four offices of Purity Wholesale Grocers in 1998. According to Purity, the virus shut down operations for a week and caused at least $75,000 in damage, well over the $5,000 minimum. But in April, a federal judge threw out the conviction, ruling that the virus didn't cause enough damage to rate as a federal crime. The breach occurred before the Act was amended in 2001 to cover lost revenue from suspended operations and repair costs from interrupted service, and thus the damages as defined by the law did not total $5,000. Pierre-Louis's conviction was nullified.
Trying to nail a hacker is just the beginning. The concept of downstream liability is also a concern, says Aon's LaCroix. These days, viruses jump from company to company. If a company is deemed negligent in deploying adequate security, there's a potential for third-party lawsuits from others affected afterward. "You are no longer responsible for just your own security," LaCroix says.
Ask Ziff Davis Media. Deficient security and privacy protections cost the publishing company at least $125,000 in August 2002 when an online subscription promotion exposed subscriber information, including credit card data, to public view. Several subscribers subsequently became the victims of identity theft. In a settlement with the New York state attorney general, Ziff Davis agreed to pay a total of $100,000 to three state governments, as well as $25,000 in compensation to 50 customers whose credit card data was bared during the incident. If all 12,000 subscribers whose information was revealed had provided credit card data to the company, the settlement could have reached $18 million, according to John Pescatore, an analyst with Gartner Research.
Until someone comes up with a way to prevent breaches from happening at all, knowing exactly what they cost may be the next best thing.
"We learned one lesson really well," says the anonymous CIO of the New York financial services firm. "Understanding what you're spending on security cannot be overrated."