The great IT risk measurement debate, part 2

IT risk—can it be measured, modeled, mitigated? Part two of Alex Hutton and Douglas Hubbard's discussion covers likelihood statements, the placebo effect on risk perception, and much more.


I call it the fallacy of close analogy: thinking that you need truly identical situations to compare against, and since each situation is unique, we feel we can't learn anything by looking at historical data. What I tell people is, no, let's do the math and see whether or not, just using the data that you have, the math outperforms the intuition of individuals.

If there's a 10 percent chance per year of some event, we don't have to wait around for 10 years. We look at all the times that this person said something was 10 percent likely—they might have made 200 different predictions where they said something was 10 percent likely. Out of those 200, about 20 of them, plus or minus some statistically allowable error, should have come to fruition. Each single data point is not the size of data that we're limited to. We look at all of that individual's predictions. We're asking the question, "Is that person calibrated?" We're measuring that person's skill at applying subjective probability assessments. That's what we're really measuring.
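Hubbard's calibration check above can be sketched in a few lines. This is a minimal illustration, not his actual method: it assumes a simple normal approximation to the binomial to define the "statistically allowable error" around the expected count, and the function name and interface are invented for the example.

```python
import math
import random

def calibration_check(outcomes, stated_p, z=1.96):
    """Check whether events a forecaster rated `stated_p` likely
    actually occurred at roughly that rate.

    Uses a normal approximation to Binomial(n, p) for the
    "plus or minus some statistically allowable error" band.
    """
    n = len(outcomes)
    hits = sum(outcomes)            # how many predictions came to fruition
    expected = n * stated_p         # e.g. 200 * 0.10 = 20
    margin = z * math.sqrt(n * stated_p * (1 - stated_p))
    return abs(hits - expected) <= margin, hits, expected, margin

# Simulate a well-calibrated forecaster: 200 events, each truly 10% likely
random.seed(0)
outcomes = [random.random() < 0.10 for _ in range(200)]
ok, hits, expected, margin = calibration_check(outcomes, 0.10)
print(f"{hits} of 200 occurred; expected {expected:.0f} +/- {margin:.1f}; calibrated: {ok}")
```

On 200 predictions rated 10 percent likely, the band works out to roughly 20 plus or minus 8, so a forecaster whose "10 percent" events occur, say, 60 times out of 200 would fail the check.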

Hutton: Not to be self-serving, but I did want to circle back to two concepts that you mentioned in solving that. The first was the size of the data—are you unique, and so forth—and one of the problems in our industry is data sharing. And then you mentioned breaking the larger system down into components and using evidence based on the components to suggest a more accurate outcome [about the] totality. This is one of the great things that I found when I joined Verizon a couple of years ago—this was their direction, the culmination of which has been our best community effort and the data-breach report. Dr. Tippett and Wade Baker [director of risk intelligence at Verizon Business] and my group were trying to foster data sharing and give people comparative analytics while respecting privacy—in some cases, even anonymity, although that adds a lot of uncertainty to the data. The outcomes are very component-based. It's not very different from FAIR (Factor Analysis of Information Risk model), not very different from ISO 27005. Anyway, just a little plug for what we're trying to do, and that's in the New School blog.

Also see IT risk assessment frameworks: Real-world experience

Other folks, like Trustwave, have databases that are a great source of information. [Editor's note: See CSO's security data and survey directory.] A few things do exist, but again, it is up to our modelers to understand and create context. Unfortunately, these people are being told, "Go multiply these ordinal scales together, do that 40,000 times, and you've done your enterprise risk assessment for the year. Thanks very much—give us our $750,000 [fee]."

Hubbard: Plus the other thing is, I think modeling in the IT sense means something different than modeling in the empirical sciences sense, and I think that what you want is to use the term "modeling" as in the empirical sciences.

Copyright © 2011 IDG Communications, Inc.
