The great IT risk measurement debate, part 1

Feature
Feb 28, 2011 | 13 mins
Data and Information Security | ROI and Metrics

IT risk—can it be measured, modeled, mitigated? How much data do we need? Experts Alex Hutton and Douglas Hubbard debate the finer points and reach some surprising and practical conclusions.

Risk evaluation models in IT are broken, but we can do more with available data than you might think by correcting for known errors in risk perception. Those are a few of the conclusions Alex Hutton and Doug Hubbard came to in their dissection of risk management. CSO Senior Editor Bill Brenner sat in on the conversation. Here are some highlights.

Update: Also see Part 2 of the discussion (posted 3/2/2011).

The players:

Alex Hutton is research and intelligence principal at Verizon Business and was previously CEO of Risk Management Insight.

Doug Hubbard is the author of The Failure of Risk Management and of How to Measure Anything, and is the CEO of Hubbard Decision Research.

Doug Hubbard: Infosec is a very interesting subset of risk assessment and risk management in general. It falls into a category of disciplines that developed their risk management practices in isolation from what we now know from experimental psychology about how people perceive risk.

There are a lot of subjective estimates of risk in infosec, and there are now decades of studies about the goofy, quirky things people do that affect our risk perceptions and our risk aversion.

For example, being around smiling people actually makes you more risk tolerant.

“Random, irrelevant events have much bigger impacts on our risk assessments and risk management than we probably like to believe”

Doug Hubbard

Alex Hutton: I would believe that.

Hubbard: Recent bouts of anger or fear change your risk aversion. So does your testosterone level, which changes daily, and maybe whether or not you’ve had your coffee, or how frustrating your commute was—that changes your perception of risk. So all of these random, irrelevant events have much bigger impacts on our risk assessments and risk management than we probably like to believe.

Hutton: [Industry luminary] Dan Geer has said something to the effect that this is one of the most interesting fields to be in in our lifetime. Forgive me if I’m misquoting, but I agree, because I see the culmination of risk management done properly as the application of science to the problem. Information risk management really is relatively unique within security, because the technology changes more rapidly than it does in physical security. And because the threats tend to be adaptive, we don’t yet have data sets like you do with car accidents, where year over year, despite changes in technology and so forth, there’s a fairly standard number you can expect. All of the science and all of the research have yet to be done.

Hubbard: I get to work in so many different industries on completely different kinds of problems. Right now I’m working on forecasting business opportunities for new pharmaceutical products. Last year I was doing business models for movie industry investments, and in the meantime, I worked on uranium mining, and fairly soon here we’ll do risk-return analysis for large airport development projects. All very difficult-to-measure sorts of things. But one of the most common things I hear from all my industry clients is that they’re unique among all industries, and I say, “Yes, you are unique and so is everybody else. In fact, you’re almost uniformly unique. I can actually name several industries that have these characteristics that you’re associating with this.”

If you talk about changing technology, one of the case studies I wrote about in my second book was space missions at NASA. In each case, they do these subjective analyses of the risk of new space missions and space projects, and they will insist in each case that since each mission is so unique and there’s new technologies, there’s no way you could apply historical data. But the historical models keep beating the human experts at forecasting cost overrun, schedule overrun and even risk of mission failure. Likewise, when it comes to movie projects, people say you can’t apply history there because each movie is so unique. Yes, that’s true, but you’ve been developing unique movies for many decades, so we have lots of data on that.

“You have more data than you think, and you need less data than you think, especially in IT.”

Doug Hubbard

I’m always telling people, “You have more data than you think, and you need less data than you think, especially in IT.” I think people are so used to having so much information in database form that can be queried that we think information that is not in that form isn’t measurable because it’s not data we’ve been collecting. So I’m always telling my IT clients that science was never about having data. It was always about getting data. That’s why we do controlled experiments and random samples.

Hutton: Getting it or modeling it, I will agree with you that the data exists. I think the data exists for a lot of IT problems if we break them down into discrete parts of a larger system. The problem with that is for most security professionals the system is extremely large.

I’m not saying we’re the only people with complex adaptive systems. But I will say that if there is a distribution of industries and problems, we’re probably one or two standard deviations away from the mean in terms of complexity.

Hubbard: I suppose another example I could give is this: I deal with a lot of biostatisticians who do analysis of Phase II and Phase III drug trials. Now, if I ask them, “What is your model for drug-human-body interaction?”—and I use “model” in the same sense I think you’re going for—they would say, “There isn’t one. We don’t really have a model.”

As I dove into it further (every industry I’m in is almost a brand-new industry to me), I found that they do something called pharmacokinetic modeling. That’s about the most advanced model there is, and it really amounts to treating the human body as if it’s a bag of solution: you dump some stuff in it and it diffuses through it. It gets a little bit more advanced than that, because it does have some arteries and so forth in it. They recognize that it’s a rudimentary, incomplete model, but the question is not whether the model is perfect or close to perfect; the question is whether it’s better than unaided human intuition.
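The “bag of solution” picture Hubbard describes corresponds to a one-compartment pharmacokinetic model, which fits in a few lines of code. The sketch below is illustrative only; the dose, volume-of-distribution and elimination-rate figures are invented for the example, not taken from any real drug.

```python
import math

# A minimal sketch of a one-compartment pharmacokinetic model with a single
# IV dose. All numbers below are hypothetical, chosen only to illustrate the
# shape of such a model.

dose_mg = 500.0          # hypothetical dose
volume_l = 42.0          # hypothetical volume of distribution (the "bag")
k_elim_per_h = 0.173     # hypothetical first-order elimination rate

def concentration(t_hours: float) -> float:
    """Drug concentration (mg/L) t hours after the dose, assuming the body
    is a single well-mixed compartment that clears the drug exponentially."""
    return (dose_mg / volume_l) * math.exp(-k_elim_per_h * t_hours)

for t in (0, 2, 4, 8, 12):
    print(f"t = {t:>2} h  ->  {concentration(t):.2f} mg/L")
```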

One trap that a lot of people in all areas of management fall into—but I see this especially in infosec—is that they feel stumped by how to model something or measure something, and as a result they reject modeling and measurement and go back to sort of unaided intuition. But we can show otherwise in so many different fields; about 150 studies have been gathered showing how simple historical models outperform human expertise in a variety of tasks.

For example, [if] you want to talk about unique situations, how about predicting the suicide risk of psychotherapy patients or predicting which medical-school applicants will outperform others? The historical models beat the humans.

“In infosec, our models are worse than rolling dice. Most of them involve multiplication of ordinal scales and things that break the fundamental laws of the universe.”

Alex Hutton

And, [using] my historical models for forecasting how much money movies were going to make, it wasn’t too hard to outperform the unaided human experts because when I correlated their original, unaided estimates against known actuals, the correlation was zero! So they could talk all they wanted about the difficulty of modeling and gathering the data. The problem is their current model was no better than you and I rolling dice.
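The check Hubbard describes, correlating unaided expert estimates against known actuals, takes only a few lines once the numbers are in hand. The figures below are invented purely to show the mechanics (they are not Hubbard’s data); with these values the correlation comes out close to zero, meaning the estimates carry essentially no predictive signal.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical example: expert forecasts of box-office revenue ($M) for six
# films versus what the films actually earned. Values invented for
# illustration only.
expert_estimates = [120, 45, 200, 80, 150, 60]
known_actuals    = [180, 60, 110, 35, 50, 210]

r = correlation(expert_estimates, known_actuals)
print(f"Pearson correlation between estimates and actuals: {r:.2f}")
# A value near 0 means the unaided estimates are no better than noise.
```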

Hutton: Ours are worse. In infosec, they’re worse than rolling dice. I think performance-wise, that’s been my experience—not that we have a lot of performance data on risk models to falsify that assertion. Most of them involve multiplication of ordinal scales and things that break the fundamental laws of the universe.
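To make the point about ordinal arithmetic concrete: scores on a 1-to-5 likelihood or impact scale are only rankings, so multiplying them yields numbers whose ordering need not match the underlying quantities. The sketch below, using invented frequencies, losses and bucket cutoffs, shows two risks whose “likelihood times impact” scores reverse the ranking you would get from expected annual loss.

```python
# Two hypothetical risks, each with a true annual frequency and a true loss
# per event (values invented for illustration).
risks = {
    "Risk A": {"freq_per_year": 0.30, "loss_usd": 9_000_000},
    "Risk B": {"freq_per_year": 2.00, "loss_usd":   400_000},
}

def ordinal_bucket(value, cutoffs):
    """Map a quantity onto a 1-5 ordinal scale given ascending cutoffs."""
    return 1 + sum(value > c for c in cutoffs)

freq_cutoffs = [0.1, 0.5, 1.0, 5.0]                      # events per year
loss_cutoffs = [10_000, 100_000, 1_000_000, 10_000_000]  # dollars

for name, r in risks.items():
    likelihood = ordinal_bucket(r["freq_per_year"], freq_cutoffs)
    impact     = ordinal_bucket(r["loss_usd"], loss_cutoffs)
    score      = likelihood * impact                  # the common (invalid) move
    expected   = r["freq_per_year"] * r["loss_usd"]   # actual expected loss
    print(f"{name}: ordinal score {likelihood}x{impact}={score:>2}, "
          f"expected annual loss ${expected:,.0f}")
```

Here Risk B scores 12 to Risk A’s 8 on the matrix, even though Risk A’s expected annual loss is more than three times larger.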

I’m at a point where the recommendations that I want to make are basically to tell CSOs to forget it; don’t even bother with G&R—meaning governance and risk, part of the governance, risk and compliance suite. Just focus on understanding visibility into your environment and variability within that.

There was a very large financial institution where I was kind of mentoring the guy in charge of the risk management program. They had a new CSO who’s a really sharp guy, and he had to sit down and listen because they needed to rebuild the risk management program from scratch. The different candidates were talking about it in the room, and one of them said, “We should pick up ISO 27005 and we should do these repeating plan/do/check/act cycles. We should work with an Ernst & Young or some accounting firm to implement our processes. We should hire four people, and our goal by the end of the year is to have 1,000 to 2,000 different risk statements done, and we’ll feed those to you, and you’ll know our risk.”

The guy I was mentoring had come to me the night before and said, “I really don’t know that what everybody else is doing is best for the company.” I said, “You’re probably right. A few guys are horribly siloed; if you just don’t have a configuration management database, if your data is all intuition or based on ad hoc prior distributions because ‘that’s generally how things are in the industry,’ that’s one thing, but it might be better and more informative for the CSO to forget making risk statements and focus on data-gathering and making sure that’s structured. Work hard on making sure the inputs that you’ll eventually need are there, and that there are processes in place to make sure that those inputs are valid.” And that’s what he went to his CSO with. The CSO’s eyes lit up. It was new and it was unique; it wasn’t what he was supposed to hear from the risk management guys, but it was exactly what he wanted and needed.

Hubbard: I probably have to agree with him. When people make statements about what models apply or don’t apply, or what statistical methods apply or don’t apply, it’s often not as if they started from a position of knowledge about how complicated some of the problems are—where, say, Monte Carlo [simulations] are used, or historical models are used. Those are some very complicated models sometimes, but if the standard is not to have a perfect model but to simply outperform human intuition, then all we need to do is identify and remove certain known errors from human intuition.

“If you have 20 sources of error that you know about and you got rid of 10 of them, you have less uncertainty than you had before. So you don’t actually even have to have complete models.”

Doug Hubbard

If you have 20 sources of error that you know about and 100 that you don’t know about, and then you got rid of 10 of the ones that you knew about, you have less uncertainty than you had before. So you don’t actually even have to have complete models.
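A back-of-the-envelope way to see the arithmetic: if the error sources are roughly independent, their variances add, so removing ten known sources shrinks the total even while a hundred unknown sources remain. The per-source variance below is an invented, equal value used only to illustrate the calculation.

```python
import math

# Hypothetical setup: 20 known and 100 unknown error sources, each assumed
# independent and contributing the same variance (made-up units).
per_source_variance = 1.0
known, unknown = 20, 100

before = (known + unknown) * per_source_variance
after  = (known - 10 + unknown) * per_source_variance  # remove 10 known sources

print(f"std. dev. before: {math.sqrt(before):.2f}")
print(f"std. dev. after:  {math.sqrt(after):.2f}")
print(f"uncertainty reduction: {1 - math.sqrt(after / before):.1%}")
```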

That’s why in pharmaceuticals, they don’t have to actually create a complete model of the human body to approve a drug. They have to test it empirically, but there are some basic interpretations of how to use new data. Now in infosec, somebody will say they don’t have enough data to do X. Well, there are actual calculations we can do to figure out what can be inferred from a given amount of data and how much data you would need for a given amount of uncertainty reduction, and how much a given amount of uncertainty reduction would be worth, and whether or not it’s more than the cost of that information. I never see anyone do those calculations before they claim that they don’t have the data to do something.

That’s the only way I know that we don’t have the data to do it. That’s why I’m always telling people, as I said, that they have more data than they think and they need less than they think.

It would be kind of rude for me to say this, but if somebody did say, “We don’t have the data for X,” what I feel like saying most of the time is, “Show me your math.” Because they don’t really know that. It’s not like they really sat down and did the math with the data they had to figure out how much of an uncertainty reduction they can get from it.
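As one illustration of the kind of math Hubbard is asking for (not a calculation from the interview itself): given an assumed spread in loss per incident, you can work out how tight an estimate your existing incidents already support, and how many incidents you would need for a target precision. The standard deviation, incident count and target half-width below are all invented.

```python
import math

# Illustrative only: how much do n observations narrow a 90% interval around
# a mean-loss estimate, and how many observations would a target precision
# require? (All values below are made up for the example.)

z90 = 1.645                      # z-score for a two-sided 90% interval
sigma = 250_000.0                # assumed std. dev. of loss per incident ($)
n_have = 12                      # incidents we actually have data for

half_width_now = z90 * sigma / math.sqrt(n_have)
print(f"90% interval half-width with {n_have} incidents: ±${half_width_now:,.0f}")

target_half_width = 50_000.0     # precision we decide is worth paying for
n_needed = math.ceil((z90 * sigma / target_half_width) ** 2)
print(f"incidents needed for ±${target_half_width:,.0f}: {n_needed}")
```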

Hutton: It’s also very chicken-and-egg for us in infosec because, not to repeat myself, people don’t understand the value of the data that they do have. You can capture billions of log events a day if you’re a big enough company, and that’s all data that’s useful in some context, but not all of it is useful in the right context, and a lot of people don’t know what the right contexts are.

To find sources of data-oriented security research, see The security data and survey directory

To make matters worse, in a sense our standards are a little more [undeveloped]. For example, I was talking with somebody who wouldn’t tell me their involvement with a particular standard, but I asked them, “Well, you’re making a risk management standard; how many times is the term ‘risk analysis’ used in the standard?” And he says, “Three.” And how big is the standard? Seventy pages. I said, “This is informative of something; you have essentially a process document with nothing to do with real risk management.”

“Our [standards-creating bodies] are rushing off to say what risk management is, and we’re going to create a bureaucracy that’s going to be difficult to change and adapt.”

Alex Hutton

The other problem is this: Twelve or 13 years ago, the first vulnerability scanners really started to show up, and they were so miraculous, such a great piece of technology, that they fundamentally changed the way that we looked at infosec. You know, I was lucky enough to be around before vulnerability scanners really took off and then after, and it was such a fundamental change in this industry. Risk analysis really cropped up out of trying to find out how serious that vulnerability is. And so it was asset-based—there’s an asset with vulnerability. It’s a very engineering-based view. Our [standards-creating bodies] are rushing off to say what risk management is, and we’re going to create a bureaucracy that’s going to be difficult to change and adapt.

There’s no model selection, there’s no applicability-update gathering. NIST 800-30 says you should understand some stuff about this, but it doesn’t tell you how to measure it, doesn’t tell you anything around meanings and how to understand if you have a significantly sizable population. None of our standards, not even certifications, tell us how to do these sorts of things. And yet we’re certifying practitioners left and right!

See Part 2 of the discussion.