Settling scores with risk scoring

Opinion
May 09, 2017 | 6 mins
Cyberattacks, Data Center, Internet Security

Hackers don’t care if you reduce your risk score, and neither will regulators or your customers, if you don’t lower it by doing the right things

Risk scores seem to be all the rage right now. Executives want to know what their risk is. The constant stream over the past few years of high-profile breaches and the resulting class action lawsuits, negative PR, drops in share price, cybersecurity insurance payout refusals, and even terminations of liable executives has made this an urgent priority. The problem is that we haven’t really developed a good way to measure risk.

Most risk-scoring approaches are restricted by a very simple limitation: They are not vendor agnostic or universal. The solution used to calculate risk is limited by the data it collects, which can vary widely. What is the risk score composed of? More important, what doesn’t it capture? One vendor will include only network and system vulnerabilities, another bundles application vulnerabilities into the mix, and yet another adds user behavior. Agreeing on the “right” mix still eludes us; there are no authoritative standards that define what should be included. Every scoring methodology is subjective, which is surely a sign of how inherently unscientific the entire approach is.


There is a difference between a risk score based on the selective data a vendor chooses to include and a true risk score. This may seem obvious, yet I have seen very few end-user organizations evaluate how risk scores are calculated, or whether a risk score has any explanatory value, during a proof of concept.

Of course, the other issue is that a consolidated risk score can skew the underlying results. If you base your risk score primarily on the Common Vulnerability Scoring System (CVSS), thousands of low-severity vulnerabilities can yield the same score as a few high-severity ones if the scoring algorithm is unsophisticated. Worse, it can conceal important information: A low risk score based on only a few critical-severity vulnerabilities is dangerously misleading if those vulnerabilities are actively being exploited by a threat actor that is targeting you.
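
To make the aggregation pitfall concrete, here is a minimal sketch in Python; the scores and the roll-up logic are hypothetical, not any vendor’s actual algorithm:

    # Hypothetical illustration: a naive sum-based roll-up makes ten trivial
    # findings indistinguishable from two critical ones.
    low_severity = [2.0] * 10   # ten low-severity findings, CVSS 2.0 each
    critical = [10.0, 10.0]     # two critical findings, CVSS 10.0 each

    print(sum(low_severity))    # 20.0
    print(sum(critical))        # 20.0 -- the same "risk", a very different reality

    # A slightly less naive roll-up at least keeps the worst finding visible:
    def rollup(scores):
        """Return (worst finding, total, count) instead of a single number."""
        return max(scores), sum(scores), len(scores)

    print(rollup(low_severity))  # (2.0, 20.0, 10)
    print(rollup(critical))      # (10.0, 20.0, 2)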

The greatest shortcoming of this model, however, is that a risk score does not really tell you whether you are in danger of being breached or what to do about it. In this sense, it is like taking the temperature of a patient: You can see that he has a fever, but a fever can be a symptom of many ailments. To be truly useful, a temperature reading requires a diagnosis. In other words, a hypothesis about the root cause of the fever is needed before it can be treated.

The religiously fervent believers in risk scoring will argue, “This is surely better than nothing.” That is a surefire indicator that they haven’t really understood the task. Our job is not to reduce risk on paper; it is to reduce real risk. And blindly relying on a numerical score that goes up and down without truly understanding why will only ever reduce risk on paper.

We are fighting paper dragons and letting the real monsters run amok. The best example of this is the what-if scenario: “How do I reduce the most risk with the least effort?” This usually translates into identifying how to reduce that number the most by applying the fewest patches. Great on paper: a 5 percent risk reduction that will surely be appreciated and well received in the investigation after your next breach.

Worse still, it will cost time, effort and money to reduce that number without truly diminishing any real-world risk. It’s the digital equivalent of bloodletting and leeches, the go-to remedies of the medieval quack. Sure, you’ll get rid of the fever, but only because the patient is dying.

The only true use for a risk score

Being pragmatic, the only true use for a risk score is benchmarking. To be more specific, this means using peer data to determine what a good score is, at what score you are deemed secure, and at what score you are at high risk of being breached. If you do not have that peer data, you are only benchmarking against yourself. You are essentially playing solitaire when in reality you have an adversary. At best, you can determine whether you are subjectively better or worse than last month or last year.

Now, don’t get me wrong, that is not without its uses. But it’s also not really a risk score because it represents no actual risk at all.
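
For illustration, here is a minimal sketch of the two benchmarking modes, using entirely made-up numbers where a lower score means less risk:

    # Illustrative data only: peer benchmarking vs. benchmarking against yourself.
    peer_scores = [42, 55, 61, 70, 73, 80, 88, 90, 95, 99]   # lower = less risk
    our_history = {"2017-03": 81, "2017-04": 77, "2017-05": 73}

    # With peer data: where do we sit relative to the field?
    ours = our_history["2017-05"]
    worse = sum(score > ours for score in peer_scores)
    print(f"{worse / len(peer_scores):.0%} of peers score worse than we do")

    # Without peer data: solitaire -- we can only compare against ourselves.
    print(f"Change since last month: {ours - our_history['2017-04']:+d}")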

So a more nuanced approach is required. In a risk-centric approach, risk scores must be calculated against an actual risk based on a specific threat. At a high level, for example: What is our risk against generic phishing? Answering that requires a variety of attributes to be considered, including installed applications, vulnerabilities, system configuration, behavior, exposure and function.
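
A rough sketch of what such a composition could look like follows; the attribute values, their normalization and the equal weighting are assumptions for illustration, not a standard:

    # Hypothetical phishing-risk composition; attribute values (normalized 0..1)
    # and the equal weighting are assumptions for illustration only.
    attributes = {
        "installed_applications": 0.4,  # e.g. an outdated mail client
        "vulnerabilities":        0.7,  # e.g. unpatched, exploitable CVEs
        "system_configuration":   0.3,  # e.g. macros enabled in office documents
        "behavior":               0.8,  # e.g. users click unverified links
        "exposure":               0.9,  # e.g. addresses harvested in past breaches
        "function":               0.6,  # e.g. a finance role, a common lure target
    }
    weight = 1 / len(attributes)        # equal weights, purely for illustration

    phishing_risk = sum(value * weight for value in attributes.values())
    print(f"Risk against generic phishing: {phishing_risk:.2f}")  # 0.62 here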

We can also get even more granular: Are we at risk from a specific malware family or threat actor? In cases where we know which vulnerabilities are being targeted, we can assess whether we are susceptible, and therefore whether we are actually at risk. We can finally make that call. This can, of course, be expressed as a score, and these scores can be consolidated into a single, high-level score. But the granularity and the correlation to specific attack vectors, threats or threat actors are necessary to truly follow a risk-centric approach.
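
At its simplest, making that call is a set intersection. A hypothetical sketch, with made-up CVE lists standing in for what threat intelligence feeds and vulnerability scans would supply:

    # Hypothetical illustration: susceptibility to a specific threat actor.
    # Both sets are made up here; in practice they would come from threat
    # intelligence feeds and your vulnerability scanner.
    actor_exploits = {"CVE-2017-0144", "CVE-2016-0167"}    # CVEs the actor uses
    our_open_vulns = {"CVE-2017-0144", "CVE-2015-1635"}    # unremediated findings

    overlap = actor_exploits & our_open_vulns
    if overlap:
        print(f"Susceptible via {sorted(overlap)}")   # we can finally make that call
    else:
        print("No overlap with this actor's known tooling")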

Most important, there has to be a concept of a “red flag.” If you are a healthcare provider, for example, and you have a vulnerability that ransomware aimed at your industry is actively targeting in your geography, it should have a bigger impact than just raising the score. It’s an imminent threat, and we should be hearing sirens.
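
One way to express that, sketched here as a hypothetical rule rather than any product’s feature, is to treat such a combination as an override that escalates past the numeric score:

    # Hypothetical red-flag rule: some findings should sound sirens,
    # not merely add points to a score.
    def assess(finding, org):
        red_flag = (
            finding["actively_exploited"]
            and org["industry"] in finding["targeted_industries"]
            and org["geography"] in finding["targeted_geographies"]
        )
        return finding["cvss"], red_flag

    org = {"industry": "healthcare", "geography": "US"}
    finding = {
        "cvss": 8.1,
        "actively_exploited": True,
        "targeted_industries": {"healthcare"},
        "targeted_geographies": {"US"},
    }

    score, red_flag = assess(finding, org)
    if red_flag:
        print("RED FLAG: imminent threat -- escalate, do not just rescore")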

Hackers don’t care if you have reduced your risk score, and neither will regulators, the media, or your partners and customers, if you haven’t lowered it by doing the right things.

Contributor

Oliver Rochford is VP of Security Evangelism at DF Labs. He is an expert on threat and vulnerability management, as well as cybersecurity monitoring and operations management. He is the author of the first and second editions of Hacking for Dummies in German and Dutch, and a former research director at Gartner, Inc., where he was lead analyst for threat and vulnerability management and security operations, and collaborated on SIEM and Managed Security Service reports.

The opinions expressed in this blog are those of Oliver Rochford and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.