Kenna Security takes a data-driven approach to risk analysis

Jul 03, 2018 | 10 mins
Risk Management, Security, Vulnerabilities

Risk from security threats is relative to each company. Kenna Security leverages company and public data to pinpoint the real risk for each customer.

[Image: risk assessment gauge. Credit: Thinkstock]

Should you be working harder to patch the huge, recent critical chip flaws like Spectre and Meltdown, or to patch your browser or some other add-on, like Adobe Flash, that is currently causing problems?

I prize risk-based security analytics above all other computer security functions. I define risk-based security as using data about current and most likely future threats to inform defense, and operationalizing your program to protect against those threats first. Not doing so is the biggest deficiency in most security programs. I believe so much in the concept that I wrote a whitepaper and a book about it, and I'm dedicating the remaining 15 years of my professional career to helping companies do better risk analytics.

So, imagine my delight when I ran into a company dedicated to the same concept, Kenna Security, at the last Gartner Security and Risk Management Summit held this month near Washington, D.C. Kenna Security helps customers prioritize and fix the highest-risk vulnerabilities in their environment from among the millions they might be worried about.

Why risk analysis is necessary

According to almost every public computer vulnerability counter available, defenders are threatened by at least 7,000 to 15,000 new software vulnerabilities each year. That equates to roughly 19 to 41 per day, day after day, and it doesn't even include the tens of millions of new malware variants that emerge each year.

Computer defenders face a growing number of new risks every day, and they rarely go away. Almost every computer security hole ever announced has the potential to be a threat, even 10 years later. We are still detecting SQL Slammer and CodeRed worm attacks, even though they were supposedly put down in 2003 and 2001, respectively.

With so many attacks to worry about, it makes sense to concentrate on the ones you are currently facing or are most likely to face in the near future. This risk analysis should be applied to every type of computer threat a company faces, including software vulnerabilities that need to be patched, end-user training issues, strengthening authentication, secure configurations, physical security, and so on. If this sounds like a lot of work, you're already doing it. You just need to adjust what you really should be worried about the most and inform your actions with real threat data.

Risk analysis example: Patching

The typical computer environment has a lot of things to patch, possibly more than you care to think about. The figure below shows all the devices and major platform types that a typical enterprise environment has to worry about patching, including assets not under your control.

[Figure: the devices and major platform types a typical enterprise must patch]

Did I miss something? (Thanks to Sam Newman for giving me the idea about all the things we need to patch.)

Security and complexity don't go well together, and you can see that something as "simple" as patching is the definition of complexity. Add in over 7,000 new vulnerabilities each year and software patching mechanisms that are never 100 percent accurate, and you're fighting a losing battle of attrition over time. The best strategy is to patch the vulnerabilities most likely to be exploited in your specific environment.

So, there might be 7,000 to 15,000 new software vulnerabilities a year, but the reality is that you can patch fewer than 1 percent of them and get almost all the benefit. I generated the figure below from a similar one that Kenna Security presented at its Gartner presentation.

[Figure: focus on active exploits in your environment]

Why Kenna Security gets risk-based vulnerability analysis right

Kenna Security is all about risk-based vulnerability analysis. Kenna's service takes in your vulnerability scanning software's findings and compares what you found in your own environment against a ton of global threat intelligence data. It prioritizes truly high-risk threats actively being exploited "in the wild" against your found vulnerabilities by matching them against multiple threat sources, including MITRE's Common Vulnerabilities and Exposures (CVE), Common Weakness Enumeration (CWE), and Common Platform Enumeration (CPE).

Every vulnerability scanning program claims to prioritize risk. What they really do is repeat purported risk evaluations of individual threats taken directly from threat reports alone. Unfortunately, they use the exploit's ability to potentially cause critical harm as their only metric, and there is a huge gap between what could possibly cause harm to your environment and what will likely cause harm in your environment. For example, a "high-risk, critical" Linux vulnerability isn't all that worrisome in your environment if your environment doesn't contain any Linux devices.

Even if your environment contains Linux devices, a vulnerability with no known public exploit that has never been executed "in the wild" is far less likely to be a threat to your organization, and even then only if your Linux devices are used in mission-critical roles or host critical data. Suppose your only Linux devices are running the cafeteria soda machines.

Kenna Security says 77 percent of publicly known vulnerabilities have no observed or published exploit, and only 0.6 percent of CVEs have exploits executed "in the wild." That's a huge decrease in risk for most vulnerabilities. If you add the fact that your environment might not even be vulnerable to those in-the-wild exploits (perhaps your firewalls block the ports necessary for the threat to succeed), you can begin to see that what you really need to worry about the most is a far smaller subset. Kenna Security covers this concept well in its Prioritization to Prediction whitepaper.
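The winnowing described above can be sketched in a few lines of Python. The field names here are hypothetical, not Kenna's actual data model; the point is simply that stacking filters (a public exploit exists, it has been seen in the wild, the affected asset is actually reachable) shrinks the worry list dramatically.

```python
# Illustrative sketch only: hypothetical fields, not Kenna's real schema.
def prioritize(vulns):
    """Filter a list of vulnerability dicts down to the ones worth
    patching first: a public exploit exists, it has been observed
    in the wild, and the affected asset is actually reachable."""
    return [
        v for v in vulns
        if v["public_exploit"]          # 77% of known vulnerabilities fail here
        and v["exploited_in_wild"]      # only ~0.6% of CVEs pass this one
        and v["asset_reachable"]        # e.g., firewall doesn't block the port
    ]

vulns = [
    {"cve": "CVE-2018-0001", "public_exploit": True,  "exploited_in_wild": True,  "asset_reachable": True},
    {"cve": "CVE-2018-0002", "public_exploit": True,  "exploited_in_wild": False, "asset_reachable": True},
    {"cve": "CVE-2018-0003", "public_exploit": False, "exploited_in_wild": False, "asset_reachable": True},
]

print([v["cve"] for v in prioritize(vulns)])  # ['CVE-2018-0001']
```

Each filter alone cuts the list; combined, they leave only the tiny subset that deserves immediate attention.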

I spoke about Kenna Security's service and risk-based analysis with Jonathan Cran, head of research at Kenna Security. He told me that Kenna's process starts with discovering all the client's devices to be managed. This is done by importing the client's relevant asset data, including vulnerability scanning results and Nmap scans, through an API connection or as a CSV file.

Then the client pulls in its own specific vulnerability data from scans against its own devices. The ways a client can import its local vulnerability results and the global threat feeds are expanding all the time. As Cran put it, "Kenna is built integration-first and directly connects to over 35 vulnerability scanners to pull in asset and vulnerability information. We can pull data from any source if it's useful data...and give our customers a clear picture of their reality based on that data."
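The CSV ingestion step might look something like the following minimal sketch. The column names and row format are assumptions for illustration; every real scanner exports its own format, and Kenna's actual connectors are far richer.

```python
# Sketch of ingesting scanner findings from a CSV export.
# Column names are hypothetical; real scanners each have their own format.
import csv
import io

SAMPLE = """asset,cve,score
web01,CVE-2018-0001,9.8
db01,CVE-2018-0002,7.5
"""

def load_findings(fileobj):
    """Parse one finding per CSV row into a list of dicts."""
    return [
        {"asset": row["asset"], "cve": row["cve"], "score": float(row["score"])}
        for row in csv.DictReader(fileobj)
    ]

findings = load_findings(io.StringIO(SAMPLE))
print(findings[0])  # {'asset': 'web01', 'cve': 'CVE-2018-0001', 'score': 9.8}
```

The same function would work on a real export by passing an open file instead of the in-memory sample.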

Kenna Security then cross-references the customer's data against an array of global threat feeds, analyzing it with dozens of risk-analytics metrics, such as what is being executed in the wild, what's been weaponized into malware, and so on, to deliver an informed threat score. The threat score is calculated on individual assets, on any custom group of assets the customer wants to collect under one label, and for all assets as a whole. The score is tracked over time. In Kenna's security model, threats with a risk score over 660 are "high" and should be addressed immediately, before lesser threats.
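To make the scoring model concrete, here is a hedged sketch of how per-asset scores could roll up to group scores and trigger the 660 "high" cutoff. Kenna's actual scoring algorithm is proprietary; the worst-case (max) aggregation and all asset names here are my own assumptions for illustration.

```python
# Illustrative rollup only: Kenna's real scoring model is proprietary.
# This just shows grouping assets and applying the 660 "high" cutoff.
HIGH_RISK = 660

assets = [
    {"name": "web01", "group": "dmz",      "risk_score": 810},
    {"name": "web02", "group": "dmz",      "risk_score": 540},
    {"name": "hr01",  "group": "internal", "risk_score": 320},
]

def group_scores(assets):
    """Roll each group's score up as the max of its members
    (a worst-case aggregation -- an assumption, not Kenna's method)."""
    scores = {}
    for a in assets:
        scores[a["group"]] = max(scores.get(a["group"], 0), a["risk_score"])
    return scores

def needs_immediate_fix(assets):
    """Assets whose score exceeds the 'high' threshold get fixed first."""
    return [a["name"] for a in assets if a["risk_score"] > HIGH_RISK]

print(group_scores(assets))         # {'dmz': 810, 'internal': 320}
print(needs_immediate_fix(assets))  # ['web01']
```

Tracking these numbers over time, as Kenna does, would simply mean recomputing and storing the scores at each scan interval.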

Kenna says its model is even helping developers and application owners to better self-service their own applications and infrastructure, taking some of the pressure off the infrastructure patching team. Cran says, "Management can say, 'Go patch everything that has a score of 660 or higher, and when you do that you're in compliance.'" Kenna has over 350 customers now, including many Fortune 500 clients.

In my mind there is one crucial piece of the puzzle missing, and it's the client's reporting of actual local compromises. When you upload vulnerability scanning data, it doesn't tell you which asset has been or is currently actively compromised. That's a big piece of the focused risk puzzle. I asked Cran if Kenna was looking at that component.

Cran says, "Absolutely. Customers are starting to ask about local context. We are currently beta testing bringing SIEM data into the platform and allowing it to be used in the prioritization model. The future is in allowing the local event data from intrusion detection and endpoint protection software to be incorporated."

I think Kenna has the best chance of pulling off that ultimate dream. No one that I know of is currently doing it better. I've been impressed by a bunch of newer companies I've recently learned about, and you'll be hearing about them over the next few months, but it's going to be hard to beat Kenna. I truly believe every company should be using its capabilities.

Patch Meltdown or something else?

So, to answer the question posed at the very beginning of this column: Should you patch the critical chip flaws first, or some other more regular software flaw? If you look at the current data, where Meltdown and Spectre have yet to be exploited in the wild, you probably shouldn't be as worried about them as most of the world seems to want you to be.

What if the risk analysis is wrong? What if I (or Kenna) tell you that Meltdown and Spectre aren't as important as other things to patch, and, relying on that advice, you deprioritize them, and then your company gets owned by one of the chip flaws? Well, that is a possible outcome.

The whole idea is that risk analytics is trying to help you concentrate on the most likely, most important things, based on your own data, without relying on gut feelings alone. Yes, it's possible that you could be compromised by some lesser-ranked risk. It does happen. What am I (or Kenna) supposed to do, tell you to concentrate more on things the data says are less likely to threaten your environment? That would be crazy.

Risk is called risk for a reason. With any control and action, you always end up with a portion of risk you can't eliminate. So, you can either see whether additional controls can be instituted at an appropriate cost, make some other party responsible for the risk (e.g., cybersecurity insurance), or simply accept it (i.e., residual risk). You're already doing that for every threat and control you have. Why not have good data about your own local experiences to back up what you concentrate on first and most?

We're not saying to neglect Meltdown and Spectre. They could be big future threats. We're just saying that in the absence of other good data to suggest differently, concentrate on the threats that are likely to cause the most harm now.

If you don't already have good localized data and threat intelligence expertise to help you rank the threats that your environment should be worried about the most, you should check out Kenna Security.

Fight the good fight!


Roger A. Grimes is a contributing editor. Roger holds more than 40 computer certifications and has authored ten books on computer security. He has been fighting malware and malicious hackers since 1987, beginning with disassembling early DOS viruses. He specializes in protecting host computers from hackers and malware, and consults to companies from the Fortune 100 to small businesses. A frequent industry speaker and educator, Roger currently works for KnowBe4 as the Data-Driven Defense Evangelist and is the author of Cryptography Apocalypse.
