Basic Guide to Days of Risk

Days-of-Risk (DoR) measures the period of greatly increased risk that runs from when a vulnerability is publicly disclosed (and thus known and available to millions of script kiddies and other malicious attackers) until a vendor patch is available to close the vulnerability.

That seems simple enough, and it can be, but in discussions with numerous people over the past few years, I've encountered several follow-on questions that may be interesting to a broader audience, so I've written this guide to provide a basic overview and introduce some of the considerations.

Definition: Days-of-Risk

First I'd like to start out with a definition.  I like the vulnerability lifecycle chart from Bruce Schneier's September 2000 Crypto-Gram Newsletter, because it illustrates key points where risk might increase dramatically.

[Figure: Vulnerability lifecycle chart, from Schneier's September 2000 Crypto-Gram Newsletter]

Essentially, days-of-risk as it is commonly used is the time from "Vulnerability announced" until the "Vendor patches vulnerability" points on Schneier's chart.  In the studies referenced here and in ongoing work, that is the primary metric discussed.

However, you can break out a couple of other time periods that are also interesting to security.  Frei, May, Fiedler and Plattner use the terms "black risk", "gray risk" and "white risk" in their Large Scale Vulnerability Analysis [1], published at the SIGCOMM'06 Workshops in September 2006:

  • Black Risk is the time period from discovery to disclosure, when only a small closed group is aware of the vulnerability and able to exploit it.
  • Gray Risk is the time period commonly measured as the "days-of-risk", when the vulnerability is widely and publicly known within the security community, but a vendor patch or full mitigation is not yet available.
  • White Risk is the time period after a patch or full mitigation is available, but before a user has applied it to his system(s).  I have also heard this time period referred to as "user days-of-risk", in reference to the fact that this is related to the user deployment process.
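The three periods above are just date arithmetic once the four lifecycle milestones are known.  As a minimal sketch (the function name and the example dates are mine, purely for illustration):

```python
from datetime import date

def risk_windows(discovered, disclosed, patched, deployed):
    """Split a vulnerability's lifetime into the three risk periods
    described by Frei et al.: black, gray ("days-of-risk"), and white."""
    return {
        "black_risk_days": (disclosed - discovered).days,  # known only to a closed group
        "gray_risk_days": (patched - disclosed).days,      # publicly known, no patch yet
        "white_risk_days": (deployed - patched).days,      # patch available, not yet applied
    }

# Hypothetical dates, for illustration only
windows = risk_windows(
    discovered=date(2006, 1, 10),
    disclosed=date(2006, 2, 1),
    patched=date(2006, 3, 15),
    deployed=date(2006, 4, 1),
)
print(windows)  # {'black_risk_days': 22, 'gray_risk_days': 42, 'white_risk_days': 17}
```

The hard part in practice is not the subtraction but pinning down the milestone dates, especially the discovery date, which is rarely public.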

I believe the potential usefulness of days-of-risk (and related variations) as a metric is to monitor vendor performance and drive vendors to optimize toward reduced user risk.  Though days-of-risk is most obviously related to the vendor maintenance and security response process, it is worth noting that there are actions that can help reduce black risk and white risk as well:

  • While black risk would be hard to measure, if vendors are able to reduce their total number of vulnerabilities, it would naturally reduce the opportunities for vulnerability discovery.
  • White risk will be driven largely by user process, but vendor patch deployment tools and technology can also help reduce the period of white risk.

Background: Days of Hacker Recess

The first usage I am aware of in measuring days-of-risk didn't use the term, but instead referred to it as "days of hacker recess."  Back in January 2000, SecurityPortal.com analyst and founder Jim Reavis asked the question "Linux vs. Microsoft: Who Solves Security Problems Faster?"  The article is still hosted [2] on the www.reavis.org web site, along with detailed tables.

A couple of things to note on this study:

  • It looked at all of the advisories from a vendor, not just for a particular product
  • It was done at the level of advisory, not per individual vulnerability.  For example, MS99-039 addressed two vulnerabilities.

The findings?  During 1999, Red Hat had 31 advisories with an average of 11.23 DoR, Microsoft had 61 advisories with an average of 16.1 DoR and Sun had 8 advisories with an average of 89.5 DoR.  In Mr. Reavis' words:

We think an entire year of data, while not conclusive, provides a fairly good indication that Open Source software can have its security vulnerabilities identified and repaired in a more timely manner than traditional closed source software.
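Mechanically, the Reavis-style numbers are just per-advisory day counts averaged per vendor.  Here is a sketch of that computation; the vendor names and dates below are hypothetical stand-ins, not the 1999 data:

```python
from datetime import date
from statistics import mean

# Hypothetical advisory data: (disclosure date, patch date) pairs per vendor.
advisories = {
    "VendorA": [(date(1999, 3, 1), date(1999, 3, 8)),
                (date(1999, 6, 10), date(1999, 6, 25))],
    "VendorB": [(date(1999, 2, 5), date(1999, 5, 6))],
}

for vendor, pairs in advisories.items():
    # Days-of-risk per advisory: patch date minus disclosure date.
    dor = [(patched - disclosed).days for disclosed, patched in pairs]
    print(f"{vendor}: {len(dor)} advisories, average {mean(dor):.2f} DoR")
```

Note that working at the advisory level (as Reavis did) versus the individual-vulnerability level (as later studies did) changes what each element of the list represents, and can shift the averages.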

Name Change: Days-of-Risk

It wasn't until four years later, in March of 2004, that Days-of-Risk re-entered the security discussion when Forrester Research published "Is Linux More Secure Than Windows?"  Forrester studied all security advisories from Debian, MandrakeSoft, Red Hat, SUSE and Microsoft over a one year period, extracting and cross-referencing vulnerabilities with entries in the Mitre CVE list.  Additionally, they gathered information from many public full-disclosure and security web sites and mailing lists to compile a database of when each vulnerability was "publicly disclosed" to a broad set of people.

Similar to the Reavis study, Forrester looked at all security advisories released by the companies and not just individual products (e.g. operating systems), but they went further and looked at individual vulnerabilities.  They also went through a data validation process with each vendor.

The study garnered a lot of discussion at the time because, contrary to the popular expectation, the results demonstrated that Microsoft customers experience fewer days of elevated risk from publicly disclosed vulnerabilities.  Here is a summary table:

Vendor          Days-of-Risk
Microsoft       25
Red Hat         57
Debian          57
MandrakeSoft    82
SUSE            74

In fact, the four Linux distribution vendors got together and issued a common statement regarding the Forrester report, which is still available on Novell's web site.  Please read it yourself, but I believe the core objections raised were:

  • "Each vulnerability gets individually investigated and evaluated; the severity of the vulnerability is then determined ... This severity is then used to determine the priority at which a fix for a vulnerability is being worked on ... This prioritization means that lower severity issues will often be delayed to let the more important issues get resolved first."
  • "For each vendor the report gives just a simple average, the "All/Distribution days of risk", which gives an inconclusive picture of the reality that users experience."
  • "The average erroneously treats all vulnerabilities as equal, regardless of the risk."

I will observe that the first objection ("we prioritize high severity issues") should be true of non-Linux vendors as well, so it isn't clear how it explains any differences.  As for the objections to the simple average, I agree that more sophisticated analysis would illuminate the picture further.  However, if you could have only one measure, it would probably be the average.
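To make the vendors' objection concrete: a simple average lets a few slow fixes for low-severity issues swamp fast fixes for critical ones.  A sketch of one alternative their statement points toward, reporting an average per severity bucket (the numbers and severity labels here are invented for illustration):

```python
from statistics import mean

# Hypothetical per-vulnerability records: (days-of-risk, severity).
vulns = [(3, "critical"), (10, "critical"), (45, "low"), (120, "low")]

# The single number the Forrester report gave per vendor.
simple_avg = mean(d for d, _ in vulns)

# Grouping by severity tells a very different story.
by_severity = {}
for days, sev in vulns:
    by_severity.setdefault(sev, []).append(days)
per_bucket = {sev: mean(days) for sev, days in by_severity.items()}

print(simple_avg)   # 44.5
print(per_bucket)   # {'critical': 6.5, 'low': 82.5}
```

In this made-up example the vendor looks slow on the overall average while actually turning critical fixes around in under a week, which is exactly the distortion the joint statement complained about.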

Other Works and Studies

The Forrester study put days-of-risk squarely on the map as a metric, and there has been further work in the area since they published their comprehensive study in 2004.  My list is probably not comprehensive, but here are some of the key ones:

  • Microsoft commissioned Security Innovation's Hugh Thompson and Dr. Richard Ford to dig into product specific role-based comparisons of Web and Database Servers in 2005.  Among other metrics, they provided days-of-risk analysis for the cut-down servers they studied.
  • Mark Cox/Red Hat - http://people.redhat.com/mjc/metrics.html  Mark Cox and team have been publishing their underlying data since at least February 2005.  Summary pages are also auto-generated - here is one on RHEL4.  He also provides breakdowns by their Severity ratings.  Truly, Mark sets the bar for other Linux vendors when it comes to transparency and information.

Ongoing Research and Open Questions

I think I've hit the basics that will enable me to go on to some of the more interesting questions in future articles and posts.  Here are some of the ones I've been thinking about in terms of days-of-risk:

  • Do vendors really give higher priority to High severity issues?  Did they always?  Can we see a relationship between DoR studies and vendor improvement?
  • Do vendors treat all products the same?  Do they give higher priority to newer products?  Products of a certain type?
  • Some say that a better metric would measure how long a vendor takes to release a fix from when they learn of the vulnerability, instead of from when it is made public.  What are the differences in terms of security and customer risk?
  • How does one count the risk for vulnerabilities that were broadly disclosed before the product even shipped, but not fixed?  (There can't be risk before the product is released, can there?)
  • What if a vulnerability affects the shared code in different components?  Are they always fixed at the same time?  If not, when is the vulnerability considered "patched" ?

I look forward to your thoughts and comments and suggestions for other considerations with days-of-risk.

Regards ~ Jeff

Footnotes

1. Applications, Technologies, Architectures, and Protocols for Computer Communication; Proceedings of the 2006 SIGCOMM Workshop on Large-Scale Attack Defense; Pisa, Italy; pp. 131-138; 2006; ISBN 1-59593-571-1.

2. Original citation is J. Reavis, "Linux vs. Microsoft: Who Solves Security Problems Faster?" SecurityPortal, 17 Jan. 2000, www.securityportal.com/cover/coverstory20000117.html (current 11 July 2001).  However, www.securityportal.com is no longer active.

Copyright © 2007 IDG Communications, Inc.
