Days-of-Risk (DoR) is a measure of the period of greatly increased risk from when a vulnerability is publicly disclosed (and thus known and available to millions of script kiddies and other malicious attackers) until a vendor patch is available to close the vulnerability.

That seems simple enough, and it can be, but in discussions with numerous people over the past few years, I've encountered several follow-on questions that may be interesting to a broader audience, so I've written this guide to provide a basic overview and introduce some of the considerations.

Definition: Days-of-Risk

First, I'd like to start with a definition. I like the vulnerability lifecycle chart from Bruce Schneier's September 2000 Crypto-Gram Newsletter, because it illustrates the key points where risk might increase dramatically. Essentially, days-of-risk as it is commonly used is the time from the "Vulnerability announced" point until the "Vendor patches vulnerability" point on Schneier's chart. In the studies referenced here and in ongoing work, that is the primary metric discussed.

However, you can break out a couple of other time periods that are also interesting to security. Frei, May, Fiedler and Plattner use the terms "black risk", "grey risk" and "white risk" in their Large Scale Vulnerability Analysis[1], published at the SIGCOMM'06 Workshops in September 2006:

- Black Risk is the time period from discovery to disclosure, when only a small closed group is aware of the vulnerability and able to exploit it.
- Grey Risk is the time period commonly measured as "days-of-risk", when the vulnerability is widely and publicly known within the security community, but a vendor patch or full mitigation is not yet available.
- White Risk is the time period after a patch or full mitigation is available, but before a user has applied it to his system(s). I have also heard this period referred to as "user days-of-risk", in reference to the fact that it is driven by the user's deployment process.

I believe the potential usefulness of days-of-risk (and related variations) as a metric is to monitor and drive vendor improvements that reduce user risk. Though days-of-risk is most obviously tied to the vendor maintenance and security response process, it is worth noting that there are actions that can help reduce black risk and white risk as well:

- While black risk would be hard to measure, if vendors reduce their total number of vulnerabilities, they naturally reduce the opportunities for vulnerability discovery.
- White risk is driven largely by user process, but vendor patch deployment tools and technology can also help shorten it.

Background: Days of Hacker Recess

The first measurement of days-of-risk I am aware of didn't use the term, but instead referred to it as "days of hacker recess." Back in January 2000, SecurityPortal.com analyst and founder Jim Reavis asked the question "Linux vs. Microsoft: Who Solves Security Problems Faster?" The article is still hosted[2] on the www.reavis.org web site, along with detailed tables.

A couple of things to note about this study:

- It looked at all of the advisories from a vendor, not just those for a particular product.
- It was done at the level of the advisory, not per individual vulnerability. For example, MS99-039 addressed two vulnerabilities.
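The arithmetic behind these advisory-level studies is straightforward: for each advisory, count the days from public disclosure until the vendor's patch, then average across advisories. A minimal sketch in Python, using hypothetical dates rather than any study's actual data:

```python
from datetime import date

# Hypothetical advisory data: (public disclosure date, vendor patch date)
advisories = [
    (date(1999, 3, 1),  date(1999, 3, 12)),
    (date(1999, 6, 15), date(1999, 6, 20)),
    (date(1999, 9, 10), date(1999, 10, 2)),
]

# Days-of-risk for one advisory: patch date minus disclosure date
days_of_risk = [(patched - disclosed).days for disclosed, patched in advisories]

# The headline figure reported per vendor is the simple average
average_dor = sum(days_of_risk) / len(days_of_risk)
print(days_of_risk, round(average_dor, 2))
```

The same subtraction gives black risk (discovery to disclosure) or white risk (patch to deployment) if you substitute the corresponding lifecycle dates; the hard part in practice is establishing the disclosure date itself, not the arithmetic.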
The findings? During 1999, Red Hat had 31 advisories with an average of 11.23 DoR, Microsoft had 61 advisories with an average of 16.1 DoR, and Sun had 8 advisories with an average of 89.5 DoR. In Mr. Reavis' words:

"We think an entire year of data, while not conclusive, provides a fairly good indication that Open Source software can have its security vulnerabilities identified and repaired in a more timely manner than traditional closed source software."

Name Change: Days-of-Risk

It wasn't until four years later, in March 2004, that days-of-risk re-entered the security discussion, when Forrester Research published "Is Linux More Secure Than Windows?" Forrester studied all security advisories from Debian, MandrakeSoft, Red Hat, SUSE and Microsoft over a one-year period, extracting and cross-referencing vulnerabilities with entries in the MITRE CVE list. Additionally, they gathered information from many public full-disclosure and security web sites and mailing lists to compile a database of when each vulnerability was "publicly disclosed" to a broad set of people.

Similar to the Reavis study, Forrester looked at all security advisories released by the companies and not just individual products (e.g.
operating systems), but they went further and looked at individual vulnerabilities. They also went through a data validation process with each vendor.

The study garnered a lot of discussion at the time because, contrary to popular expectation, the results demonstrated that Microsoft customers experienced fewer days of elevated risk from publicly disclosed vulnerabilities. Here is a summary table:

Vendor         Days-of-Risk
Microsoft      25
Red Hat        57
Debian         57
MandrakeSoft   82
SUSE           74

In fact, the four Linux distribution vendors got together and issued a common statement regarding the Forrester report, which is still available on Novell's web site. Please read it yourself, but I believe the core objections raised were:

- "Each vulnerability gets individually investigated and evaluated; the severity of the vulnerability is then determined ... This severity is then used to determine the priority at which a fix for a vulnerability is being worked on ... This prioritization means that lower severity issues will often be delayed to let the more important issues get resolved first."
- "For each vendor the report gives just a simple average, the 'All/Distribution days of risk', which gives an inconclusive picture of the reality that users experience."
- "The average erroneously treats all vulnerabilities as equal, regardless of the risk."

I will observe that the first objection, "we prioritize high severity issues", should be true of non-Linux vendors as well, so it isn't clear how that explains any differences. As for the latter two, I can agree that more sophisticated analysis than a simple average would illuminate things even further. However, if you could have only one measure, it would probably be the average.

Other Works and Studies

The Forrester study put days-of-risk squarely on the map as a metric, and there has been further work in the area since they published their comprehensive study in 2004. My list is probably not complete, but here are some of the key ones:

- Microsoft commissioned Security Innovation's Hugh Thompson and Dr. Richard Ford to dig into product-specific, role-based comparisons of Web and database servers in 2005. Among other metrics, they provided days-of-risk analysis for the cut-down servers they studied.
- Mark Cox/Red Hat - https://people.redhat.com/mjc/metrics.html. Mark Cox and team have been publishing their underlying data since at least February 2005. Summary pages are also auto-generated - here is one on RHEL4. He also provides breakdowns by their severity ratings. Truly, Mark sets the bar for other Linux vendors when it comes to transparency and information.

Ongoing Research and Open Questions

I think I've hit the basics that will enable me to go on to some of the more interesting questions in future articles and posts. Here are some of the ones I've been thinking about in terms of days-of-risk:

- Do vendors really give higher priority to High severity issues? Did they always? Can we see a relationship between DoR studies and vendor improvement?
- Do vendors treat all products the same? Do they give higher priority to newer products? To products of a certain type?
- Some say a better metric would be to measure how long a vendor takes to release a fix from when they first learn of a vulnerability, instead of from when it is made public. What are the differences in terms of security and customer risk?
- How does one count the risk for vulnerabilities that were broadly disclosed before the product even shipped, but not fixed? (There can't be risk before the product is released, can there?)
- What if a vulnerability affects shared code used by different components? Are the components always fixed at the same time? If not, when is the vulnerability considered "patched"?

I look forward to your thoughts, comments and suggestions for other considerations with days-of-risk.

Regards ~ Jeff

Footnotes

1. Applications, Technologies, Architectures, and Protocols for Computer Communication; Proceedings of the 2006 SIGCOMM Workshop on Large-Scale Attack Defense; Pisa, Italy; pages 131-138; year of publication: 2006; ISBN 1-59593-571-1.

2. Original citation is J. Reavis, "Linux vs. Microsoft: Who Solves Security Problems Faster?" SecurityPortal, 17 Jan. 2000, www.securityportal.com/cover/coverstory20000117.html (current 11 July 2001). However, www.securityportal.com is no longer active.