Recently, Red Hat has raised some objections to my use, in analysis, of the High, Medium and Low severity ratings as determined by the National Institute of Standards and Technology (NIST) for the National Vulnerability Database (NVD), found at https://nvd.nist.gov/. So, let me say that, in my opinion, some of the concerns raised by Red Hat have merit and mirror some of the issues I've raised myself. I'm going to dig deeper into those in this post.

On the other hand, Red Hat's motivation seems to be to impugn vulnerability comparisons where Red Hat might not come out on top, rather than to constructively identify the issues and propose some alternative that might work better. So I think a deeper look might be interesting, and I'll dig into that in Part 2.

Microsoft Rating System

The first and easiest point to understand is that severity rating systems have different origins and goals. Way back when, Microsoft issued Security Bulletins without severity ratings. The NVD did not yet exist, nor did Secunia.

Then, in October 2001, driven by customer feedback, Microsoft began rating bulletins with Critical, Moderate and Low severity ratings. Note that, in a general way, these ratings were similar to the NVD rating system of today: Low could have little impact, Critical was for serious issues, and everything in between was Moderate.

By November of 2002, customer feedback had induced Microsoft to distinguish serious issues between "those that could be used to spread a worm" and "those that could not," so a fourth label, Important, was defined for serious issues that did not enable the possibility of a worm. Since then, Microsoft has used the severity ratings in two ways: a bulletin rating, which represents the maximum severity of any issue addressed in the bulletin, and vulnerability ratings, which identify the severity of individual vulnerabilities. The different ratings are useful for customers that may have one product and not another, where
the ratings apply differently for each product (more on this later). Current definitions from Microsoft are here.

In practice, Microsoft seems to err on the side of higher severity when applying its ratings. Take the recent CVE-2007-3893, addressed in MS07-057, for example. It is definitely serious, a "click to own" vulnerability. Technically, though, it requires a user action to trigger the attack: either browsing a web site or possibly opening a mail message. If it requires user action, that would make it "Important," but it is rated "Critical."

Red Hat Ratings

Red Hat did not have severity ratings at all for their advisories until February 2005, when they added them around the launch of RHEL4. In a move that either followed Microsoft's lead or was amazingly coincidental, Red Hat adopted four severity ratings: Critical, Important, Moderate and Low. The definitions are here. They are very similar to Microsoft's, but not exactly the same. In a move useful mostly to data junkies such as myself, in September 2005 Red Hat also updated all of their pre-February advisories, so that everyone could look back in time and see, for example, that Red Hat thought the CVS update of 6/9/2004 had Critical impact.

Novell SUSE Ratings

In stark contrast to Red Hat, there are the Novell SUSE severity ratings. Novell provides two different rating systems, of a sort:

Severity 1-10: In SUSE Security Announcements, Novell provides a severity rating of 1-10, with 10 being the most severe. Thomas Biege of the SUSE security team wrote a paper called Security Vulnerability Severity Classification to explain their rating system. Frankly, it is pretty complicated and, if I read it correctly, should result in a range from -4 through 12, with some undocumented process to map those values to 1 through 10. However, though I can't apply the 'methodology' myself,
I do gather that 7-10 is Critical, 4-7 is Moderate and 0-4 is Low.

"Minor Issues": In SUSE Security Summary Reports, there is a paragraph that says: "To avoid flooding mailing lists with SUSE Security Announcements for minor issues, SUSE Security releases weekly summary reports for the low profile vulnerability fixes." So, basically, any issue addressed in a summary report is considered "minor" by SUSE. It isn't really clear whether a "minor issue" is somehow less than a 0 or 1 in a Security Announcement. It is further confusing when compared with other sources. For example, in SUSE-SR:2005:003, two of the 'minor' security vulnerabilities addressed (CVE-2004-1125 and CVE-2004-1267) are in the CUPS (printing) component. Red Hat also addressed these two issues, in RHSA-2005:003, but rated them as Important. The NVD gives both issues a 10.0 CVSS rating, the highest possible.

As far as I can tell, there were no significant architectural differences between the Red Hat and SUSE implementations at the time, so ...
you can draw your own conclusions.

Third-Party Rating Systems

National Vulnerability Database HML - Originally, the NVD assigned three general ratings, High, Medium and Low, that essentially translated to:

- Low: stuff nobody really gets excited about
- High: stuff that could enable a local or remote user to own a system
- Medium: stuff that isn't Low or High

Personally, I have always liked these definitions because they are simple and easy to understand. Their drawback is that they don't provide much detail.

National Vulnerability Database CVSS - CVSS is up to version 2 and is fairly complicated. You can get details here. I've never been a big fan of CVSS for various reasons, but to simplify completely: I think it provides very specific differentiation where it should not necessarily exist. This makes it harder to leverage in risk assessment, rather than easier, and can be misleading to those not fully conversant with all the details. In certain situations and under certain policies, for example, a score of 5.1 might be a higher priority for a company than a score of 7.8, which is counter-intuitive in the extreme.

Secunia - Secunia provides severity ratings for their advisories, as opposed to specific vulnerabilities. Very generally, this is comparable to the maximum bulletin rating provided by Microsoft or the RHSA advisory rating provided by Red Hat. These are not completely comparable to NVD ratings, since they aren't necessarily specific to an individual vulnerability. Secunia rating definitions can be found here and range from "Not Critical (1 of 5)" to "Extremely Critical (5 of 5)".

ISS X-Force - ISS provides a couple of different ratings. Advisories now seem to provide a CVSS score along with the detailed factors behind the score. This example provides such scoring for a combination of three vulnerabilities. If I click through to their database entry for one of the vulns, they use High, Medium and
Low risk ratings. I couldn't find a definition for these, and I couldn't easily find a way to look them up by CVE identifier either.

There are other good sources of detailed information aside from ratings. For example, www.securityfocus.com does not provide ratings at all, but does provide discussion, references and potential exploit information.

Issues with "One Size Fits All" Rating Systems

One of the key problems with most severity rating systems in use today is that they assign a single rating to a given vulnerability. This is not a new issue, but it has become more relevant in recent years, in my opinion due to two key factors:

- Long support cycles for products, combined with
- Advances in security architecture and protection mechanisms

These two bits of context combine to create a situation where the same bit of code might have a vulnerability, but that vulnerability exists in two very different architectural situations.

Take MS03-045, for example. This Security Bulletin (SB), released in October 2003, addresses CVE-2003-0659, which in the worst case allows local escalation of privilege. The bulletin itself has a maximum severity rating of Important. When we look this up in the NVD, it shows as a "High" severity issue, with a CVSS score of 7.2 and an impact subscore of 10.0. An ISS entry on the vulnerability assigns it a "High Risk" rating, their highest. Digging into the details of the SB, however, we find some very important architecture-differentiated details:

- Windows 2000: The Utility Manager on Windows 2000 runs with elevated privilege, and because of this, the vuln can be used by local users to escalate to that privilege level. This is the severe case that drove Microsoft to rate the vuln on this platform as Important, which also established the maximum bulletin severity.
- Windows XP and 2003: Utility Manager runs in the context of the logged-on user and does not allow for elevation of privilege. Rated Low severity.

With this level of detail, one can easily see how an IT team with Windows 2000 machines might want to react with a higher priority than one that had only Windows XP and Windows Server 2003 machines. If the team were leveraging only the NVD rating or only the ISS rating, the ratings would not be providing the architecture-specific information needed to do a prioritization specific to their environment.

This issue applies to other vendors as well. Red Hat refers to it in this article, saying:

"We've seen an Apache vulnerability that leads to arbitrary code execution on older FreeBSD, that causes a denial of service on Windows, but that was unexploitable on Linux for example. But this flaw had a single CVE identifier."

In fact, over time this issue may be worse for open source components, because the model allows for:

- Code forking into completely different products based upon other products. Consider Firefox and Iceweasel.
- Tweaking by different vendors. For example, Red Hat and Ubuntu might not compile in and enable the same options, even on the same version of the kernel.
- Vendor-specific patch differences ... think "based on kernel 2.6 ..."
- Pursuit of different security architectures. Is PaX pre-installed? Is SELinux used, or perhaps AppArmor? That could affect severity.
- Longer support lifecycle commitments for different releases. Red Hat still ships patches for Firefox 1.5, though Mozilla stopped support in May.
Each of these differences means that a single software vulnerability could have many permutations of impact and several different possible impact severities.

What to Use?

Ultimately, all severity rating systems are meant to be tools to help you prioritize, and even then they can't be used without a lot of other information. I personally think Microsoft does the best job of providing a broad set of information in their Security Bulletins. While all of the rating systems provide some utility, my first recommendation would be that, if your vendor provides it, you should understand the vendor rating and leverage it first. In the best cases (i.e., not Novell SUSE), the vendor will provide you an extra level of detail, with different ratings for different products and individual vulnerabilities where appropriate. Additionally, vendors can tailor their rating systems based upon customer feedback, as Microsoft has done over the years, whereas third-party rating systems must focus on being generic across products.

So, am I saying there is no need for third-party severity ratings? Not necessarily; I am just saying the different rating systems have different utility depending on your goals. Stay tuned for Part 2, where I will discuss some situations where third-party ratings have more utility than vendor ratings, especially when making cross-vendor comparisons.
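As a footnote to the CVSS scores quoted throughout this post: the v2 base score that the NVD reports is not a judgment call but the output of a published equation over six base metrics. The sketch below (my own illustration, not anything the NVD publishes as code) applies the base-metric weights and base-score equation from the CVSS v2 specification; metric and function names are mine.

```python
# Sketch of the CVSS v2 base-score calculation, using the metric
# weights and equation published in the CVSS v2 specification.
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}
CIA_IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def cvss_v2_base(av, ac, au, conf, integ, avail):
    """Return the CVSS v2 base score for the six base metrics."""
    impact = 10.41 * (1 - (1 - CIA_IMPACT[conf])
                        * (1 - CIA_IMPACT[integ])
                        * (1 - CIA_IMPACT[avail]))
    exploitability = (20 * ACCESS_VECTOR[av]
                         * ACCESS_COMPLEXITY[ac]
                         * AUTHENTICATION[au])
    f = 0.0 if impact == 0 else 1.176  # f(Impact) per the spec
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# CVE-2003-0659 (MS03-045): local attack vector, complete C/I/A impact.
print(cvss_v2_base("local", "low", "none",
                   "complete", "complete", "complete"))    # 7.2
# CVE-2004-1125 (CUPS): network attack vector, complete C/I/A impact.
print(cvss_v2_base("network", "low", "none",
                   "complete", "complete", "complete"))    # 10.0
```

The two sample vectors reproduce the scores discussed above: the local-only MS03-045 vulnerability lands at 7.2 (its impact subscore alone is 10.0, which is why the NVD rating looks so severe), while the remotely reachable CUPS issue maxes out at 10.0. The only difference between them is the access-vector weight, which illustrates how mechanical, and how coarse in some dimensions, the differentiation is.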