On Vendor and Third-Party Severity Rating Systems - Part 1

Recently, Red Hat has raised some objections to my use in analysis of the High, Medium and Low severity ratings determined by the National Institute of Standards and Technology (NIST) for the National Vulnerability Database (NVD) - found at http://nvd.nist.gov/

So, let me say that in my opinion, some of the concerns raised by Red Hat have merit and mirror some of the issues I've raised myself.  I'm going to dig deeper into those in this post.

On the other hand, Red Hat's motivation seems to be to impugn vulnerability comparisons where Red Hat might not come out on top, rather than to constructively identify the issues and propose an alternative that might work better.  A deeper look might be interesting, so I'll dig into that in Part 2.

Microsoft Rating System

The first point, and the easiest to understand, is that severity rating systems have different origins and goals.  Way back when, Microsoft issued Security Bulletins without severity ratings.  The NVD did not yet exist, nor did Secunia.

Then, in October 2001, driven by customer feedback, Microsoft began rating bulletins with Critical, Moderate and Low severity ratings.  Note that, in a general way, these ratings were similar to the NVD rating system of today - Low meant little impact, Critical was for serious issues, and everything in between was Moderate.

By November of 2002, customer feedback had induced Microsoft to distinguish, among serious issues, between "those that could be used to spread a worm" and "those that could not," so a fourth label - Important - was defined for serious issues that did not enable the possibility of a worm.  Since then, Microsoft has utilized the severity ratings in two ways:

  • a Bulletin rating, which represents the maximum severity of any issue addressed in the bulletin, and
  • vulnerability ratings, which identify the severity of individual vulnerabilities

The different ratings are useful for customers that may have one product and not another, where the ratings apply differently for each product (more on this later).  Current definitions from Microsoft are here.

Practically, Microsoft seems to err on the side of higher severity when applying severity ratings.  Take the recent CVE-2007-3893, addressed in MS07-057, for example.  It is definitely serious - a "click to own" vulnerability.  Technically, though, it requires a user action to trigger the attack - either browsing a web site or possibly opening an email message.  If it requires user action, that would make it "Important" - but it is rated "Critical."

Red Hat Ratings

Red Hat did not have severity ratings at all for their advisories until February 2005, when they added them around the launch time of RHEL4.  In a move that either followed Microsoft's lead or was amazingly coincidental, Red Hat adopted four severity ratings: Critical, Important, Moderate and Low.  The definitions are here.  They are very similar to Microsoft's, but not exactly the same.

In a move useful mostly to data junkies such as myself, Red Hat also updated all of their pre-February advisories in September 2005, so that everyone could look back in time and see, for example, that Red Hat thought the CVS update of 6/9/2004 had Critical impact.

Novell SUSE Ratings

In stark contrast to Red Hat, there are the Novell SUSE severity ratings.  Novell provides two different rating systems, of a sort:

  • Severity 1-10:  In SUSE Security Announcements, Novell provides a severity rating of 1-10, with 10 being the most severe.  Thomas Beige of the SUSE security team wrote a paper called Security Vulnerability Severity Classification to explain their rating system.  Frankly, it is pretty complicated and, if I read it correctly, should result in a range from -4 through 12, with some undocumented process to map those values to 1 through 10.  However, though I can't apply the 'methodology' myself, I do gather 7-10 is Critical, 4-7 is Moderate and 0-4 is Low.
  • "Minor Issues": In SUSE Security Summary Reports, there is a paragraph that says "To avoid flooding mailing lists with SUSE Security Announcements for minor issues, SUSE Security releases weekly summary reports for the low profile vulnerability fixes." So, basically, any issue addressed in summary report is considered "minor" by SUSE. 

It isn't really clear whether a "minor issue" is somehow less than a 0 or 1 in a Security Announcement.  The confusion deepens when compared with other sources.  For example, in SUSE-SR:2005:003, two of the 'minor' security vulnerabilities addressed - CVE-2004-1125 and CVE-2004-1267 - are in the CUPS (printing) component.  Red Hat also addressed these two issues, in RHSA-2005:003, but rated them as Important.  The NVD gives both issues a 10.0 CVSS rating, the highest possible.

As far as I can tell, there were no significant architectural differences between the Red Hat and SUSE implementations at the time, so ... you can draw your own conclusions.

Third-Party Rating Systems

National Vulnerability Database HML - Originally, the NVD assigned three general ratings - High, Medium and Low - that essentially translated to:

  • Low - stuff nobody really gets excited about
  • High - stuff that could enable a local or remote user to own a system
  • Medium - stuff that isn't Low or High

Personally, I always liked these definitions because they are simple and easy to understand.  Their drawback is that they don't provide much detail.

National Vulnerability Database CVSS - CVSS is up to version 2 and is fairly complicated.  You can get details here.  I've never been a big fan of CVSS for various reasons - but, to oversimplify, I think it provides very specific differentiation where it should not necessarily exist.  This makes it harder to leverage in risk assessment, rather than easier, and can be misleading to those not fully conversant with all the details.  In certain situations and under certain policies, for example, a score of 5.1 might be a higher priority for a company than a score of 7.8, which is counterintuitive in the extreme.
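To illustrate that last point, here is a minimal sketch of how a policy-weighted priority could invert the raw CVSS ordering.  The asset classes, the weights, and the simple multiplication are hypothetical policy choices of mine - nothing here is part of the CVSS specification:

```python
# Hypothetical asset-exposure weights; illustrative policy choices,
# not anything defined by CVSS.
ASSET_WEIGHT = {
    "internet-facing": 2.0,
    "internal": 1.0,
    "isolated-lab": 0.4,
}

def policy_priority(cvss_score: float, asset_class: str) -> float:
    """Scale a raw CVSS score by where the vulnerable system sits."""
    return cvss_score * ASSET_WEIGHT[asset_class]

# A 5.1 on an internet-facing server (5.1 * 2.0 = 10.2) outranks
# a 7.8 on an isolated lab machine (7.8 * 0.4 = 3.12).
urgent = policy_priority(5.1, "internet-facing")
not_urgent = policy_priority(7.8, "isolated-lab")
```

The point is not that this formula is right - it is that any company doing environment-aware prioritization will end up reordering raw scores, so the apparent precision of a 7.8 versus a 5.1 can mislead.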

Secunia - Secunia provides security ratings for their advisories as opposed to specific vulnerabilities.  Very generally, this is comparable to the maximum bulletin rating provided by Microsoft or the RHSA advisory rating provided by Red Hat.  These are not completely comparable to NVD ratings since they aren't necessarily specific to an individual vulnerability.  Secunia rating definitions can be found here and range from "Not Critical (1 of 5)" to "Extremely Critical (5 of 5)". 

ISS X-Force - ISS provides a couple of different ratings.  Advisories now seem to provide a CVSS score along with the detailed factors behind the score.  This example provides such scoring for a combination of 3 vulnerabilities.  If I click through to their database entry for one of the vulns, they use High, Medium and Low risk ratings.  I couldn't find a definition for these, and I couldn't easily find a way to look them up by CVE identifier either.

There are other good sources of detailed information aside from ratings.  For example, www.securityfocus.com does not provide ratings at all, but does provide discussion, references and potential exploit information.

Issues with "One Size Fits All" Rating Systems

One of the key problems with most severity rating systems in use today is that they assign a single rating for a given vulnerability.  This is not a new issue, but it has become more relevant in recent years, in my opinion due to two key factors:

  • Long support cycles for products, combined with
  • Advances in security architecture and protection mechanisms

These two bits of context combine to create a situation where the same bit of code might have a vulnerability, but that vulnerability exists in two very different architectural situations.

Take MS03-045, for example.  This Security Bulletin (SB), released in October 2003, addresses CVE-2003-0659, which in the worst case allows local escalation of privilege.  The Bulletin itself has a maximum severity rating of Important.  When we look this up in the NVD, it shows as a "High" severity issue, with a CVSS score of 7.2 and an impact subscore of 10.0.  An ISS entry on the vulnerability assigns it a "High Risk" rating - their highest.

Digging into the details of the SB, however, we find some very important architecture-differentiated details:

  • Windows 2000 - The Utility Manager on Windows 2000 runs with elevated privilege, and because of this, the vuln can be used by local users to escalate to that privilege level.  This is the severe case that drove Microsoft to rate the vuln on this platform as Important, which also established the maximum bulletin severity.
  • Windows XP and 2003 - Utility Manager runs in the context of the logged-on user and does not allow for elevation of privileges.  Rated Low severity.

With this level of detail, one can easily see how an IT team with Windows 2000 machines might want to react with higher priority than one that had only Windows XP and Windows Server 2003 machines.  If the team were leveraging only the NVD rating or only the ISS rating, the ratings would not provide the architecture-specific information needed to prioritize for their own environment.
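The kind of architecture-aware prioritization I have in mind could be sketched like this.  The per-platform table simply restates the MS03-045 detail above as data; the ordering of vendor labels and the lookup function are my own invention for illustration:

```python
# Per-platform vendor ratings for CVE-2003-0659, as given in MS03-045.
PLATFORM_RATING = {
    "Windows 2000": "Important",    # Utility Manager runs elevated
    "Windows XP": "Low",            # runs as the logged-on user
    "Windows Server 2003": "Low",
}

# Hypothetical ordering of Microsoft's labels, highest severity first.
RATING_ORDER = ["Critical", "Important", "Moderate", "Low"]

def team_priority(deployed_platforms: list[str]) -> str:
    """Return the highest applicable rating across a team's fleet."""
    applicable = [PLATFORM_RATING[p] for p in deployed_platforms]
    return min(applicable, key=RATING_ORDER.index)
```

A shop with any Windows 2000 machines comes out at "Important", while an XP/2003-only shop comes out at "Low" - exactly the distinction a single NVD or ISS rating erases.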

This issue applies to other vendors as well.  Red Hat refers to it in this article, saying:

We’ve seen an Apache vulnerability that leads to arbitrary code execution on older FreeBSD, that causes a denial of service on Windows, but that was unexploitable on Linux for example. But this flaw had a single CVE identifier.

In fact, over time this issue may be worse for open source components because the model allows for:

  • Code forking, where completely different products are based upon other products.  Consider Firefox and Iceweasel.
  • Tweaking by different vendors - for example, Red Hat and Ubuntu might not compile and enable the same options, even on the same version of the kernel.
  • Vendor-specific patch differences  ... think "based on kernel 2.6 ..."
  • Pursuit of different security architectures.  Is PaX pre-installed?  Is SELinux used, or perhaps AppArmor?  That could affect severity.
  • Longer support lifecycle commitments for different releases.  Red Hat still ships patches for Firefox 1.5, though Mozilla stopped support in May.

Each of these differences means that a single software vulnerability could have many permutations of impact and several different possible impact severities.

What to Use?

Ultimately, all severity rating systems are meant to be tools to help you prioritize, and even then, they can't be used without a lot of other information.  I personally think Microsoft does the best job of providing a broad set of information in their Security Bulletins.  While all of the rating systems provide some utility, my first recommendation would be that - if your vendor provides it - you should understand the vendor rating and leverage it first.  In the best cases (i.e., not Novell SUSE), the vendor will provide an extra level of detail, with different ratings for different products and individual vulnerabilities where appropriate.

Additionally, vendors can tailor their rating systems based upon customer feedback, as Microsoft has done over the years, whereas third-party rating systems must focus on being generic across products.

So, am I saying there is no need for third-party severity ratings?  Not necessarily; I am just saying the different rating systems have different utility depending on your goals.  Stay tuned for Part 2, where I will discuss some situations where third-party ratings have more utility than vendor ratings, especially when making cross-vendor comparisons.

Copyright © 2007 IDG Communications, Inc.
