Gregg Keizer
Senior Reporter

Microsoft Correctly Predicts Reliable Exploits Just 27% of the Time

Nov 03, 2009
Data and Information Security | Enterprise Applications | Microsoft

Microsoft’s monthly predictions about whether hackers will create reliable exploit code for its bugs were right only about a quarter of the time in the first half of 2009, the company acknowledged Monday.

“That’s not as good as a coin toss,” said Andrew Storms, director of security operations at nCircle Network Security. “So what’s the point?”

In October 2008, Microsoft added an “Exploitability Index” to the security bulletins it issues each month. The index rates bugs on a scale from 1 to 3, with 1 indicating that consistently successful exploit code was likely within the next 30 days, and 3 meaning that working exploit code was unlikely during that same period.

The idea was to give customers more information to decide which vulnerabilities should be patched first. Before the introduction of the index, Microsoft only offered impact ratings — “critical,” “important,” “moderate” and “low” — as an aid for users puzzled by which flaws should be fixed immediately and which could be set aside for the moment.

But in the first half of this year, Microsoft correctly predicted exploits just slightly more than one out of every four times.

“Forty-one vulnerabilities were assigned an Exploitability Index rating of 1, meaning that they were considered the most likely to be exploited within 30 days of the associated security bulletin’s release,” Microsoft stated in its twice-yearly security intelligence report, which it published Monday. “Of these, 11 were, in fact, exploited within 30 days.”

That means Microsoft got it right about 27% of the time.

Microsoft also tallied its predictions by security bulletins — in many cases a single bulletin included patches for multiple vulnerabilities — to come up with a better batting average. “Sixteen bulletins received a severity rating of Critical,” it said in its report. “Of these, 11 were assigned an Exploitability Index rating of 1. Five of these 11 bulletins addressed vulnerabilities that were publicly exploited within 30 days, for an aggregate false positive rate of 55%.”
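The arithmetic behind both figures is straightforward; a quick back-of-the-envelope check, using only the counts Microsoft reported, reproduces them:

```python
# Figures taken from Microsoft's report for January-June 2009.
rated_1_vulns = 41        # vulnerabilities given an Exploitability Index rating of 1
exploited_vulns = 11      # of those, actually exploited within 30 days

# Per-vulnerability accuracy: 11 of 41 "likely" calls panned out.
accuracy = exploited_vulns / rated_1_vulns
print(f"Per-vulnerability accuracy: {accuracy:.0%}")   # 27%

# Per-bulletin false positives: 11 Critical bulletins were rated 1,
# but only 5 saw public exploits, so 6 were false alarms.
rated_1_bulletins = 11
exploited_bulletins = 5
false_positive_rate = (rated_1_bulletins - exploited_bulletins) / rated_1_bulletins
print(f"Bulletin false positive rate: {false_positive_rate:.0%}")  # 55%
```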

The company defended its poor showing — even on a bulletin-by-bulletin level it accurately predicted exploitability only 45% of the time — by saying it was playing it safe. “The higher false positive rate for Critical security bulletins can be attributed to the conservative approach used during the assessment process to ensure the highest degree of customer protection for the most severe class of issues,” said Microsoft.

“There’s some validity to that,” agreed Storms. “They’re going to err on the side of caution, if only to prevent people saying ‘I told you so’ if an exploit appears later.”

John Pescatore, Gartner’s primary security analyst, agreed, but added, “If they want to stick with the index, they need to adjust the criteria so fewer vulnerabilities get a ‘1.’”

With vulnerability-by-vulnerability predictions correct only a fourth of the time, Storms questioned the usefulness of the exploitability index. “What’s the point of the index if they’re always going to side on the more risky side, as opposed to what’s most likely?” he asked. “In some ways, we’re back to where we were before they introduced the exploitability index.”

From Storms’ point of view, the exploitability index was meant to provide more granular information to customers who wondered what should be patched first. Presumably, a vulnerability marked critical with an index rating of “1” would take precedence over a critical vulnerability tagged as “2” or “3” on the exploitability index.

“With these numbers of false positives, we are in no better place than we were prior to the index, in respect to granularity,” he said.

Pescatore also questioned the usefulness of the exploitability index. “I doubt anyone even looks at it,” he said.

Instead, Pescatore again argued, as he did last year when Microsoft debuted the index, that the company would better serve customers by abandoning its own severity and exploitability rankings and moving to the standard CVSS [Common Vulnerability Scoring System] ratings. The CVSS system is used by, among other companies and organizations, Oracle, Cisco and US-CERT.

“Because Microsoft does its own exploitability index, enterprises can’t compare theirs with Adobe’s or Oracle’s. It’s an apples and oranges thing then,” said Pescatore. “It’s not just Windows bugs that companies have to deal with anymore.”

He doubted Microsoft would take his advice. “They don’t want to do that because then reporters and analysts can look and say, ‘Microsoft has more higher-rated vulnerabilities than Oracle or Adobe,’” he said. “There’s nothing in it for them to do that.”

Microsoft made the right call on all 46 vulnerabilities that were assigned an exploitability rating of “2” or “3,” which indicate that an exploit would be unreliable or unlikely, respectively. “None were identified to have been publicly exploited within 30 days,” Microsoft’s report noted.

Counting all of its predictions in the first half of 2009, not just the bugs rated as likely to be exploited, Microsoft got 57 of 87 right, or about 66%.
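That overall figure combines both sides of the ledger: a rating of 1 counts as correct only when an exploit actually appeared, while ratings of 2 and 3 count as correct when none did. A rough tally, again using only the report's counts:

```python
# Overall accuracy across all 87 rated vulnerabilities (report figures).
correct_rating_1 = 11        # rated 1, and actually exploited within 30 days
correct_rating_2_or_3 = 46   # rated 2 or 3, none publicly exploited in 30 days
total_rated = 41 + 46        # 41 rated 1, plus 46 rated 2 or 3

overall = (correct_rating_1 + correct_rating_2_or_3) / total_rated
print(f"Overall accuracy: {overall:.0%}")   # 66%
```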

Microsoft’s security intelligence report, which covers the January-June 2009 period, was the first to spell out the accuracy of the exploitability index. But Microsoft has touted its forecasting before. A year ago, for example, Microsoft said in a postmortem of its first-ever index that although it had accurately predicted exploits less than half the time, it considered the tool a success. “I think we did really well,” said Mike Reavey, group manager at the Microsoft Security Response Center (MSRC), at the time.

Microsoft’s security intelligence report can be downloaded from its Web site in PDF or XPS document formats.