Microsoft blasts Google for vulnerability disclosure policy

Expert says coordinated disclosure is a form of censorship

Microsoft defines coordinated disclosure from the perspective that customers are best protected when a patch is available, commented Ross Barrett, senior manager of security engineering at Rapid7, when asked for his opinion.

"This is a reasonable and defensible stance when aligned with the premise that the vendor, in this case Microsoft, is actually working towards a patch for the given issue, and has a short term timeline for delivering that patch. However, with Microsoft that is not always a reasonable or true assumption."

Yet Microsoft has failed to address issues disclosed to it many times over the years, deeming the fixes low priority or not cost effective. In such cases, coordinated vulnerability disclosure (CVD) favors the attacker, who can independently discover the flaw and begin exploiting it, Barrett explained.

"There is an equally reasonable argument that the public has a right to know about flaws in the systems they may use, so that if they so choose, they can make informed decisions. Coordinated disclosure is a form of censorship. On the surface it seems like a reasonable principle 'withhold information until everyone is ready for it to be public,' but in practice it becomes a shield behind which vendors obfuscate serious design flaws and delay security fixes that are not 'cost effective' at the cost of increased risk to their users."

Want my take on this issue?

Google's Project Zero discloses vulnerabilities algorithmically: an automatic clock starts the moment an issue is reported privately, and the vendor has 90 days to take action. A 90-day policy is a huge improvement, and arguably a middle ground, compared to full disclosure, where the issue is released to the public immediately.
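To make the mechanics concrete, here is a minimal sketch of how such a fixed disclosure clock behaves. It is purely illustrative, not Google's actual tooling; the function name disclosure_date and the example dates are my own.

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative sketch only (not Project Zero's real tooling): the deadline is
# fixed the moment a report is filed, and details go public either when the
# vendor ships a fix or when the 90-day window closes, whichever comes first.
DISCLOSURE_WINDOW = timedelta(days=90)

def disclosure_date(reported_on: date, patched_on: Optional[date] = None) -> date:
    """Return the date on which the report becomes public."""
    deadline = reported_on + DISCLOSURE_WINDOW
    if patched_on is not None and patched_on < deadline:
        return patched_on   # fix shipped early: disclose alongside the patch
    return deadline         # otherwise the clock decides

# Example: reported privately on Jan 2 and never patched -> public on Apr 2.
print(disclosure_date(date(2015, 1, 2)))
```

The point of the sketch is that the timeline is set by the report date alone; nothing the vendor does (short of shipping a fix) moves it.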

Researchers have struggled with this topic for a long time. I know of some who are afraid to disclose publicly due to peer pressure and reputation issues. As such, lists like Full Disclosure – where zero-day discoveries used to be the norm – are now full of patch notices that defeat the list's name entirely.

By releasing the details automatically, Google is keeping the playing field level and enforcing a single rule for everyone. While I personally feel that no notice should be given to the vendor (they should learn about the issue the same day the rest of us do), I also think that researchers should be free to do what they want with their work and discoveries.

If they want to publish immediately, they should. If they want to wait 90 days, that's fine too. After all, the researcher did the work. They took the time to perform a free code audit and additional QA, so they should have the final say when it comes to disclosure.

As for releasing proof-of-concept (PoC) examples, every disclosure should come with them, especially when the vulnerability is serious. How else can the good guys test the issue in-house? Yes, that means criminals get the same advantage, but it's better than the defenders in the trenches lagging behind criminals who can develop exploit code on their own.

Security is hard enough without fighting over the work of others. When I think of the roots of full disclosure, I remember how vendors hated it. Over time, the industry progressed to bounty programs, but researchers are paid only if they do not disclose the problems they discover to the public.

Where has that restriction taken us as an industry?

Right now, the Zero Day Initiative (ZDI) reports that more than 200 disclosures are without fixes, and many of them are more than a year old. How does that help the public, when a vendor can leverage ZDI's terms to keep its flaws hidden and go about business as usual?

I'll close my mini-rant with part of an essay from Bruce Schneier:

"Public scrutiny is how security improves, whether we're talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us—unless, of course, they knew about it beforehand—but most of the time the benefits far outweigh the disadvantages.

"Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security and inhibits security education that leads to improvements. Secrecy doesn't improve security, it stifles it.

"I'd rather have as much information as I can to make an informed decision about security, whether it's a buying decision about a software product or an election decision about two political parties. I'd rather have the information I need to pressure vendors to improve security."
