After Google disclosed a second Microsoft vulnerability, complete with proof-of-concept code, the software giant accused the search company of playing 'gotcha' in a blog post heavy on criticism of Google's 90-day reporting policy. But really, this is just another round in the long-running debate over disclosure policy.

The latest shot was fired on Sunday, January 11, when Google's Project Zero, for the second time in less than a month, disclosed an unpatched privilege escalation flaw in Windows 8.1. The disclosure came 90 days after the issue was initially reported on October 13, 2014, in keeping with Google's stated policy. Microsoft had asked Google to hold off on publishing details because it planned to fix the problem in February's patch release.

"Microsoft were informed that the 90 day deadline is fixed for all vendors and bug classes and so cannot be extended. Further they were informed that the 90 day deadline for this issue expires on the 11th Jan 2015," Google's notes explain.

In response, Microsoft said a patch for this latest bug would be delivered on January 13. However, the entire process has left Microsoft feeling a bit salty. In a blog post, Chris Betz, senior director of Microsoft's Security Response Center, said the company tried to work with Google, but the search giant wouldn't budge.

"Specifically, we asked Google to work with us to protect customers by withholding details until Tuesday, January 13, when we will be releasing a fix," he wrote. "Although following through keeps to Google's announced timeline for disclosure, the decision feels less like principles and more like a 'gotcha,' with customers the ones who may suffer as a result. What's right for Google is not always right for customers.
We urge Google to make protection of customers our collective primary goal."

The post goes on to encourage researchers to use Coordinated Vulnerability Disclosure (CVD), which Microsoft says works better than full disclosure. From the blog post:

"CVD philosophy and action is playing out today as one company - Google - has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so...

"Microsoft has long believed coordinated disclosure is the right approach and minimizes risk to customers. We believe those who fully disclose a vulnerability before a fix is broadly available are doing a disservice to millions of people and the systems they depend upon...

"Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a 'fix' has been provided to customers, and even after a 'fix' is made publicly available only a very small amount are ever exploited. Conversely, the track record of vulnerabilities publicly disclosed before fixes are available for affected products is far worse, with cybercriminals more frequently orchestrating attacks against those who have not or cannot protect themselves."

Microsoft's thoughts on disclosure have been clear for years. The company has championed the responsible (or coordinated) disclosure cause since the early 2000s, when its products were the favorite target of security researchers.
But others don't agree – and that's why the issue is so hotly debated.

Microsoft is defining coordinated disclosure from the perspective that customers are best protected when a patch is available, commented Ross Barrett, senior manager of security engineering at Rapid7, when asked for his opinion.

"This is a reasonable and defensible stance when aligned with the premise that the vendor, in this case Microsoft, is actually working towards a patch for the given issue, and has a short term timeline for delivering that patch. However, with Microsoft that is not always a reasonable or true assumption."

Microsoft has failed to address issues disclosed to it many times over the years, because it deemed fixing them to be low priority or not cost effective. In such cases, CVD favors the attacker who can independently discover the flaw and begin to exploit it, Barrett explained.

"There is an equally reasonable argument that the public has a right to know about flaws in the systems they may use, so that if they so choose, they can make informed decisions. Coordinated disclosure is a form of censorship. On the surface it seems like a reasonable principle – 'withhold information until everyone is ready for it to be public' – but in practice it becomes a shield behind which vendors obfuscate serious design flaws and delay security fixes that are not 'cost effective,' at the cost of increased risk to their users."

Want my take on this issue?

Google's Project Zero discloses vulnerabilities algorithmically: an automatic clock starts the moment an issue is reported privately, and the vendor has 90 days to take action before details go public. A 90-day policy is a huge improvement, arguably a middle ground, compared to full disclosure, where the issue is released to the public immediately.

Researchers have struggled with this topic for a long time. I know of some who are afraid to disclose publicly due to peer pressure and reputation issues.
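The mechanics of that clock are worth pausing on, because there is no judgment call involved. A minimal sketch, using the report and disclosure dates from this case:

```python
from datetime import date, timedelta

# Dates from the Project Zero report on this Windows 8.1 flaw
reported = date(2014, 10, 13)             # day of the private report to Microsoft
deadline = reported + timedelta(days=90)  # Project Zero's fixed 90-day window

print(deadline)  # 2015-01-11: the Sunday the details went public
```

The deadline landing on a Sunday, two days before Patch Tuesday, is simply where the arithmetic fell; the policy makes no allowance for vendor release schedules.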
As such, lists like Full Disclosure – where zero-day discoveries used to be the norm – are now full of patch notices that defeat the list's name entirely.

By releasing details automatically, Google is keeping the field even and enforcing a single rule for everyone. While I personally feel that no notice should be given to the vendor (they should learn about the issue the same day the rest of us do), I also think that researchers should be free to do what they want with their work and discoveries.

If they want to publish immediately, they should. If they want to wait 90 days, that's fine too. After all, the researcher did the work. They took the time to perform a free code audit and additional QA, so they should have the final say when it comes to disclosure.

As for proof-of-concept examples, every disclosure should include them, especially for serious vulnerabilities. How else can the good guys test the issue in-house? Yes, that means criminals get the same advantage, but that's better than the defenders in the trenches lagging behind criminals who can develop exploit code on their own.

Security is hard enough without fighting over the work of others. When I think of the roots of full disclosure, I remember how vendors hated it. Over time, the industry progressed to bounty programs, but researchers are only paid if they do not disclose the discovered problems to the public.

Where has that restriction taken us as an industry? Right now, ZDI reports more than 200 disclosures without fixes, many of them more than a year old. How does that help the public, when a vendor can leverage ZDI's terms to keep its flaws hidden and go about business as usual?

I'll close my mini-rant with part of an essay from Bruce Schneier:

"Public scrutiny is how security improves, whether we're talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs.
Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us – unless, of course, they knew about it beforehand – but most of the time the benefits far outweigh the disadvantages.

"Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security and inhibits security education that leads to improvements. Secrecy doesn't improve security, it stifles it.

"I'd rather have as much information as I can to make an informed decision about security, whether it's a buying decision about a software product or an election decision about two political parties. I'd rather have the information I need to pressure vendors to improve security."