



Hacked Opinions: Vulnerability disclosure – Geoff Sanders

Jun 29, 2015
IT Leadership, Technology Industry, Vulnerabilities

LaunchKey's Geoff Sanders talks about disclosure, bounty programs, and vulnerability marketing


LaunchKey’s Geoff Sanders talks about disclosure, bounty programs, and vulnerability marketing with CSO, in the first of a series of topical discussions with industry leaders and experts.

Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focuses on disclosure and how pending regulation could impact it. In addition, we asked about marketed vulnerabilities, such as Heartbleed, and bounty programs: do they make sense?

CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A, or feel free to suggest topics for future consideration.

Where do you stand: Full Disclosure, Responsible Disclosure, or somewhere in the middle?

Geoff Sanders (GS), Co-Founder and CEO of LaunchKey: I believe a responsible disclosure policy that allows for full disclosure of a vulnerability, following a brief period for patches to be deployed and affected users to be notified, strikes the right balance between fixing the problem, informing the public of the risks, and reducing further exploitation, so long as policy owners address disclosures promptly and with urgency.

If a researcher chooses to follow responsible / coordinated disclosure and the vendor goes silent — or CERT stops responding to them — is Full Disclosure proper at this point? If not, why not?

GS: Absolutely. The ‘responsible’ part of responsible disclosure applies as much to the vendor as it does to the researcher. Responsible vendors are prompt and maintain an open dialogue with researchers. If a vendor or CERT goes silent, a researcher has no choice but to assume the vendor isn’t addressing the issue, at which point it becomes the researcher’s ethical responsibility to fully disclose the vulnerability.

Bug Bounty programs are becoming more common, but sometimes the reward being offered is far less than the perceived value of the bug / exploit. What do you think can be done to make it worth the researcher’s time and effort to work with a vendor directly?

GS: I think most security researchers simply want a financial bounty that respects the amount of work they’ve put into finding the bug, and one that’s appropriate for the size of the vendor paying the bounty. Large vendors should offer rewards that at minimum reflect the cost of contracting similar professional services in the market, while researchers should respect that startups and small vendors will have proportionately less capital to reward.

Do you think vulnerability disclosures with a clear marketing campaign and PR process, such as Heartbleed, POODLE, or Shellshock, have value?

GS: I don’t think every vulnerability needs a cool name and logo, but it can definitely help with the more significant vulnerabilities that demand greater attention from the public. Because the general public isn’t a technical audience, it makes sense to market these bugs in friendly and memorable terms, for the same reason we refer to rhinopharyngitis as the common cold.

If the proposed changes pass, how do you think Wassenaar will impact the disclosure process? Will it kill full disclosure with proof-of-concept code, or move researchers away from the public entirely preventing serious issues from seeing the light of day? Or, perhaps, could it see a boom in responsible disclosure out of fear of being on the wrong side of the law?

GS: Security researchers rely on the ability to recreate vulnerabilities both to discover bugs and to build defenses against them. The proposed changes to the Wassenaar Arrangement add vague language that opens researchers up to the potential for prosecution. Such an approach will only serve to dissuade participation from the security research community and limit collaboration between researchers, which is paramount to finding and fixing critical vulnerabilities in a timely manner. At the end of the day, sharing and disseminating vulnerabilities and malware will still be a trivial endeavor, and the bad actors this approach is supposed to impair will merely exploit vulnerabilities that remain unpatched for longer periods of time, allowing greater exploitation.