Bromium's Rahul Kashyap talks about disclosure, bounty programs, and vulnerability marketing with CSO, in the first of a series of topical discussions with industry leaders and experts.
Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focuses on disclosure and how pending regulation could impact it. In addition, we asked about marketed vulnerabilities, such as Heartbleed, and about bounty programs: do they make sense?
CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A, or feel free to suggest topics for future consideration.
Where do you stand: Full Disclosure, Responsible Disclosure, or somewhere in the middle?
Rahul Kashyap, Chief Security Architect, Bromium (RK): Somewhere in the middle. Responsible Disclosure should be the first choice – always, without any doubt. We need to look at the issue objectively, from both sides -- ultimately, the aim of vulnerability disclosure is to produce better, more secure code.
In rare cases, the bug discovered might genuinely require major changes and testing efforts from the vendor. In this case, the vendor should respond accordingly and request more time, and the vulnerability discoverer should certainly consider providing more time.
On the other side, the issue should also be understood from the researcher's perspective. Timing is important: there's always the chance someone else discovered the same issue around the same time (it happens often), or that active exploitation is ongoing in the wild, so the issue might surface publicly at any time.
If a researcher uncovers a critical bug, it's likely to be top of mind, and that researcher is likely to insist on a quicker resolution.
Clear communication and mutual respect for each other's point of view are important throughout the process.
If a researcher chooses to follow responsible / coordinated disclosure and the vendor goes silent -- or CERT stops responding to them -- is Full Disclosure proper at this point? If not, why not?
(RK): Yes, with notification. If the researcher follows responsible disclosure and the vendor goes silent to avoid fixing the issue, then Full Disclosure can be considered. Vulnerability disclosures should not be taken for granted; it takes a lot of skill and time for researchers to uncover security vulnerabilities.
If a researcher has exhausted all avenues, full disclosure is the way out. The security community will decide the severity of the issue and the vendor will (hopefully) turn around quickly under pressure -- but customers or users can be exposed in that time. It is ultimately in the interest of end users that vendors work well with security researchers.
If a well-established entity like CERT treats the issue lightly, there is a chance the issue is not severe or that its exploitability has not been clearly articulated. This should be investigated. There have been cases in the past when the fallout of 'negotiations' over the severity of an issue resulted in full disclosure.
Bug Bounty programs are becoming more common, but sometimes the reward being offered is far less than the perceived value of the bug / exploit. What do you think can be done to make it worth the researcher's time and effort to work with a vendor directly?
(RK): In the past, vulnerability researchers did not have direct access to vendors easily, nor was any real incentive provided for finding vulnerabilities. Nowadays, bug bounty programs provide the bridge and incentive. Yes, in many cases the value of the vulnerability might not be ‘fair’ compared to what might be available in the underground market. Those numbers might never converge given the dynamics at play.
Going through well-known bounty programs raises fewer ethical concerns than selling 0days in the underground market, and it might be a better long-term plan. There are new companies, like HackerOne, through which bug finders can be paid a substantial amount. Researchers can try to contact vendors directly, but it might not be easy to get an incentive, as unfortunately the value of the exploit or bug might not be understood by most vendors.
Awareness is key, and the security community should spend more time informing talented vulnerability researchers about viable options to get rewarded for their research and hard work.
Do you think vulnerability disclosures with a clear marketing campaign and PR process, such as Heartbleed, POODLE, or Shellshock, have value?
(RK): The security industry has a lot going on, all the time, and marketing campaigns with fancy names are a marketing means to get above the noise. After all, "Shellshock" sounds much cooler than YACI (Yet Another CVE Identifier).
Ultimately, what matters is that the vulnerability gets adequate attention when needed. There have been cases where vulnerabilities with cool names have been over-hyped; this isn't a good trend as security needs credibility from the industry to fix critical problems, and well-budgeted, over-hyped campaigns don’t help long-term. It’s okay to spend marketing dollars to increase awareness -- as long as the issue is credible.
If the proposed changes pass, how do you think Wassenaar will impact the disclosure process? Will it kill full disclosure with proof-of-concept code, or move researchers away from the public entirely preventing serious issues from seeing the light of day? Or, perhaps, could it see a boom in responsible disclosure out of fear of being on the wrong side of the law?
(RK): Laws like Wassenaar end up with researchers talking to lawyers, which can't be a good thing! The issue with such laws is that they're too broad and create grey areas that researchers are left to figure out on their own.
I doubt Full Disclosure is at risk due to this. Full Disclosure is a spirit, it’s an attitude -- you cannot kill that easily with laws and layers of documentation.