NetIQ's Garve Hays talks about disclosure, bounty programs, and vulnerability marketing with CSO, in the first of a series of topical discussions with industry leaders and experts.
Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focuses on disclosure and how pending regulation could impact it. In addition, we asked about marketed vulnerabilities, such as Heartbleed, and about bounty programs: do they make sense?
CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A, or feel free to suggest topics for future consideration.
Where do you stand: Full Disclosure, Responsible Disclosure, or somewhere in the middle?
Garve Hays (GH), Solution Architect, NetIQ: I stand between full and responsible disclosure. I'm of the opinion that sometimes organizations need a "helping hand" to prioritize security over features.
If a researcher chooses to follow responsible / coordinated disclosure and the vendor goes silent -- or CERT stops responding to them -- is Full Disclosure proper at this point? If not, why not?
GH: With no communication at all, or outright stonewalling, full disclosure is a proper course of action. Just as labor reform followed the Industrial Revolution, we should expect software companies to advance their responsibilities in the App Economy.
It is unacceptable to ignore security vulnerabilities, particularly in the face of evidence showing them to be present. The counterpoint is that all parties should reach a consensus that a potential exploit is practical and not merely theoretical. But this will not happen in the absence of discourse.
Bug Bounty programs are becoming more common, but sometimes the reward being offered is far less than the perceived value of the bug / exploit. What do you think can be done to make it worth the researcher's time and effort to work with a vendor directly?
GH: I think the expansion of programs like the Linux Foundation's Core Infrastructure Initiative to include bug bounty programs would supplement the ongoing development of open source projects in the critical path of core computing. Regarding companies with limited resources, I am encouraged by "crowd-sourced" initiatives such as Bugcrowd.
[Note: Alternatively, organizations can turn to HackerOne for bounty programs as well.]
Do you think vulnerability disclosures with a clear marketing campaign and PR process, such as Heartbleed, POODLE, or Shellshock, have value?
GH: Definitely. Public awareness is a key component in the ongoing effort to protect our data and advance the state of the art. Not everyone follows BugTraq, so the more accessible the information, the better the chance of public scrutiny.
If the proposed changes pass, how do you think Wassenaar will impact the disclosure process? Will it kill full disclosure with proof-of-concept code, or move researchers away from the public entirely preventing serious issues from seeing the light of day? Or, perhaps, could it see a boom in responsible disclosure out of fear of being on the wrong side of the law?
GH: I'm not sure one can address the Wassenaar Arrangement in isolation without the context of the Computer Fraud and Abuse Act (CFAA), but I'll give it a shot.
Although the Wassenaar Arrangement (WA) applies to the export and sale of "controlled munitions," which in this case now includes so-called cyber-weapons, it will impact the disclosure process by adding overhead. Whereas the procedures were formerly mostly cooperative, there is now an imposed layer of bureaucracy.
Regulations are nothing new in the computer security industry, but the cost of an error here may now be applied to individual security researchers as well as to a company. A million-dollar fine may be tolerable for a large company, but such a fine coupled with a 20-year jail sentence is sobering for an individual.
The community of security researchers is a global one and as such necessitates international transfer of ideas and working code. This can no longer be an informal process. Researchers must plan ahead and release their code prior to traveling to conferences, for example.
I think most participants will adapt and continue to disclose their findings; it just may take longer than before, which is unfortunate given the speed at which the Internet moves. Which is to say, malicious actors and criminals will operate under no such restraint.