Has responsible disclosure won the debate?

The debate in the security community about disclosure shows no signs of abating. This article explores both sides of the argument and puts forward suggestions for organizations looking to improve their transparency and responsiveness towards externally discovered vulnerabilities.


The disclosure dilemma is a common one faced by white-hat security researchers when it comes to reporting vulnerabilities discovered in commercial software products, including web and mobile applications, belonging to other organizations.

In conversations about vulnerability management and zero-day vulnerabilities, the method of discovery is often secondary to the timeliness and subsequent remediation of the discovery.

This would imply that organizations would prefer to be quickly informed of vulnerabilities discovered in the “wild” rather than remain blind until a threat actor successfully exploits them, resulting in a security breach.

However, this desire by organizations to know about their vulnerabilities is not always matched by a willingness to act on the information. This is one of the issues at the heart of the disclosure debate.

Responsible disclosure versus full disclosure

Browsing through security forums, it is clear that the debate about responsible versus full disclosure has the security community split right down the middle.

Responsible disclosure (aka “ethical” disclosure) is the process where, upon discovering a vulnerability in commercial IT products or online services, the researcher alerts the affected company or vendor organization.

The expectation of the researcher is that the recipient will investigate and validate the reported findings, develop security updates and release patches in a timely manner – usually before an agreed deadline.

Full disclosure on the other hand happens when identified vulnerabilities are immediately made public upon discovery, e.g., through mailing lists or social media. While the intent of the researcher is usually to spur action, this approach can place the affected organization at a disadvantage in the race against time to fix publicized flaws.

In the latter approach, the affected organization is under greater pressure to release a fix which may address the immediate problem. However, the resulting pressure may not give them sufficient time to fully consider dependencies in code or the full impact of the reported flaw on other IT infrastructure.

Meltdown and Spectre: what was the right call?

In a popular recent example of responsible disclosure, security researchers, including specialists from Google, discovered vulnerabilities in central processing units (CPUs) manufactured by several leading vendors, leaving those chips open to a highly technical class of exploit.

Simply put, the vulnerabilities, nicknamed Meltdown and Spectre, allowed unauthorized disclosure of information from an operating system’s normally protected kernel memory. The findings highlighted weaknesses in “speculative execution,” a technique modern CPUs use to improve performance.

Arguably, by following a responsible disclosure process, the researchers gave the affected manufacturers a reasonable head start to develop and release patches before the findings were made public.
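Whether those patches actually reached a given machine is something administrators can check for themselves: Linux kernels released after the disclosure (roughly 4.15 onward) self-report their mitigation status through a sysfs interface. A minimal sketch, assuming a Linux host; on other systems or older kernels it falls through to the fallback message:

```shell
#!/bin/sh
# List the kernel's self-reported status for each known CPU vulnerability.
# Each file under this directory (e.g. meltdown, spectre_v2) contains a
# one-line status such as "Mitigation: PTI" or "Vulnerable".
VULN_DIR=/sys/devices/system/cpu/vulnerabilities

if [ -d "$VULN_DIR" ]; then
    for f in "$VULN_DIR"/*; do
        printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
    done
else
    echo "No vulnerability reporting interface found (kernel too old or not Linux)"
fi
```

A status line reading "Vulnerable" indicates the kernel knows about the flaw but no mitigation is active, which is exactly the gap a disclosure head start is meant to close.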

Although no active exploitation was known at the time of disclosure, the counterargument was that the sheer scale of the Meltdown and Spectre problem justified earlier disclosure, and that by withholding the information, the coalition of researchers and manufacturers increased the risk to the public.

The public consternation caused by the coalition withholding the information for close to six months must be weighed against the scale of vulnerabilities that affected thousands of organizations and technologies across the globe.

To disclose or not to disclose responsibly

Full disclosure proponents like industry thought leader Bruce Schneier argue that this approach forces a more urgent response from the affected company, which might otherwise do nothing about the problem.

While this is often true, and may result in action, the resulting security fix, as was the case for Meltdown and Spectre, might be of questionable quality and the rollout haphazard enough to place end users at even greater risk.

The case for full disclosure is supported by examples such as this one discussed by security researcher Troy Hunt, where some organizations make it difficult – through questionable tactics – for the security community to responsibly disclose identified vulnerabilities.

The fear of legal action or frustration from inactivity also leads some in the security community to resort to full disclosure to get the attention of nonchalant organizations.

Setting aside ethical disclosure is also easier when a thriving black market beckons. For this reason, some “researchers” prefer to place their discoveries in the public domain in a bid to attract the highest bidder.

Taking a stand for responsible disclosure

If the spirit behind responsible disclosure is to protect users, then why do so many organizations put up brick walls that frustrate researchers into using alternative disclosure methods?

While many large corporates have long-standing vulnerability disclosure (or “bug bounty”) programs, far too many others either have not yet realized the need for one or have no clear process to guide researchers seeking to report identified vulnerabilities.

A cursory search on the websites of a number of large UK organizations revealed that many still do not provide readily accessible information about vulnerability disclosure. Others had the information buried in several layers of obscurity. In one recent example involving a popular high street retail bank, a researcher resorted to speculative searching on LinkedIn to find an appropriate contact to report to.

By making it easier for researchers to report vulnerabilities, and by being responsive and collaborative rather than defensive, IT vendors and organizations could deliver more secure software to their clients, build rapport with the security community and reduce the number of security breaches affecting end users each year.

For organizations looking to review their vulnerability management programs and implement responsible disclosure, several frameworks exist to help, including ISO/IEC 29147 (vulnerability disclosure) and ISO/IEC 30111 (vulnerability handling processes).

General good practice tips for implementing responsible disclosure

A few final considerations for implementing responsible disclosure are listed below:

  • Develop and publish a public-facing vulnerability disclosure and handling policy.
  • Develop and publish guidelines for a bug bounty program, including legal parameters, privacy policies, reward schemes, quality control and submission.
  • Establish processes that make disclosure easier, including publishing up-to-date contact details and training non-technical front-line personnel to appropriately route external communications to resolver groups.
  • Consider using a managed security service if in-house resources and capabilities are thin on the ground.
  • Integrate responsible disclosure with security incident response processes in order to facilitate responsiveness and tracking.
  • Implement a feedback mechanism with researchers to demonstrate progress with remediation and agree timelines for coordinated public disclosure.
  • Lastly, treat security researchers with respect and be transparent.
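One lightweight way to act on the first and third points above is to publish a security.txt file, a convention (standardized as RFC 9116) in which an organization places its disclosure contact details at a well-known URL on its web server. A minimal sketch with placeholder values; the addresses and URLs below are hypothetical:

```text
# Hypothetical example of https://example.com/.well-known/security.txt
# (RFC 9116 format). Contact and Expires are the required fields.
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security/disclosure-policy
Preferred-Languages: en
```

Because the file lives at a predictable location, researchers no longer need to hunt through LinkedIn or support queues to find the right person to notify.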

Security researchers who actively seek to report vulnerabilities will usually find a way to do so regardless of the obstacles placed in their path. Perhaps if more organizations took a clear position on responsible disclosure, we could all benefit in the longer term.

This article is published as part of the IDG Contributor Network.