The Vulnerability Disclosure Game: Are We More Secure?

Marcus Ranum asks: Can we speak frankly about "vulnerability disclosure" now? More than a decade into the process, can anyone say security has improved?

Can we speak frankly about "vulnerability disclosure" now? Can we, please? It's long past time. More than a decade into the process, can anyone say security has improved?

Back in the mid-1990s, when the vulnerability disclosure economy was starting to take shape, I was one of a small handful of security practitioners who were trying hard to apply the brakes against what we saw as a dangerous trend. Unfortunately, at that time, the security industry was not yet mature enough for customers to understand that they were being sold a dangerous bill of goods. For more than a decade, we've lived under mob rule, where for some security consultants and companies, "marketing" has been replaced by "splashily announcing holes in commercial products to get 20 seconds of fame on CNN." What's amazing about the disclosure game is not that it's been tolerated for so long, but that it worked at all. (See Schneier: Full Disclosure of Security Vulnerabilities a 'Damned Good Idea'.)

Do you remember the original premise of the disclosure game? By publicly announcing vulnerabilities in products, we will force the vendors to be more responsive in fixing them, and security will be better. Remember that one? Tell me, dear reader, after 10 years of flash alerts, rushed patch cycles and zero-day attacks, do you think security has gotten better?

I think there are a few places where we can see signs of improvement. I know that Microsoft, Oracle and others have spent huge amounts of money improving the security of their software. Never mind the fact that 99.99 percent of the computer users in the world would rather they had spent that money making their software cheaper or faster; I suppose it's a great thing to see that software security is being taken seriously. Security has gotten more expensive. But do you think security has gotten better?

From where I sit, it looks like the vulnerability rate is pretty much a constant. If the proponents of disclosure were right, their stated objective—browbeating the vendors into making their products better—would have been accomplished years ago. But we're speaking frankly here, aren't we? So, as one adult to another, let me tell you why it won't work: because it was never about making software better. In fact, it was never about making your security better. That's right. Now that we can look back at 10 years of what disclosure has brought us, it's brought us—well, nothing much. Nothing much, that is, except a grey-market economy in exploits, where independent "vulnerability researchers" attempt to cash in by finding new attacks that they can sell to security companies or spyware manufacturers—whichever bids higher. Nothing much unless you count the massive amounts of "free" marketing exposure for companies that trade in exploits. The sad part about it all is that they've managed to convince you they're doing you a favor. It looks like a pretty expensive "favor" to me!

Back when the Internet security bubble started, I offered a litmus test for practitioners. Simply put: You're either part of the solution, or you're part of the problem. You're part of the solution: You're writing the next firewall or secure application, or working to improve some site's security. Or you're part of the problem: You're looking for the next hole in Oracle that'll get you two minutes on CNN, or you're getting ready to announce a clever new way rootkits can evade detection by security tools, or you're devising the next denial-of-service attack, etc. The state of ethics in the computer security industry is pathetic; it's on par with where medicine was in the 1820s—except that some of the snake-oil salesmen in the 1820s actually believed in their products.

At this point in the history of security, the disclosure economy has been in place long enough that some of the new entrants to the field think that's the way it's always been—I've run into second-generation "true believers" who really think vulnerability disclosure is all about making software better. Guys, I think it's time to hang up that ideology; it's obviously not true. If it were going to help, it would have shown some signs of helping by now. So let's be frank, shall we? Those of you who are playing the disclosure game are just playing for your two minutes of fame: You're not making software better. Sure, some of you work for consultancies and startups, and it saves you a ton of money by sparing you a marketing budget, but isn't shouting "fire!" in a crowded theater so, um, '90s? I know that the typical security customer is (to you) an unsophisticated rube, but that does not justify placing them at increased risk just so you can publish a new signature for your pen-testing tool or get your funny-haired "chief hacking officer" on CNN one more time. I have news for you: Most of the computer users on the planet wish you'd find some other use for your talents—something that actually does help.

Computer security needs to grow the hell up, and it needs to do it pretty quickly. Virtually every aspect of life is becoming increasingly computerized and exposed to online attack. The problem grows more significant the longer we wait to deal with it, but the early history of computer security has been a massive disappointment to all of us: huge amounts of money spent, with relatively little improvement to show for it. One of the reasons is that a huge amount of that effort has been wasted barking up the wrong tree. Unfortunately, if you look at the last 10 years of security, it's a litany of "one step forward, one step back," thanks in part to the vulnerability pimps, parasites and snake-oil salesmen who flocked into the industry when they smelled money and a chance to get some attention. At this point, they're so deeply entrenched and vested that they're here to stay—unless the industry as a whole turns away from rewarding bad behavior. If you're a customer or end user, you can see how well disclosure worked to improve your security over the last decade. Let me be frank: It's up to you.

Marcus Ranum, CSO of Tenable Network Security, is internationally recognized as one of computer security's visionary thinkers. Since his early involvement with security in the late 1980s he has been involved in every stage of the security industry, from coding the first commercial firewall (DEC SEAL) to acting as founder and CEO of one of the early IDS innovators (NFR). He lives in the middle of nowhere in Pennsylvania.
