Last February at Purdue University, a student taking "cs390s—Secure Computing" told his professor, Dr. Pascal Meunier, that a Web application he used for his physics class appeared to contain a serious vulnerability. Such a discovery didn't surprise Meunier. "It's a secure computing class; naturally students want to discover vulnerabilities."
They probably want to impress their prof, too, who's a fixture in the vulnerability discovery and disclosure world. Dr. Meunier has built software that interfaces with vulnerability databases and created ReAssure, a kind of vulnerability playground, a safe computing space to test exploits and perform what he calls "logically destructive experiments." He sits on the board of editors for Common Vulnerabilities and Exposures (CVE), the definitive dictionary of publicly known software vulnerabilities. And he has managed the Vulnerabilities Database and Incident Response Database projects at Purdue's Center for Education and Research in Information Assurance and Security, or Cerias, an acronym pronounced like the adjective that means "no joke."
When the undergraduate approached Meunier, the professor sensed an educational opportunity and didn't hesitate to get involved. "We wanted to be good citizens and help prevent the exploit from being used," he says. In the context of vulnerable software, it would be the last time Meunier decided to be a good citizen.
Meunier notified the authors of the physics department application that one of his students—he didn't say which one—had found a suspected flaw, "and their response was beautiful," says Meunier. They found, verified and fixed the bug right away, no questions asked.
But two months later, in April, the same physics department website was hacked. A detective approached Meunier, whose name was mentioned by the staff of the vulnerable website during questioning. The detective asked Meunier for the name of the student who had discovered the February vulnerability. The self-described "stubborn idealist" Meunier refused to name the student. He didn't believe it was in that student's character to hack the site and, furthermore, he didn't believe the vulnerability the student had discovered, which had been fixed, was even connected to the April hack.
The detective pushed him. Meunier recalls in his blog: "I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student." Meunier's stomach knotted when some of his superiors sided with the detective and asked him to turn over the student. Meunier asked himself: "Was this worth losing my job? Was this worth the hassle of responding to court orders, subpoenas, and possibly having my computers (work and personal) seized?" Later, Meunier recast the downward spiral of emotions: "I was miffed, uneasy, disillusioned."
This is not good news for vulnerability research, the game of discovering and disclosing software flaws. True, discovery and disclosure have always been contentious topics in the information security ranks. For many years, no calculus existed for when and how to ethically disclose software vulnerabilities. Opinions varied on who should disclose them, too. Disclosure was a philosophical problem with no one answer but rather schools of thought. Public-shaming adherents advised security researchers, amateurs and professionals alike, to go public with software flaws early and often and shame vendors into fixing their flawed code. Back-channel disciples believed in a strong but limited expert community of researchers working with vendors behind the scenes. Many others' disclosure tenets fell somewhere in between.
Still, in recent years, with shrink-wrapped software, the community has managed to develop a workable disclosure process. Standard operating procedures for discovering bugs have been accepted, guidelines for disclosing them to the vendor and the public have fallen into place, and they seem to work. Economists have even demonstrated a correlation between what they call "responsible disclosure" and improved software security.
But then, right when security researchers were getting good at the disclosure game, the game changed. The most critical code moved to the Internet, where it was highly customized and constantly interacting with other highly customized code. And all this Web code changed often, too, sometimes daily. Vulnerabilities multiplied quickly. Exploits followed.
But researchers had no counterpart methodology for disclosing Web vulnerabilities, nothing that mirrored the system developed for off-the-shelf software. It's not even clear what constitutes a vulnerability on the Web. Finally, and most seriously, legal experts can't yet say whether it's even legal to discover and disclose vulnerabilities in Web applications like the one Meunier's student found.
To Meunier's relief, the student volunteered himself to the detective and was quickly cleared. But the effects of the episode are lasting. If it had come to it, Meunier says, he would have named the student to preserve his job, and he hated being put in that position. "Even if there turn out to be zero legal consequences" for disclosing Web vulnerabilities, Meunier says, "the inconvenience, the threat of being harassed is already a disincentive. So essentially now my research is restricted."
He ceased using disclosure as a teaching opportunity as well. Meunier wrote a five-point don't-ask-don't-tell plan he intended to give to cs390s students at the beginning of each semester. If they found a Web vulnerability, no matter how serious or threatening, Meunier wrote, he didn't want to hear about it. Furthermore, he said students should "delete any evidence you knew about this problem...go on with your life," although he later amended this advice to say students should report vulnerabilities to CERT/CC.
A gray pall, a palpable chilling effect, has settled over the security research community. Many, like Meunier, have decided that the discovery and disclosure game is not worth the risk. The net effect is fewer people with good intentions willing to cast a necessary critical eye on software vulnerabilities. That leaves the malicious ones, unconcerned by the legal or social implications of what they do, as the dominant demographic still looking for Web vulnerabilities.
The Rise of Responsible Disclosure
In the same way that light baffles physicists because it behaves simultaneously like a wave and a particle, software baffles economists because it behaves simultaneously like a manufactured good and a creative expression. It's both product and speech. It carries the properties of a car and a novel at the same time. With cars, manufacturers determine quality largely before they're released and the quality can be proven, quantified. Either it has air bags or it doesn't. With novels (the words, not the paper stock and binding), quality depends on what consumers get versus what they want. It is subjective and determined after the book has been released. Moby-Dick is a high-quality creative venture to some and poor quality to others. At any rate, this creates a paradox. If software is both scientifically engineered and creatively conjured, its quality is determined both before and after it's released and is both provable and unprovable.
In fact, says economist Ashish Arora at Carnegie Mellon University, it is precisely this paradox that leads to a world full of vulnerable software. "I'm an economist so I ask myself, Why don't vendors make higher quality software?" After all, in a free market, all other things being equal, a better-engineered product should win over a lesser one with rational consumers. But with software, lesser-quality products, requiring massive amounts of repair post-release, dominate. "The truth is, as a manufactured good, it's extraordinarily expensive [and] time-consuming [to make it high quality]." At the same time, as a creative expression, making "quality" software is as indeterminate as the next best-seller. "People use software in so many ways, it's very difficult to anticipate what they want."
"It's terrible to say," Arora concedes, "but in some ways, from an economic perspective, it's more efficient to let the market tell you the flaws once the software is out in the public." The same consumers who complain about flawed software, Arora argues, would neither wait to buy the better software nor pay the price premium for it if more-flawed, less-expensive software were available sooner or at the same time. True, code can be engineered to be more secure. But as long as publishing vulnerable software remains legal, vulnerable software will rule because it's a significantly more efficient market than the alternative, high-security, low-flaw market.
The price consumers pay for supporting cheaper, buggy software is that they become an ad hoc quality control department. They suffer the consequences when software fails. But vendors pay a price, too. By letting the market sort out the bugs, vendors have ceded control over who looks for flaws in their software and how flaws are disclosed to the public. Vendors can't control how, when or why a bug is disclosed by a public full of people with manifold motivations and ethics. Some want notoriety. Some use disclosure for corporate marketing. Some do it for a fee. Some have collegial intentions, hoping to improve software quality through community efforts. Some want to shame the vendor into patching through bad publicity. And still others exploit the vulnerabilities to make money illicitly or cause damage.
"Disclosure is one of the main ethical debates in computer security," says researcher Steve Christey. "There are so many perspectives, so many competing interests, that it can be exhausting to try and get some movement forward."
What this system created was a kind of free-for-all in the disclosure bazaar. Discovery and disclosure took place without any controls. Hackers traded information on flaws without informing the vendors. Security vendors built up entire teams of researchers whose job was to dig up flaws and disclose them via press release. Some told the vendors before going public. Others did not. Freelance consultants looked for major flaws to make a name for themselves and drum up business. Sometimes these flaws were so esoteric that they posed minimal real-world risk, but the researcher might not mention that. Sometimes the flaws were indeed serious, but the vendor would try to downplay them. Still other researchers and amateur hackers tried to do the right thing and quietly inform vendors when they found holes in code. Sometimes the vendors chose to ignore them and hope security by obscurity would protect them. Sometimes, Arora alleges, vendors paid mercenaries and politely asked them to keep it quiet while they worked on a fix.
Vulnerability disclosure came to be thought of as a messy, ugly, necessary evil. The madness crested, famously, at the Black Hat hacker conference in Las Vegas in 2005, when a researcher named Michael Lynn prepared to disclose to a room full of hackers and security researchers serious flaws in Cisco's IOS software, the code that controls many of the routers on the Internet. His employer, ISS (now owned by IBM), warned him not to disclose the vulnerabilities. So he quit his job. Cisco in turn threatened legal action and ordered workers to tear out pages from the conference program and destroy conference CDs that contained Lynn's presentation. Hackers accused Cisco of spin and censorship. Vendors accused hackers of unethical and dangerous speech. In the end, Lynn gave his presentation. Cisco sued. Lynn settled and agreed not to talk about it anymore.
The confounding part of all the grandstanding, though, was how unnecessary it was. In fact, as early as 2000, a hacker known as Rain Forest Puppy had written a draft proposal for how responsible disclosure could work. In 2002, researchers Chris Wysopal and Christey picked up on this work and created a far more detailed proposal. Broadly, it calls for a week for the researcher who finds a vulnerability to establish contact with the vendor's predetermined vulnerability liaison. Then it gives the vendor, as a general guideline, 30 days to develop a fix and report it to the world through proper channels. It's a head-start program: full disclosure, delayed. It posits that a vulnerability will inevitably become public, and that the moment it does, the risk of exploit increases, so the vendor should get the chance to create a fix before that happens. Wysopal and Christey submitted the draft to the IETF (Internet Engineering Task Force), where it was well received but not adopted because it focused on social standards rather than technical ones.
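To make that timeline concrete, here is a minimal sketch, in Python, of the schedule the Wysopal-Christey draft describes, assuming the roughly one-week contact window and 30-day fix window cited above as general guidelines; the function and field names are illustrative only, not part of the proposal itself.

    # A rough model of the coordinated-disclosure timeline described above.
    # The 7-day contact window and 30-day fix window are the guideline
    # figures from the draft; all names here are illustrative, not standard.
    from datetime import date, timedelta

    def disclosure_schedule(report_date, contact_days=7, fix_days=30):
        """Return the key milestones for one reported vulnerability."""
        contact_deadline = report_date + timedelta(days=contact_days)   # vendor liaison reached
        fix_deadline = contact_deadline + timedelta(days=fix_days)      # patch due via proper channels
        return {
            "vendor_contact_by": contact_deadline,
            "fix_and_advisory_by": fix_deadline,
            "public_disclosure": fix_deadline,  # full disclosure, delayed until a fix exists
        }

    # Example: a flaw reported June 1 would be publicly disclosed around July 8.
    print(disclosure_schedule(date(2002, 6, 1)))

The draft treats these numbers as negotiable guidelines rather than hard deadlines; the point is simply that the clock starts when the researcher reports the flaw, not when the vendor feels ready.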
Still, its effects were lasting, and by 2004, many of its definitions and tenets had been folded into the accepted disclosure practices for shrink-wrapped software. By the time Lynn finally took the stage and disclosed Cisco's vulnerabilities, US-CERT, Mitre's CVE dictionary (Christey is its editor), and Department of Homeland Security guidelines all used large swaths of Wysopal and Christey's work.