Application Security: Is the Backdoor Threat the Next Big Threat to Applications?

Dec 18, 2007 | 7 mins
Application Security

Risk rarely disappears; it migrates. Thus improvements in spam filters don’t reduce spam, but force it to move somewhere else–to images, or MP3s or PDF files. The same holds true for information security vulnerabilities in general, but figuring out where the risk will move, and how, is trickier. Chris Wysopal, a security researcher now with the vendor Veracode, believes he’s caught one of those migrations in progress. As detection and scanning technology gets better at finding accidental coding errors like buffer overflows, Wysopal believes malicious actors will turn more and more to backdoors–holes intentionally programmed into an application to allow access to it.

It wasn’t actually his idea. “We had many CSOs and security folks asking us if we could scan for backdoors,” says Wysopal. “We didn’t have scans at the time. So I just started looking around. I went to papers, mailing lists, just looking for anything I could find. It turns out there was very little real academic research on backdoors. A lot of government work would say, ’Step one, look for backdoors,’ but it never said how, or what to look for. I decided this research needed to happen.”

And that’s what Wysopal’s been up to–building up some basic research and a taxonomy of backdoors. CSO caught up with Wysopal to see how that research is going, what he’s discovered about backdoors in open source versus closed source software, and why we should assume backdoors are being planted in software.

CSO: First, let’s define what we’re talking about here. When you use the term “backdoor,” what do you mean?

Chris Wysopal photo
Wysopal: We split them into three types. Crypto backdoors are when someone designs crypto that they can come back to and easily break. Then there are system backdoors–that’s the rootkit phenomenon, when an attacker finds a vulnerability, gets root access and then installs a rootkit for continuing access. But the one we were focused on is the application backdoor. This is when the software is being developed legitimately, but someone has subverted the development process and has modified that legitimate application with code that is not supposed to be there. All of our research focused on this last category. Our thesis is that you can’t just look for standard vulnerabilities, which are essentially developer mistakes. You also have to look for risks that are intentionally put in code, or put in temporarily and meant to be removed before production. Some backdoors are planted maliciously, with the author hoping they make it into production; others are accidental and were never meant to reach the final code.

CSO: How do you research that?

Wysopal: We went back through 20 years of backdoors that had been made public, from Borland a long time ago to Linux backdoors to recent ones with WordPress and other content management systems. Then we looked at the several categories those application backdoors fell into and drilled down into those.

CSO: What are those categories?

Wysopal: The most common is the special credential backdoor. If you know it’s there, you can authenticate with it. It’s always there, and it gives you a privileged account. But it’s also nice and easy to look for statically; these things stand out like a sore thumb, so they’re easier to find.
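To make the idea concrete, here is a minimal sketch of what a special credential backdoor can look like in an authentication routine. All names, credentials, and the credential store are invented for illustration; the point is that the hardcoded literals in the planted branch are exactly the kind of thing a static scanner can flag.

```python
import hmac

# Stand-in for a legitimate credential store (invented for illustration).
USERS = {"alice": "correct horse"}

# The planted backdoor: a privileged account that is always there.
# Literals like these are what "stand out like a sore thumb" to static analysis.
BACKDOOR_USER = "maint"
BACKDOOR_PASS = "letmein-2007"

def authenticate(username: str, password: str) -> bool:
    # Backdoor branch: anyone who knows the special credential gets in.
    if username == BACKDOOR_USER and hmac.compare_digest(password, BACKDOOR_PASS):
        return True
    # Legitimate path: compare against the normal credential store.
    return hmac.compare_digest(USERS.get(username, ""), password)
```

A scanner that simply inventories string literals reachable from comparison operators in authentication code would surface `BACKDOOR_USER` and `BACKDOOR_PASS` immediately, which is why Wysopal calls this category easy to find statically.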

CSO: So are they the most common because they are, in fact, the most common, or because they’re ones you most commonly found?

Wysopal: Right now, any kind of prevalence is based on detection accuracy, so, yes, there may be a more common type of backdoor that’s harder to detect. It’s still rough. A lot of the backdoors are detected manually. Some come to light by accident, because source code is changed incorrectly or someone’s doing manual code review to understand code and they happen to see it. We found some in application appliances that were probably left in on purpose for support reasons. It’s actually hard to tell whether it was a malicious insider who planted the backdoor or a poorly designed support function.

CSO: But these backdoors that some developer plants are still pretty rare, right?

Wysopal: It’s hard to say, but it seems like they’re becoming more common. Coincidentally, in the fall the Department of Defense released a paper on the topic. They gave many pages to the idea that once you become a high-value target and you’ve spent a lot of money securing yourself, the threat of someone paying to bribe someone on a development team in your coding supply chain increases dramatically. It becomes a substantial risk; it becomes the weak link because you’ve spent so much to secure the other avenues.

CSO: So, more background checks on your coders?

Wysopal: It sounds like a solution, but the problem is, it might be easy to do with your coders or the coders at your outsourced company, but the development supply chain is so much more complex than that. Code changes hands many times. Those guys are buying or downloading off-the-shelf libraries. It gets really complex to the point you don’t really know the origin of all the code in your development chain.

CSO: One can assume that if someone is motivated enough to pay off a coder to install backdoors, they’re motivated to make it incredibly difficult to detect those backdoors.

Wysopal: It’s a pretty good guess that many are doing it so well you’ll have a hard time finding the backdoor in the first place. But those guys have a balance to strike. One way you detect backdoors is detecting ways a backdoor is being hidden. If you try too hard to hide it, you leave signs that you’re hiding something. I believe this will eventually become an arms race just like anti-virus, with both sides continually improving their detection and evasion.
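Wysopal’s point that hiding a backdoor itself leaves signs can be sketched as a toy static check. The patterns below are invented examples of constructs that often accompany an attempt to conceal a credential (strings assembled from character codes, runtime-decoded literals, long hex-escaped strings) rather than the credential itself:

```python
import re

# Toy evasion signatures (invented for illustration). Each flags a way of
# *hiding* a string literal, which is itself a detectable signal.
SUSPICIOUS = [
    (re.compile(r"chr\(\s*\d+\s*\)\s*\+\s*chr\("), "string built from char codes"),
    (re.compile(r"base64\.b64decode\("), "literal decoded at runtime"),
    (re.compile(r"(?:\\\\x[0-9a-fA-F]{2}){4,}"), "long hex-escaped string"),
]

def flag_hiding(source: str):
    """Return (line_number, reason) pairs for lines that look like concealment."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in SUSPICIOUS:
            if pattern.search(line):
                hits.append((lineno, reason))
    return hits
```

This is the arms-race dynamic in miniature: the harder an author works to keep a literal out of plain sight, the more of these concealment fingerprints the code accumulates.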

CSO: What else did you learn in your research?

Wysopal: The lifetime of a backdoor in open source is very short. It’s measured in weeks. The lifetime of a backdoor in closed source is measured in years. The many-eyes concept of open source is working to detect backdoors. We found that in most open source cases, the malicious or accidental opening was detected in a matter of days, sometimes a few weeks. But every backdoor found in the binary of proprietary software had been there for years, or for an indeterminate length of time. In no case was a closed source backdoor discovered within months. In one old case, Borland InterBase, the backdoor persisted across seven years’ worth of versions.

CSO: It can’t be that simple. You may detect backdoors more quickly with open source, but with so many people manipulating open source code, isn’t the number of backdoors to detect exponentially higher than in proprietary systems? And isn’t the potential virulence–the spreading of backdoors–much higher with open source?

Wysopal: Well, when we looked at special credential backdoors, the four biggest were all closed source products.

CSO: Anything else surprise you?

Wysopal: I gave a presentation at ACSAC 23 (the 23rd Annual Computer Security Applications Conference). There were lots of guys there from the NSA and the like. And the crowd for this workshop pretty much assumed that when you get to the nation-state level, you have malicious programmers embedded in the process. The question of why wouldn’t the CIA or NSA do this was almost rhetorical. And those guys assume other nations are doing it, too. From an intelligence standpoint, it makes sense. I’m not trying to sound like Chicken Little, but it’s something to think about.

Executive Editor Scott Berinato can be reached at

Related links:

Read more about Chris Wysopal in our in-depth coverage of a 90s hacking group to which he belonged: “L0pht in Transition: Most of the ’90s hacking group have emerged in legitimate roles. Was their work ultimately boon or bane for security?”
