Roger Grimes
Columnist

Zero-second exploits

Analysis
May 02, 2008
Data and Information Security


Microsoft SQL Server hasn't had a public vulnerability announcement since 2004. The SQL Slammer worm struck in 2003, but the hole the worm exploited had been patched six months before. The holes that the MS-Blaster and Code Red worms attacked had been patched, too. But just a few years ago, no one really cared about patching. We simply didn't patch.

Over the course of malware history, the number of days between a vendor patch being released and the malware exploit being announced has shrunk. Consequently, in today's Internet-connected, crimeware world, you've got to get patched as soon as possible. Most organizations try to get their critical patches applied within one to two weeks. They'd love to do it faster, but regression testing and plain hard-work logistics take time. Sometimes public exploit code and malicious worms start attacking within a few days (or, in some cases, a few hours).

Security defenders have always wondered how their professional lives would change if the time from patch release to exploit shrank from days or hours to seconds. That's exactly the discussion a paper by several researchers sought to provoke. The paper, "Automatic Patch-Based Exploit Generation is Possible: Techniques and Implications," discussed the plausibility of an engine that could take a patch and generate a related exploit within minutes to seconds. Testing with five Microsoft patches, the authors were able to create an exploit engine that generated exploits within minutes, including one in less than 30 seconds.
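To make the core idea concrete, here is a rough Python sketch of the principle, not the researchers' actual tooling: the check a patch adds is itself a roadmap to the inputs the unpatched code mishandles. The handler names and the 64-byte limit below are invented purely for illustration.

MAX_SAFE = 64  # hypothetical buffer size in the unpatched handler

def handle_unpatched(data: bytes) -> None:
    # Pretend this copies `data` into a fixed 64-byte buffer with no length
    # check; anything longer than 64 bytes would corrupt memory in real code.
    buffer = bytearray(MAX_SAFE)
    for i, b in enumerate(data):
        buffer[i % MAX_SAFE] = b  # wraps here only so the toy doesn't crash

def handle_patched(data: bytes) -> None:
    if len(data) > MAX_SAFE:      # the one check the patch introduced
        raise ValueError("input too long")
    handle_unpatched(data)

def derive_exploit_candidate() -> bytes:
    # "Diffing" the two versions reveals the new len(data) > 64 rejection.
    # An automatic engine treats the negation of that new check as the
    # vulnerability condition and solves for a concrete input meeting it.
    return b"A" * (MAX_SAFE + 1)

candidate = derive_exploit_candidate()
try:
    handle_patched(candidate)
except ValueError:
    print("patched handler rejects an input the unpatched one would swallow")

The real research works at the binary level with differencing and constraint solving, which is far harder than this toy, but the shape of the shortcut is the same.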

At first, the authors seem to be stretching the imagination a bit by claiming it would not be difficult to create an exploit engine that worked across a wide range of patches and platforms. The idea is that this APEG (Automatic Patch-Based Exploit Generation) engine could be used by bad guys to quickly generate exploit code and infect the masses before the masses got the patches installed. It seemed like pure fantasy until I realized that it is highly likely that commercial exploit vendors and government info-warfare teams have sophisticated exploit engines far more capable than what was presented in the paper. That's what I love about the computer security field — it's forever turning imagination into nightmares.

But ignoring for the moment whether such an engine does exist, certainly we have to assume that the time from patch to exploit will continue to decrease over time. So for discussion’s sake, let’s just assume that the crimeware conglomerates make such an engine – one second from patch release to widespread wormed exploit. Would that change how you defend your environment? Would that change how you patch?

One of the paper's recommendations is to make fast patching available. This idea breaks down immediately under the need (and obligatory wait time) to conduct appropriate regression testing in most environments, which takes days to weeks (at least). Their ideas for obfuscating or encrypting the patch to make it harder to reverse-engineer have some merit, but adding some sort of self-unsealing patch means that we will have even more patch failures than we have now. Or do we just get rid of regression testing?
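If you want a feel for what a "sealed" patch might look like, here is a minimal, hypothetical sketch (it assumes the third-party Python cryptography package; the names are mine, not any vendor's):

from cryptography.fernet import Fernet

# Vendor side: seal the patch payload before publishing it.
key = Fernet.generate_key()          # delivered to endpoints by some separate channel
sealed_patch = Fernet(key).encrypt(b"stand-in bytes for the fixed binary")

# Endpoint side: unseal only at the moment of installation.
def apply_patch(patch_bytes: bytes) -> None:
    print(f"applying {len(patch_bytes)} patch bytes")  # stand-in for the real installer

def install(sealed: bytes, unseal_key: bytes) -> None:
    apply_patch(Fernet(unseal_key).decrypt(sealed))

install(sealed_patch, key)

Of course, every endpoint that can install the patch can also unseal it, so a scheme like this buys delay, not secrecy.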

For years, several different teams have been looking into creating host- or network-based IDSes that can intercept incoming malicious exploit binaries and render them harmless. For example, suppose a vulnerability is announced where sending five A's in a row (yes, this is a simplistic example, but go with me) causes a buffer overflow in e-mail clients. Suppose, too, that either the vulnerability has been publicly disclosed along with proof-of-concept details, or the patch has been released but not yet applied, leaving the customer at risk. The idea is that the IDS could intercept the incoming malicious data stream, recognize the malicious bytes, and remove them or otherwise render them harmless (or just drop the stream altogether). The client gets protection long enough to do the appropriate patch regression testing, or perhaps never has to implement the patch at all because of the offsetting defense.
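A minimal Python sketch of that interception step, using the same simplistic five-A signature, might look like this (the function name and the byte-blanking strategy are my own invention; a real IDS would parse the protocol rather than grep raw bytes):

from typing import Optional

SIGNATURE = b"AAAAA"   # stand-in for the byte pattern that triggers the overflow

def filter_stream(chunk: bytes, drop_on_match: bool = False) -> Optional[bytes]:
    """Inspect an incoming chunk before it reaches the unpatched client.

    Returns the chunk (possibly sanitized), or None if the stream should be
    dropped entirely.
    """
    if SIGNATURE not in chunk:
        return chunk          # clean traffic passes untouched
    if drop_on_match:
        return None           # harshest option: kill the stream
    # Milder option: neutralize the trigger bytes so the client still gets
    # the rest of the message without the overflow pattern.
    return chunk.replace(SIGNATURE, b"-" * len(SIGNATURE))

# A message carrying the trigger gets defanged (or dropped) in transit.
incoming = b"Subject: hi\r\n\r\nAAAAA rest of the exploit payload"
print(filter_stream(incoming))
print(filter_stream(incoming, drop_on_match=True))

The hard part in real tools is recognizing the malicious bytes reliably across encodings and protocol states, which is where the products discussed next spend most of their effort.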

Snort has offered plug-ins to do similar things for years, and it was the first network-based IDS I remember that addressed the many challenges faced by any tool trying to do that sort of analysis and response. Microsoft Research has projects called Shield and GAPA (Generic Application-Level Protocol Analyzer); that work claimed research success across 10 protocols at speeds up to 60Mbps, while the analyzers remained memory-safe and DoS-resilient. There's even an entire company, Blue Lane, dedicated to this idea. I reviewed its ServerShield appliance in 2006 (called PatchPoint back then) and gave it a pretty good ranking. None of these solutions are perfect. They all have their limitations.

But it raises the question: what should you be planning to do differently as the window between patch and exploit continues to shrink?

I, only slightly humorously, believe that we should fight back with one-second patch engines. I don't think anyone should get too concerned about automatic exploit engines. For one thing, the bad guys are being pretty successful without them. Second, if such engines ever do become a reality, the security defense community has been working on this threat for a long time, and appropriate solutions would come out pretty quickly. Instead of worrying about future attacks, most security administrators should focus on being more consistent about the things they can do today to lower security risk.

Roger A. Grimes is a contributing editor. Roger holds more than 40 computer certifications and has authored ten books on computer security. He has been fighting malware and malicious hackers since 1987, beginning with disassembling early DOS viruses. He specializes in protecting host computers from hackers and malware, and consults to companies from the Fortune 100 to small businesses. A frequent industry speaker and educator, Roger currently works for KnowBe4 as the Data-Driven Defense Evangelist and is the author of Cryptography Apocalypse.
