Software Patching: Patch and Pray

Patching, the only way to prevent poorly designed software from breaking everything, no longer works. And there's nothing you can do about it. Except maybe patch less. Or possibly patch more.

Early one Saturday morning in January, from a computer definitely located somewhere within the seven continents, or possibly on the four oceans, someone sent 376 bytes of code inside a single data packet to a SQL Server. That packet, which would come to be known as the Slammer worm, infected the server by sneaking in through UDP port 1434. From there it generated a set of random IP addresses and scanned them. When it found a vulnerable host, Slammer infected it, and from its new host generated still more random addresses, hungrily scanning for yet more vulnerable hosts.

Slammer was a nasty bugger. In the first minute of its life, it doubled the number of machines it infected every 8.5 seconds. (Just to put that in perspective, back in July 2001, the Code Red virus concerned experts because it doubled its infections every 37 minutes. Slammer peaked in just three minutes, at which point it was scanning 55 million targets per second.)

Then, almost as quickly, Slammer started to decelerate, a victim of its own startling efficiency as it bumped into its own scanning traffic. Still, by the 10-minute mark, 90 percent of all vulnerable machines on the planet were infected. When Slammer subsided, talk focused on how much worse it would have been had the worm hit on a weekday or, worse, carried a destructive payload.
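That trajectory (explosive doubling, then self-throttling as scans collide with each other) is the signature of a random-scanning epidemic. A minimal sketch of the dynamic, using illustrative parameters rather than Slammer's actual figures:

```python
# Random-scanning worm spread approximates logistic growth: each
# infected host finds new victims at a rate proportional to the
# fraction of the vulnerable population still uninfected.

def simulate_spread(total_vulnerable, initial, rate, steps):
    """Discrete-time logistic model of worm propagation.

    rate: expected new infections per infected host per time step
    while uninfected hosts are still plentiful (illustrative value).
    """
    infected = initial
    history = [infected]
    for _ in range(steps):
        # New infections slow as the pool of uninfected hosts shrinks.
        uninfected_fraction = 1 - infected / total_vulnerable
        infected += infected * rate * uninfected_fraction
        infected = min(infected, total_vulnerable)
        history.append(infected)
    return history

# Hypothetical population of 75,000 vulnerable hosts, one initial victim.
curve = simulate_spread(total_vulnerable=75_000, initial=1, rate=1.0, steps=30)
```

Early in the run each step roughly doubles the infected count; near the end, almost every scan hits an already-infected or invulnerable address, and the curve flattens, which is exactly the deceleration observers saw.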

Talk focused on patching. True, Slammer was the fastest spreading worm in history, but its maniacal binge occurred a full six months after Microsoft had released a patch to prevent it. Those looking to cast blame, and there were many, cried a familiar refrain: If everyone had just patched his system in the first place, Slammer wouldn't have happened.

But that's not true. And therein lies our story.

Slammer was unstoppable. Which points to a bigger issue: Patching no longer works. Partly, it's a volume problem. There are simply too many vulnerabilities requiring too many combinations of patches coming too fast. Picture Lucy and Ethel in the chocolate factory; just take out the humor.

But perhaps more important and less well understood, it's a process problem. The current manufacturing process for patches (from disclosure of a vulnerability to the creation and distribution of the updated code) makes patching untenable. At the same time, the only way to fix insecure post-release software (in other words, all software) is with patches.

This impossible reality has sent patching and the newly minted discipline associated with it, patch management, into the realm of the absurd. More than a necessary evil, it has become a mandatory fool's errand.

Hardly surprising, then, that philosophies on what to do next have bifurcated. Depending on whom you ask, it's either time to patch less (replacing the process with vigorous best practices and a little bit of risk analysis) or time to patch more (by automating the process with, yes, more software).

"We're between a rock and a hard place," says Bob Wynn, CISO of the state of Georgia. "No one can manage this effectively. I can't just automatically deploy a patch. And because the time it takes for a virus to spread is so compressed now, I don't have time to test them before I patch either."

With patching, the only certainty is that CISOs will bear the costs of bringing order to the intractable. In this penny-pinching era, other C-level executives are bound to ask the CISO why this is necessary, at which point someone's gonna have some 'splaining to do.

The Learned Art

Patching is, by most accounts, as old as software itself. Unique among engineered artifacts, software is not beholden to the laws of physics in that it can endure fundamental change relatively easily even after it's been "built." Automobile engines don't take to piston redesigns post-manufacture nearly so well.

This unique element of software has contributed to (though is not solely responsible for) the software engineering culture, which generally regards quality and security as obstacles. An adage among programmers suggests that when it comes to software, you can pick only two of three: speed to market, number of features, level of quality. Programmers' egos are wrapped up in the first two; rarely do they pick the third (since, of course, software is so easily repaired later, by someone else).

Such an approach has never been more feckless. Software today is massive (Windows XP contains 45 million lines of code), and the rate of sloppy coding (10 to 20 errors per 1,000 lines of code) has led to thousands of vulnerabilities. CERT published 4,200 new vulnerabilities last year; that's 3,000 more than it published three years ago. Meanwhile, software continues to find itself running ever more critical business functions, where its failure carries profound implications. In other words, right when quality should be getting better, it's getting exponentially worse.

Stitching patches into these complex systems, which sit within labyrinthine networks of similarly complex systems, makes it impossible to know whether a patch will solve the problem it's meant to without creating unintended consequences. One patch, for example, worked fine for everyone, except the unlucky users who happened to have a certain Compaq system connected to a certain RAID array without certain updated drivers. In that case, the patch knocked out the storage array.

Tim Rice, network systems analyst at Duke University, was one of the unlucky ones. "If you just jump in and apply patches, you get nailed," he says. "Patching is a learned art. You can set up six different systems the same way, apply the same patch to each, and get one system behaving differently."

Raleigh Burns, security administrator at St. Elizabeth's Medical Center, agrees. "Executives think this stuff has a Mickey Mouse GUI, but even chintzy patches are complicated."

The conventional wisdom is that when you implement a patch, you improve things. But Wynn isn't convinced. "We've all applied patches that put us out of service. Plenty of patches actually create more problemsthey just shift you from one vulnerability cycle to another," he says. "It's still consumer beware."

Yet for many who haven't dealt directly with patches, there's a sense that patches are simply click-and-fix. In reality, they're often patch-and-pray. At the very least, they require testing. Some financial institutions, says Shawn Hernan, team leader for vulnerability handling in the CERT Coordination Center at the Software Engineering Institute (SEI), mandate six weeks of regression testing before a patch goes live. Third-party vendors often take months after a patch is released to certify that it won't break their applications.

All of which makes the post-outbreak admonition to "patch more vigilantly" farcical and, probably to some, offensive. It's the complexity and fragility, not some inherent laziness or sloppy management, that explain why Slammer could wreak such havoc 185 days after Microsoft released a patch for it.

"We get hot fixes every day, and we're loath to put them in," says Frank Clark, senior vice president and CIO of Covenant Health Care, whose six-hospital network was knocked out when Slammer hit, causing doctors to revert to paper-based care. "We believe it's safer to wait until the vendor certifies the hot fixes in a service pack."

On the other hand, if Clark had deployed every patch he was supposed to, nothing would have been different. He would have been knocked out just the same.

Software Patching: Process Horribilis

Slammer neatly demonstrates everything that's wrong with manufacturing software patches. It begins with disclosure of the vulnerability, which happened in the case of Slammer in July 2002, when Microsoft issued patch MS02-039. The patch steeled a file called ssnetlib.dll against buffer overflows.

"Disclosure basically gives hackers an attack map," says Gary McGraw, CTO of Cigital and the author of Building Secure Software. "Suddenly they know exactly where to go. If it's true that people don't patch (and they don't), disclosure helps mostly the hackers."

Essentially, disclosure's a starter's gun. Once it goes off, it's a footrace between hackers (who now know what file to exploit) and everyone else (who must all patch their systems successfully). The good guys never win this race. Someone probably started working on a worm targeting ssnetlib.dll when Microsoft released MS02-039, or shortly thereafter.

In the case of Slammer, Microsoft built three more patches in 2002 (MS02-043 in August, MS02-056 in early October and MS02-061 in mid-October) for related SQL Server vulnerabilities. MS02-056 updated ssnetlib.dll to a newer version; otherwise, all of the patches played together nicely.

Then, on October 30, Microsoft released Q317748, a nonsecurity hot fix for SQL Server. Q317748 repaired a performance-degrading memory leak. But the team that built it had used an old, vulnerable version of ssnetlib.dll. When Q317748 was installed, it could overwrite the secure version of the file and thus make that server as vulnerable to a worm like Slammer as one that had never been patched.
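That failure mode, a later hot fix silently reinstalling an older copy of a file a security patch had already replaced, is mechanically detectable: compare version numbers before letting an installer overwrite anything. A minimal sketch of the check, using hypothetical version strings rather than the actual ssnetlib.dll build numbers:

```python
def parse_version(version_string):
    """Split a dotted version like '2000.80.534.0' into comparable integers."""
    return tuple(int(part) for part in version_string.split("."))

def is_downgrade(installed, incoming):
    """True if the incoming file is older than the one already on disk."""
    return parse_version(incoming) < parse_version(installed)

# Hypothetical scenario mirroring the Q317748 incident: a hot fix
# bundles an older build of a DLL than the security patch installed.
installed_dll = "2000.80.534.0"  # build left by the earlier patch (illustrative)
hotfix_dll = "2000.80.384.0"     # older build inside the hot fix (illustrative)

if is_downgrade(installed_dll, hotfix_dll):
    print("WARNING: hot fix would replace a newer file with an older one")
```

Tuple comparison handles multi-digit components correctly ("80" vs "534"), which naive string comparison would not; the point is simply that nothing in the era's hot-fix process performed even this trivial guard.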

"As bad as software can be, at least when a company develops a product, it looks at it holistically," says SEI's Hernan. "It's given the attention of senior developers and architects, and if quality metrics exist, that's when they're used."

And then there are patches.

Patch writing is relegated to entry-level maintenance programmers, says Hernan. They fix problems where they're found. They have no authority to look for recurrences or to audit code. And the patch coders face severe time constraints; remember, there's a footrace on. They don't have time to communicate with other groups writing other patches that might conflict with theirs. (Not that they're set up to communicate. Russ Cooper, who manages NTBugtraq, the Windows vulnerability mailing list, says companies often divide maintenance by product group and let them develop their own tools and strategies for patching.) There's little, if any, testing of patches by the vendors that create them.

Ironically, maintenance programmers write patches using the same software development methodologies employed to create the insecure, buggy code they ostensibly set out to fix. Imagine that 10 people are taught to swim improperly, and one guy goes in the water and starts to drown. Do you want to rely on the other nine to jump in and save him?

From this patch factory comes a poorly written product that can break as much as it fixes. For example, an esoteric flaw found last summer in an encryption program (one so arcane it might never have been exploited) was patched. The patch itself had a gaping buffer overflow written into it, and that was quickly exploited, says Hernan. In another case last April, Microsoft released patch MS03-013 to fix a serious vulnerability in Windows XP. On some systems, it also degraded performance by roughly 90 percent. The performance degradation required another patch, which wasn't released for a month.

Slammer feasted on such methodological deficiencies. It infected both servers made vulnerable by conflicting patches and servers that were never patched at all because the SQL patching scheme was kludgy. These particular patches required scripting, file moves, and registry and permission changes to install. (After the Slammer outbreak, even Microsoft engineers struggled with the patches.) Many avoided the patch because they feared breaking SQL Server, one of their critical platforms. It was as if their car had been recalled and the automaker mailed them a transmission with installation instructions.

Confusion Abounds

The initial reaction to Slammer was confusion on a Keystone Kops scale. "It was difficult to know just what patch applied to what and where," says NTBugtraq's Cooper, who's also the "surgeon general" at vendor TruSecure.

Slammer hit at a particularly dynamic moment: Microsoft had released Service Pack 3 for SQL Server days earlier. It wasn't immediately clear if SP3 would need to be patched (it wouldn't), and Microsoft early on told customers to upgrade their SQL Server to SP3 to escape the mess.

Meanwhile, those trying to use MS02-061 were struggling mightily with its kludginess, and those who had patched (but got infected and watched their bandwidth sucked down to nothing) were baffled. At the same time, a derivative SQL application called MSDE (Microsoft Desktop Engine) was causing significant consternation. MSDE runs in client apps and connects them back to the SQL Server. Experts assumed MSDE would be vulnerable to Slammer, since all of the earlier patches had applied to both SQL Server and MSDE.

That turned out to be true, and Cooper remembers a sense of dread as he realized MSDE could be found in about 130 third-party applications. It runs in the background; many corporate administrators wouldn't even know it's there. Cooper found it in half of TruSecure's clients. In fact, at Beth Israel Deaconess Hospital in Boston, MSDE had caused an infestation although the network SQL Servers had been patched. But that's another story for another time.
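Finding those hidden instances was itself a technical exercise. Discovery tools of the era hunted for SQL Server and MSDE by sending a one-byte probe to UDP 1434 and parsing the reply from the SQL Server Resolution Service, the same listener Slammer exploited. A sketch of just the parse-and-triage step, using a fabricated reply; the semicolon-delimited field layout and the SP3 build number (8.00.760) are assumptions about that era's tooling, not details from this article:

```python
def parse_ssrs_reply(payload):
    """Parse the semicolon-delimited key/value body of a SQL Server
    Resolution Service browse reply into a dict (field names assumed)."""
    fields = payload.strip(";").split(";")
    return dict(zip(fields[::2], fields[1::2]))

# Fabricated reply of the kind an instance listening on UDP 1434 returned.
reply = "ServerName;APPSRV01;InstanceName;MSDE;IsClustered;No;Version;8.00.194"
info = parse_ssrs_reply(reply)

# SQL Server 2000 SP3 is assumed here to report build 8.00.760; anything
# lower answering on this port would be a Slammer candidate.
unpatched = tuple(map(int, info["Version"].split("."))) < (8, 0, 760)
```

The awkward part, as Cooper's experience suggests, wasn't the parsing; it was that administrators had to sweep every desktop subnet, because MSDE answered on 1434 from machines nobody thought of as database servers.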

When customers arrived at work on Monday and booted up their clients, which in turn loaded MSDE, Cooper worried that Slammer would start a re-infestation, or maybe it would spawn a variant. No one knew what would happen. And while patching thousands of SQL Servers is one thing, finding and patching millions of clients with MSDE running is another entirely. Still, Microsoft insisted, if you installed SQL Server SP3, your MSDE applications would be protected.

It seemed like reasonable advice.
