Patching Software: The Big Fix

Feature
Oct 07, 2002 | 19 mins
Application Security, Enterprise Applications, Patch Management Software

Let’s start where conversations about software usually end: Basically, software sucks.

In fact, if software were an office building, it would be built by a thousand carpenters, electricians and plumbers. Without architects. Or blueprints. It would look spectacular, but inside, the elevators would fail regularly. Thieves would have unfettered access through open vents at street level. Tenants would need consultants to move in. They would discover that the doors unlock whenever someone brews a pot of coffee. The builders would provide a repair kit and promise that such idiosyncrasies would not exist in the next skyscraper they build (which, by the way, tenants will be forced to move into).

Strangely, the tenants would be OK with all this. They’d tolerate the costs and the oddly comforting rhythm of failure and repair that came to dominate their lives. If someone asked, “Why do we put up with this building?” shoulders would be shrugged, hands tossed and sighs heaved. “That’s just how it is. Basically, buildings suck.”

The absurdity of this is the point, and it’s universal, because the software industry is strangely irrational and antithetical to common sense. It is perhaps the first industry ever in which shoddiness is not anathema; it’s simply expected. In many ways, shoddiness is the goal. “Don’t worry, be crappy,” Guy Kawasaki wrote in 2000 in his book, Rules for Revolutionaries: The Capitalist Manifesto for Creating and Marketing New Products and Services. “Revolutionary means you ship and then test,” he writes. “Lots of things made the first Mac in 1984 a piece of crap, but it was a revolutionary piece of crap.”

The only thing more shocking than the fact that Kawasaki’s iconoclasm passes as wisdom is that executives have spent billions of dollars endorsing it. They’ve invested, and reinvested, in software built to be revolutionary and not necessarily good. And when those products fail, or break, or allow bad guys in, the blame finds its way everywhere except to where it should go: on flawed products and the vendors that create them.

“We’ve developed a culture in which we don’t expect software to work well, where it’s OK for the marketplace to pay to serve as beta testers for software,” says Steve Cross, director and CEO of the Software Engineering Institute (SEI) at Carnegie Mellon University. “We just don’t apply the same demands that we do from other engineered artifacts. We pay for Windows the same as we would a toaster, and we expect the toaster to work every time. But if Windows crashes, well, that’s just how it is.”

Application security, until now an oxymoron of the highest order, like “jumbo shrimp,” is why we’re starting here, where we usually end. Because it’s finally changing.

A complex set of factors is conspiring to create a cultural shift away from the defeatist tolerance of “that’s just how it is” toward a new era of empowerment. Not only can software get better, it must get better, say executives. They wonder, Why is software so insecure? and then, What are we doing about it?

In fact, there’s good news when it comes to application security, but it’s not the good news you might expect: application security is changing for the better in a far more fundamental and profound way. Observers invoke the automotive industry’s quality wake-up call in the ’70s. One security expert summed up the quiet revolution with a giddy, “It’s happening. It’s finally happening.”

Even Kawasaki seems to be changing his rules. He says security is a migraine headache that has to be solved. “Don’t tell me how to make my website cooler,” he says. “Tell me how I can make it secure.”

“Don’t worry, be crappy” has evolved into “Don’t be crappy.” Software that doesn’t suck. What a revolutionary concept.

Why Is Software So Insecure?

Software applications lack viable security because, at first, they didn’t need it. “I graduated in computer science and learned nothing about security,” says Chris Wysopal, technical director at security consultancy @Stake. “Program isolation was your security.”

The code-writing trade grew up during an era when only two things mattered: features and deadlines. Get the software to do something, and do it as fast as possible. Cyra Richardson, a developer at Microsoft for 12 years, has written code for most of the company’s major pieces of software, including Windows 3.1. “The measure of a great app then was that you did the most with the fewest resources,” she says: memory, lines of code, development hours. So no one built secure applications, but no one asked for them either. Windows 3.1 was “a program made up almost entirely of customers’ grassroots demands for features to be delivered as soon as possible,” Richardson recalls.

Networking changed all that. It allowed someone to hack away at your software from somewhere else, mostly undetected. But it also meant that more people were using computers, so there was more demand for software. That led to more competition. Software vendors coded frantically, under the insecure pedagogy, to outwit competitors with more features sooner. That led to what one software developer called “featureitis.” Inflammation of the features.

Now, features make software do something, but they don’t stop it from unwittingly doing something else at the same time. E-mail attachments, for example, are a feature. But e-mail attachments help spread viruses. That is an unintended consequence, and the more features, the more unintended consequences.

As networking spread and featureitis took hold, some systems were compromised. The worst case was in 1988 when a graduate student at Cornell University set off a worm on the ARPAnet that replicated itself to 6,000 hosts and brought down the network. At the time, events like that were the exception.

By 1996, the Internet supported 16 million hosts. Application security, or, more specifically, the lack of it, turned exponentially worse. The Internet was a joke in terms of security, easily compromised by dedicated attackers. Teenagers were cracking anything they wanted to: NASA, the Pentagon, the Mexican finance ministry. The odd part is, while the world changed, software development did not. It stuck to its features/deadlines culture despite the security problem.

Even today, the software development methodologies most commonly used still cater to deadlines and features, and not security. “We have a really smart senior business manager here who controls a large chunk of this corporation but hasn’t a clue what’s necessary for security,” says an information security officer at one of the largest financial institutions in the world. “She looks at security as, Will it cost me customers if I do it? She concludes that requiring complicated, alphanumeric passwords means losing 12 percent of our customers. So she says no way.”

Software development has been able to maintain its old-school, insecure approach because the technology industry adopted a less-than-ideal fix for the problem: security applications, a multibillion-dollar industry’s worth of new code to layer on top of programs that remain foundationally insecure. But there’s an important subtlety. Security features don’t improve application security. They simply guard insecure code and, once bypassed, can allow access to the entire enterprise.

That’s triage, not surgery. In other words, the industry has put locks on the doors but not on the loading dock out back. Instead of securing networking protocols, firewalls are thrown up. Instead of building e-mail programs that defeat viruses, antivirus software is slapped on.

When the first major wave of Internet attacks hit in early 2000, security software was the savior, brought in at any expense to mitigate the problem. But attacks kept coming, and more recently, security software has lost much of its original appeal. That, combined with a bad economy, a new focus on national security, pending regulation that focuses on securing information and sheer fatigue from the constant barrage of attacks, spurred CSOs to think differently about how to fix the security problem.

In addition, a bevy of new research has been published showing that there is an ROI for vendors and users in building more secure code. Plus, a new class of software tools has been developed to automatically ferret out the most gratuitous software flaws.

Put it all together, and you get (ta da!) change. And not just change, but profound change. In technology, change usually means more features, more innovation, more services and more enhancements. In any event, it’s the vendor defining the change. This time, the buyers are foisting on vendors a better kind of change. They’re forcing vendors to go back and fix the software that was built poorly in the first place. The suddenly efficacious corporate software consumer is holding vendors accountable. He is creating contractual liability and pushing legislation. He is threatening to take his budget elsewhere if the code doesn’t tighten up. And it’s not just empty rhetoric.

Mary Ann Davidson, CSO at Oracle, claims that now “no one is asking for features; they want information assurance. They’re asking us how we secure our code.” Adds Scott Charney, chief security strategist at Microsoft, “Suddenly, executives are saying, We’re no longer just generically concerned about security.”

So What Are We Doing About It?

Specifically, all this concern has led to the empowerment of everyone who uses software, and now they’re pushing for some real application security. Here are the reasons why.

Vendors have no excuse for not fixing their software because it’s not technically difficult to do. For anyone who bothers to look, the numbers are overwhelming: 90 percent of hackers tend to target known flaws in software. And 95 percent of those attacks, according to SEI’s Cross, among other experts, exploit one of only seven types of flaws. (See “Common Vulnerabilities,” opposite page.) So if you can take care of the most common types of flaws in a piece of software, you can stop the lion’s share of those attacks. In fact, if you eliminate the most common security hole of all, the dreaded buffer overflow, Cross says you’ll scotch nearly 60 percent of the problem right there.

“It frustrates me,” says Cross. “It was kind of chilling when we realized half-a-dozen vulnerabilities were causing most of the problems. And it’s not complex stuff either. You can teach any freshman compsci student to do it. If the public understood that, there would be an outcry.”

SEI and others such as @Stake are shining a light on these startling facts (and making money in doing so), and it has started to have an effect. Wysopal at @Stake says he’s seeing more empowered and proactive customers, and in turn, vendors are desperately seeking ways to keep those empowered customers.

“It’s been a big change,” he says. “We still get a lot of [customers saying], We’re shipping in a week. Could you look at the app and make sure it’s secure? But we’re seeing more clients sooner in the development process. Security always was the thing that delayed shipment, but they’ve started to see the benefits: better communication between developers, creating more robust applications that have fewer failures. The truth is, it doesn’t take that much longer to write a line of code that doesn’t have a buffer overflow than one that does. It’s just building awareness into the process so that, eventually, your developers simply don’t write buffers with unbounded strings.”
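
To make Wysopal’s point concrete, here is a minimal C sketch, not drawn from @Stake or any vendor, contrasting an unbounded string copy (the classic buffer-overflow pattern) with a bounded one; the function and buffer names are illustrative assumptions.

```c
#include <stdio.h>
#include <string.h>

/* Classic overflow: strcpy() keeps writing past the 16-byte buffer
   whenever the caller-supplied input is longer than 15 characters. */
void greet_unsafe(const char *name)
{
    char buf[16];
    strcpy(buf, name);               /* no bounds check: overflow risk */
    printf("Hello, %s\n", buf);
}

/* Bounded version: copy at most sizeof(buf) - 1 bytes and always
   terminate the string, so oversized input is truncated, not executed. */
void greet_safe(const char *name)
{
    char buf[16];
    strncpy(buf, name, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    printf("Hello, %s\n", buf);
}

int main(void)
{
    greet_safe("a deliberately long, attacker-style input string");
    return 0;
}
```

The safe version costs two extra lines, which is the point: the discipline is cheap once developers expect to apply it.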

In fact, it’s a little more complicated than that. Even if, starting tomorrow, no new programs contained buffer overflows (and, of course, it will take years of training and development to minimize buffer overflows), there are billions of lines of legacy code out there containing 300 variations on the buffer-overflow theme. What’s more, in a program with millions of lines of code, there are thousands of instances of buffer overflows. They are needles in a binary haystack.

Fortunately, some enterprising companies have built tools that automate the process of finding the buffers and fixing the software. The class of tool is called secure scanning or application scanning, and the effect of such tools could be profound. They will allow CSOs to, basically, audit software. They’ve already become part of the security auditing process, and there’s nothing to stop them from becoming part of the application sales process too. Wysopal tells the story of a CSO who brought him a firewall for vulnerability testing and scanning. When a host of serious flaws were found, the customer literally sent the product back to the vendor and, in so many words, said, If you want us to buy this, fix these vulnerabilities. To preserve the sale, the vendor fixed the firewall.
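
Commercial scanners are far more sophisticated than this, but a toy C sketch suggests the basic idea: walk through source code and flag calls to routines that are frequent buffer-overflow culprits, leaving the findings for a human to review. The list of flagged functions and the command-line usage are assumptions for illustration, not a description of any vendor’s product.

```c
#include <stdio.h>
#include <string.h>

/* Library calls whose unbounded variants are frequent overflow culprits. */
static const char *risky[] = { "strcpy(", "strcat(", "sprintf(", "gets(" };

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <source-file.c>\n", argv[0]);
        return 1;
    }
    FILE *src = fopen(argv[1], "r");
    if (!src) { perror("fopen"); return 1; }

    char line[1024];
    int lineno = 0, findings = 0;
    while (fgets(line, sizeof(line), src)) {
        lineno++;
        for (size_t i = 0; i < sizeof(risky) / sizeof(risky[0]); i++) {
            if (strstr(line, risky[i])) {        /* naive textual match */
                printf("%s:%d: possible unbounded call %s\n",
                       argv[1], lineno, risky[i]);
                findings++;
            }
        }
    }
    fclose(src);
    printf("%d potential issue(s) flagged for manual review\n", findings);
    return 0;
}
```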

Strong contracts are making software better for everyone. According to @Stake research, vendors should realize that there’s an ROI in designing security into software earlier rather than later. But Wysopal believes that’s not necessarily the only motivation for companies to improve their code’s safety. “I think they also see the liability coming,” he says. “I think they see the big companies building it into contracts.”

A contract GE signed with software vendor General Magic Inc. earlier this year has security officers and experts giddy and encouraged by its language (see “Put It in Writing,” this page). In essence it holds General Magic fully accountable for security flaws and dictates that the vendor pay for fixing the flaws.

General Magic officials say they weren’t surprised by the language in the contract, but many experts say the company has to be pretty confident in its products to sign off. The effect of the contract, though, is to improve software in general. The vendor must make secure applicationsor fix them so they’re secureto conform to its contract with a customer, but that makes the software better for everyone.

Clout is not limited to the Fortune 500. Sure, it’s easy for GE to write such a contract, given that GE is part of the Fortune 2. And there’s nothing wrong with CSOs benefiting from GE’s clout, the corporate equivalent of drafting in auto racing.

But for CSOs at companies smaller than GE (which is everyone but Wal-Mart), there are other ways to force the issue with vendors. One can join the Sustainable Computing Consortium at Carnegie Mellon University or the Internet Security Alliance, formed under the Electronic Industries Alliance. These interest groups help companies of all sizes band together on standardizing contract language and best practices for software development.

Some are taking satisfaction in a good old-fashioned boycott, even if they are so small as to escape the vendor’s notice. Newnham College at the University of Cambridge in England, with 700 users, recently banned Microsoft’s Outlook from use on campus because of the virus problem.

Much of the clout CSOs gain will come from the market evolving. In a sense, the software makers create clout for the CSO by asking her to deploy the product for ever more critical business tasks. At some point, the potential damage an insecure product could inflict will dictate whether it will be purchased.

“Two years ago, the marketing strategy was to just get it out there. And some of the stuff that went out was really insecure,” says the anonymous ISO at the large financial institution. “But now, we just say, applications don’t go live without security. It’s a sledgehammer.”

And it’s not a randomly wielded one either. His company has created a formal process to assess vendors’ applications as well as its own software development. It includes auditing, penetration testing and requiring vendors to conform to overarching security criteria, such as eliminating buffer overflows. It’s not unusual, the security officer says, for his group to spend $40,000 per quarter testing and breaking a single application.

“Customers are vetting us,” says Davidson. “Not just kicking the tires, but they’re asking how we handle vulnerabilities. Where is our code stored? Do we do regression testing? What are our secure coding standards? It’s impressive, but it’s also just plain necessary.

“They have to be demanding. If customers don’t make security a basic criteria, they lose their right to complain in a lot of ways when things go bad,” she says.

The bank, the security officer says, keeps a running list of vendors that are “certified”; that is, they’ve successfully met the application security criteria by going through the formal process. The list is incentive for vendors to clean up their code, because if they’re certified, they have an advantage over those that aren’t the next time they want to sell software. Vendors, he says, “have either gone broke trying to satisfy our criteria, or they run through the operation pretty well. A few see what we demand and just run away. But there doesn’t seem to be any middle ground.”

The government is taking an active role. The image of the government in security is that of a clumsy organization tripping over its own red tape. But right now, at least in terms of application security, the government is a driving force, and the government’s efforts to improve software are making a joke of the private sector.

In fact, no industry has been more effective in the past year at pushing vendors into security or using its clout (often, that comes in the form of regulation) to effect change.

At the state level, legislatures have collectively ignored the Uniform Computer Information Transactions Act (UCITA), a complex law that would in part reduce liability for software vendors (most major vendors have backed UCITA).

Federally, money has poured into the complex skein of agencies dealing with critical infrastructure protection, which has taken on a life of its own since 9/11. Equally important but not as well publicized, the feds in July fully implemented the National Security Telecommunications and Information Systems Security Policy No. 11, called NSTISSP (pronounced nissTISSip), after a two-year phase-in. The policy dictates that all software that’s in some way used in a national security setting must pass independent security audits before the government will purchase it.

The government has for more than a decade tried to implement such a policy, but it has been put off. Vendors have routinely been able to receive waivers through loopholes in order to avoid the process. The July move is considered a line in the sand. With national security on everyone’s mind, experts believe waivers will be harder to come by. The Navy is telling kvetching vendors to use NSTISSP no. 11 as a way to gain a competitive advantage. At any rate, products will have to be secured, or the government won’t buy them. Like GE’s contract, this makes software better for everyone.

The ability of the public sector to whip vendors into shape on application security is best represented, though, by John Gilligan, CIO of the Air Force, who in March told Microsoft to make better products or he’ll take his $6 billion budget elsewhere. It was a challenge by proxy to all software vendors. At the time, Gilligan said he was “approaching the point where we’re spending more money to find patches and fix vulnerabilities than we paid for the software.” And he wasn’t shy about labeling software security a “national security issue.”

Microsoft Chief Security Strategist Charney called himself a “nudge and a pest by nature,” and he may have found his counterpart in Gilligan, who in addition to mobilizing the Air Force is encouraging other federal agencies to use similar tactics. Gilligan says he was encouraged by Bill Gates’s notorious “Trustworthy Computing” memo, his mea culpa proclamation in January that Microsoft software must get more secure, but that “the key will be, what’s the follow-through?”

Nudging Vendors

Gilligan is right, and clever, to invoke patches as a major part of his problem. If a vendor is not convinced that secure development is a good idea after getting proof of an ROI from securing applications early, or after gaining the favor of large customers by submitting to a certification process or a contract with strong language, then patches might do the trick.

Patches are like ridiculously complex tourniquets. They are the terrible price everyone, vendors and CSOs alike, pays for 30 years of insecure application development. And they are expensive. Davidson at Oracle estimates that one patch the company released cost Oracle $1 million. Charney won’t estimate. But what’s clear is that the economics of patching is quickly getting out of hand, and the vendors appear to be motivated to ameliorate the problem.

At Microsoft, it starts with security training, required for all Microsoft programmers as a result of Gates’s memo. Michael Howard, coauthor of Writing Secure Code, and Steve Lipner, manager of Microsoft’s security center (Patch Central), are running the effort to make Microsoft software more secure.

The training establishes new processes (coding through defense in depth, that is, writing your piece of code as if everything around your code will fail). It sets new rules (security goals now go in requirements documents at Microsoft; insecure drivers are summarily removed from programs, a practice that Richardson says would have been heresy not long ago). And it creates a framework for introducing Microsoft teams to the concept of managed code (essentially, reusable code that comes with guarantees about its integrity).
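
As a rough illustration of that “write your code as if everything around it will fail” idea, here is a small defensive sketch in C of our own devising, not Microsoft training material; the file name, the function and the port setting it reads are hypothetical.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Defensive sketch: assume the caller, the file system and the file
   contents can all misbehave, and fail closed instead of guessing. */
int read_port_setting(const char *path, int *port_out)
{
    if (path == NULL || port_out == NULL)        /* don't trust the caller */
        return -1;

    FILE *cfg = fopen(path, "r");                /* don't trust the file system */
    if (cfg == NULL)
        return -1;

    char line[64];
    if (fgets(line, sizeof(line), cfg) == NULL) {    /* don't trust the contents */
        fclose(cfg);
        return -1;
    }
    fclose(cfg);

    errno = 0;
    char *end = NULL;
    long value = strtol(line, &end, 10);
    if (errno != 0 || end == line || value < 1 || value > 65535)
        return -1;                               /* reject malformed or out-of-range values */

    *port_out = (int)value;
    return 0;
}

int main(void)
{
    int port;
    /* "service.conf" is a hypothetical configuration file for this example. */
    if (read_port_setting("service.conf", &port) == 0)
        printf("listening on port %d\n", port);
    else
        fprintf(stderr, "bad or missing configuration; refusing to start\n");
    return 0;
}
```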

A year and several hundred million dollars later, it’s still not clear if the two-day security training for Microsoft’s developers is giving them a fish, or teaching them to fish. Richardson seems to believe the latter. She says the training starts with “religion, apple pie and how-we-have-to-save-America speeches.” And, she says, it includes at least one tough lesson: “You can’t design secure code by accident. You can’t just start designing and think, Oh, I’ll make this secure now. You have to change the ethos of your design and development process. To me, the change has been dramatic and instant.”

To Microsoft customers, it’s a more muted reaction. Since Gates’s proclamation, gaping security holes have been found in Internet Information Server 5.0, reminding the world that legacy code will live on. Even the company’s gaming console, Xbox, was cracked, indicating the pervasiveness of the insecure development ethos and how hard it will be to change.

Microsoft also faces an extremely skeptical community of CSOs and other security watchdogs. Don O’Neill, executive vice president for the Center for National Software Studies, says, “When it comes to trustworthy software products, Microsoft has forfeited the right to look us in the face.”

So let’s end where conversations about application security usually begin: Microsoft.

Richardson’s reaction to Gates’s memo was not much different than anyone else’s. “I wondered how much of this was a marketing issue compared with a real consumer issue,” she says.

The memo has become a reference point in the evolution of application security, the event cited as the start of the current sea change. In truth, the tides were turning for a year or more, and if a date must be given, it would be Sept. 18, 2001, one week after 9/11 and the day that the Nimda virus hit. Microsoft’s entering the fray, as it did with the Internet in 1995, also via a memo, is more an indication that the latecomers have arrived, a sort of cultural quorum call.

It was, “We’re all here so let’s get started,” the beginning of the era of application security as a real discipline, and not an oxymoron.