


by David Rice

Geekonomics Excerpt: The Perversity of Patching

Jan 02, 2008 | 12 mins

In this excerpt from his new book Geekonomics, David Rice focuses on the security and economic impact of patching commercial software. It’s not a pretty picture.

The problem with testing software is that it is unlike testing automobiles, bridges, or any other physical item. Physical structures like bridges can be tested in a straightforward manner for maximum load-bearing capacity; software, by contrast, must have each instruction tested individually, a tedious and complex process as prone to error as creating the software itself. For example, if a bridge can support 200 tons, the design engineer can rightly assume the bridge can support any weight less than 200 tons. If a bridge can support 300 fully loaded trucks while covered in two feet of ice, it is safe to say the bridge can support a person riding a bike on a sunny day. In contrast, software must be tested for each and every potential value. A software engineer cannot extrapolate between test cases as a civil engineer can for a physical structure. If one series of instructions within a software application works correctly (for the sake of argument, it can “support” 10 lbs), this says nothing about the ability of a similar series of instructions to handle 8 lbs, 7 lbs, or even 9.9 lbs. In the software engineer’s world, each test is separate and distinct, unrelated to and independent of all other tests. This means that for even a moderately complex application, billions upon billions of tests must be conducted.
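The non-extrapolation point can be made concrete with a deliberately contrived sketch (the function and its defect are hypothetical, invented for illustration): a routine can pass tests at many nearby values and still harbor a latent defect at one specific input that no amount of interpolation would reveal.

```python
def supported(weight_lbs: float) -> bool:
    """Hypothetical load-check routine with a latent defect.

    The defect triggers at exactly one input value, so passing tests
    at 8, 9, and 10 lbs says nothing about the behavior at 9.9 lbs.
    """
    if weight_lbs == 9.9:  # latent defect: one specific value misbehaves
        return False
    return weight_lbs <= 10.0

# Tests at neighboring values all pass...
assert supported(8.0)
assert supported(9.0)
assert supported(10.0)
# ...yet the one untested value exposes the defect. A bridge engineer
# could safely interpolate between 8 and 10 tons; a software tester cannot.
assert not supported(9.9)
```

The bridge analogy fails precisely here: each input is, in effect, its own independent test case.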

Software companies spend at most about 35 percent of their production time debugging and correcting errors in their products.[58] Unfortunately, due to the immense complexity of testing software, many software errors, particularly damaging defects, remain latent and do not become apparent until much later; that is, not until the software application has become popular. By then, it is too late.

As a case in point, Microsoft’s Internet Explorer has a long history of vulnerabilities, making it the poster child of “what not to do” from a security perspective when designing and building a web browser. In response to Microsoft’s unsatisfactory progress in improving its browser’s security, multiple news columnists and members of the information security community encouraged computer users in 2004 to forgo Internet Explorer in favor of a free and supposedly much more “secure” alternative called Firefox.[59] At the time, however, Firefox was a promising new browser with only a few thousand early adopters, hardly what anyone would call popular. The call-to-arms changed that, and thousands upon thousands of people started downloading Firefox. As friends told friends, Firefox steadily became more popular and more exposed. Within months, vulnerabilities similar to those critics had complained about in Internet Explorer were being discovered uncomfortably often in Firefox.[60] Not only were they being discovered, they were being actively exploited by hackers, placing computer users in the same dangerous position they had been in with Internet Explorer. What happened?

The Firefox story highlights two important aspects of software testing and, ultimately, of market competition. First, attackers are drawn to whatever software application is popular, whether the application is a browser, word processor, operating system, or music player (Apple iTunes is plagued by security vulnerabilities as well). Second, because an application’s popularity correlates with the amount of attention attackers pay to it, latent defects will not be discovered until the application has become popular, at which point it is too late: everyone has already adopted a defect-ridden application. The first aspect is simple economics. Attackers, like everyone else, have limited time and resources, and they maximize their efforts by looking for vulnerabilities in increasingly popular applications: the more popular an application, the more potential victims exist, and the greater the return on their investment of time and effort. Thus, an application that appears “more secure” than a popular rival might only appear more secure because of its relative obscurity. This happened with Firefox, as well as with Apple and Google. For instance, many a fan of Apple’s Mac boasted that its operating system was “more secure” than Microsoft’s Windows, but this was more a testament to Apple’s meager popularity in the desktop market (a peak of 6 percent market share compared to Microsoft’s 90 percent) than to the quality and robustness of its software.
Since January 2005, Apple has released 262 security patches compared to Microsoft’s 157, a growth rate that tracks squarely with Apple’s rising popularity due to the iPod and the attention hackers pay to it as a result.[61] Even vaunted Google has come under attack as hackers see the popularity of its web-based applications rise.[62] The second aspect of the Firefox story is that software security, or the lack thereof, is only considered in any significant way once a software application becomes popular, not during the popularity contest. But by then it is too late. It is like trying to install a crumple zone after the car has been built. In other words, because latent defects remain hidden until the software achieves a certain level of popularity, such defects play no role in the software manufacturer’s competition to become popular. Therefore, thorough testing and security can easily be ignored or left incomplete by the manufacturer until after the application has become popular, given that only then do defects start being discovered more frequently. At that point, though, there is a large pool of potential victims who have already adopted the application and must now directly bear the burden of the manufacturer’s inadequate production practices.

Surprisingly, users of software bear further financial burden for latent defects by purchasing, implementing, and constructing a process around software patching solutions. The irony of a software patching solution is that it is yet another software application that automates the process of fixing the unreliable software purchased in the first place.

Patching is necessary if users want to protect their software systems from internal errors or exploitation from malicious attacks. Even if the patching solution is provided for free, the process of patching is not. Home users experience little upset if the patching solution automatically downloads and installs patches.

But even moderately sized organizations can experience painful expenditures keeping up with patches, patch status, auditing, and validation, all of which statutory regulations make necessary parts of the patching process. A 2004 Yankee Group study found that patching could cost an organization an average of $254 per computer.[63] But patching raises a rather perverse point. Even if thoroughly testing software were possible, software companies ultimately have a perverse incentive not to make better software.[64] First, because patching is more expensive for the software buyer than for the manufacturer; second, because upgrades create new revenue streams that would otherwise not materialize; and third, because new licensing terms can be negotiated at will by the manufacturer.


First, the marginal cost of releasing a patch tracks with the marginal cost of the original software product: while fixing a software defect can be expensive, the marginal cost of releasing the patch itself is nearly zero. This means the expense related to patching is borne almost entirely by software users, since it is the user’s responsibility to patch their systems. The larger the number of computer systems the software buyer must patch, the greater the expense for the software buyer, as the Yankee Group study shows. For a moderately sized company, this could approach $1 million per year. The software manufacturer, by contrast, bears only the fixed production cost of the patch itself. All copies of the patch are produced at essentially zero marginal cost; therefore, no matter how many computer systems the software buyer must patch, the expense to the manufacturer stays the same. For instance, if a patch costs a software manufacturer $200,000 to release to the public, the manufacturer’s cost does not increase with the number of systems that must be patched by software buyers. While estimates place the cost to software vendors of creating a patch at 100 times the cost of fixing the error during development, this has proven an empty motivator, simply because the economics still favor shipping the defect and patching it later. This is where the story of software gets rather more disturbing.
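The asymmetry can be checked with back-of-the-envelope arithmetic. The $254-per-machine figure is the Yankee Group estimate cited above and the $200,000 release cost is the example from the text; the 4,000-machine company is an illustrative assumption:

```python
# Cost asymmetry of patching: the buyer's cost scales with machine count,
# while the vendor's cost is fixed per patch release.
COST_PER_MACHINE = 254         # Yankee Group estimate, per computer patched
VENDOR_RELEASE_COST = 200_000  # fixed cost to the vendor to release one patch

def buyer_cost(machines: int) -> int:
    """Total patching cost to the buyer, which grows with every machine."""
    return COST_PER_MACHINE * machines

def vendor_cost(machines: int) -> int:
    """Cost to the vendor, which does not grow with deployed machines."""
    return VENDOR_RELEASE_COST

# A moderately sized firm with ~4,000 machines spends about $1 million,
# roughly the figure cited in the text...
print(buyer_cost(4_000))       # 1016000
# ...while the vendor pays the same whether 1 or 1,000,000 systems patch.
print(vendor_cost(1))          # 200000
print(vendor_cost(1_000_000))  # 200000
```

The arithmetic makes the incentive plain: every additional deployed copy shifts more of the total cost of a defect onto buyers, not the manufacturer.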

Achieving market dominance often means the dominant software company must compete against itself at some point. There are no other competitors to drive out, so the only applications to compete with are the previous versions of the application already installed by customers. In other words, earlier versions of successful applications inhibit the acceptance of newer versions of the same application. For instance, the largest competitor to Microsoft Office 2007 is Microsoft Office 2003. This poses a problem for Microsoft, or any software manufacturer in a similar position, considering that once a particular market has been saturated, new revenue can come only from upgrades. Failure to sufficiently motivate users to upgrade from an earlier version detrimentally affects the manufacturer’s short- and long-term revenue from newer versions. A company competing largely against itself can literally put itself out of business unless it can compel buyers to upgrade to the newer version. As such, upgrades are marketed on the basis of new and improved features that software buyers presumably cannot live without. Ironically, the process of adding more and more features to compel users to upgrade increases the number of latent defects in the application, resulting in a constant stream of new vulnerabilities as well as a constant stream of patches.

Nonetheless, upgrades do work, most of the time. Some users upgrade easily; some do not. What can be said with reasonable confidence is that a majority of users will upgrade to the newest version eventually. From a marketing and revenue perspective, “eventually” is not sufficient, because performance metrics (such as profits) are due to shareholders and the market every fiscal quarter. Moreover, a certain portion of users will remain stridently resistant to upgrades, forgoing newer, better, and faster in favor of familiar and more stable. These laggards will never upgrade unless the manufacturer applies an incentive.

These holdouts, or laggards, depending on your point of view, are frustrating for software manufacturers. One option for software vendors is to simply refuse to provide patches for earlier versions of a software application. Without a source of patches, users are left exposed to an endless stream of uncertainty:

• Will another defect be discovered that will affect uptime?

• Will an attacker find a weakness that can only be properly mitigated with a patch?

• Will I be able to communicate and share files with others using newer versions?

These rudimentary questions and the vulnerabilities they imply usually create sufficient pressure to force upgrades on an unwilling population. But the upgrades might be needed not because new functionality is compelling, but primarily because manufacturers, having achieved a certain level of market dominance, must force users to upgrade or risk going out of business. The third and final aspect of the perverse incentives against thorough testing is the ability to offer new licensing terms with the patch or upgrade on a take-it-or-leave-it basis. For instance, to install an upgrade or apply a patch, software buyers must often agree to a software licensing agreement before the patch or upgrade can be installed. Users who do not agree to the license terms cannot fix or upgrade their systems, so in many respects they must accept the new terms or bet against themselves.

However, this creates the perverse incentive for software manufacturers to use upgrades and patches as vehicles for “renegotiating” licensing terms on an ad-hoc basis.[65] This means if the law governing aspects of software contracts should change for any reason, or the vendor wishes to change their contract terms for any arbitrary reason whatsoever, the new, updated language can be included by the manufacturer in a subsequent patch or upgrade. Microsoft has been especially notorious for this behavior with its “hopelessly confusing, practically Byzantine Windows licensing structure,” but the same is true for thousands of other software manufacturers.[66]

In the end, failing to sufficiently test software not only provides an avenue to force buyers to upgrade, but also provides an avenue for software vendors to renegotiate licensing agreements with those who do. Given the constant stream of defects any given software vendor produces, and given the tendency of users to accept dreadfully biased licensing agreements that favor only the manufacturer, there is no incentive for software manufacturers to forgo a proven method for collecting continued revenue and a dependable mechanism for re-establishing optimal market and legal protection.

It gets worse…


Geekonomics is available for purchase at

This excerpt is printed with permission of Pearson Education from the book Geekonomics by author David Rice.



58 “The Economic Impacts of Inadequate Infrastructure for Software Testing,” NIST, May 2002.

59 Mossberg, Walter, “How to Protect Yourself From Vandals, Viruses If You Use Windows,” The Wall Street Journal, September 16, 2004.

60 Naraine, Ryan, “Zero-day Firefox Exploit Sends Mozilla Scrambling,” eWeek, May 9, 2005.

61 Acohido, Byron, “Cybercrooks constantly find new ways into PCs,” USA Today, August 2006.

62 Arrington, Michael, “Google: Security Mishaps and User Trust,” October 18, 2006.

63 McAlearney, Shawna, “Yankee says Patching Costs Companies Millions.”

64 Barnes, Douglas, “Deworming the Internet,” Texas Law Review, 83 Tex. L. Rev. 279. The preceding paragraphs are a summarization of Barnes’s exceptional paper on issues precipitating failure in the software market.

65 Barnes, Douglas, “Deworming the Internet,” Texas Law Review, 83 Tex. L. Rev. 279.

66 Bott, Ed, “Microsoft’s Licensing Mess,” June 7, 2007.