5 reasons why software bugs still plague us

With the time and expense spent locking down code, most popular programs should be bulletproof -- yet hackers find a way

Another month, another few dozen patches to install -- it's never-ending. It's frustrating.

Software coding tools supposedly have security built in by default. We have "safe" programming languages. We have programmers using SDL (security development lifecycle) coding tools and techniques. We have operating systems with more secure defaults and vendors that fuzz and attack their own software with a vengeance to find holes. We have companies spending billions of dollars to eliminate software bugs.
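To make one of those defenses concrete, here's a minimal sketch of what fuzzing boils down to, assuming a hypothetical parse_color() routine standing in for whatever code the vendor is testing: hammer it with random input and watch for crashes. Real fuzzers such as AFL or libFuzzer are far smarter about generating inputs, but the core loop looks like this.

```c
/* Minimal fuzzing sketch (illustrative only). parse_color() is a
 * made-up stand-in for whatever routine is under test. */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Stand-in parser: accepts six hex digits like "FFFFFF". */
static int parse_color(const char *input)
{
    if (strlen(input) != 6)
        return -1;
    for (int i = 0; i < 6; i++)
        if (!isxdigit((unsigned char)input[i]))
            return -1;
    return 0;
}

int main(void)
{
    char buf[64];
    srand((unsigned)time(NULL));

    /* Feed the parser a million random strings; if it ever crashes
     * or corrupts memory, the fuzzer has found a bug. */
    for (long i = 0; i < 1000000L; i++) {
        size_t len = (size_t)(rand() % (int)(sizeof(buf) - 1));
        for (size_t j = 0; j < len; j++)
            buf[j] = (char)(rand() % 256);
        buf[len] = '\0';
        parse_color(buf);
    }

    puts("survived 1,000,000 random inputs");
    return 0;
}
```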


Why still so many? Why can't fuzzers and code testers find them all?

Here are five reasons why software is still full of bugs, despite so many well-meaning attempts to eradicate them:

1. Human nature

Most -- though not all -- coding bugs originate from human error. A few can be chalked up to unexpected or weird behavior from a coding tool or compiler, but the majority are mistakes made by a human programmer.

No matter how good the SDL training or the security tools we're given, we are still human and we make mistakes. If you want to know why we still have software vulnerabilities, it's because humans are fallible.

That said, we're not doing enough to reduce human error. Many programmers simply aren't given sufficient (or any) SDL training, nor do they have incentives to program securely. I'm always surprised by how many programmers who write security software for a living don't understand secure programming themselves. You can bet the bank that most security software you run has as many bugs as -- if not more than -- the software it's supposed to protect.

But even highly trained coders who try their best miss bugs. For instance, long ago, a bad guy created a buffer overflow in a browser using an HTML tag field that determines color. Instead of entering FFFFFF or some similar value, the hacker could enter executable code into the color field, which the browser would consume, triggering a buffer overflow. Voilà! Exploit. Few could have anticipated that one.
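The underlying pattern is easy to show. Here's a minimal sketch, assuming a hypothetical set_color() routine and an 8-byte buffer (the actual browser code was never published): a fixed-size buffer filled by an unchecked copy of attacker-controlled input.

```c
/* Hypothetical sketch of the bug pattern described above -- not the
 * actual browser source. */
#include <string.h>

void set_color(const char *attr_value)
{
    char color[8];              /* sized for "FFFFFF" plus a NUL */
    strcpy(color, attr_value);  /* BUG: no length check; input longer
                                   than 7 bytes overruns the stack, and
                                   a crafted payload can seize control
                                   of execution */
    /* ... use color ... */
}

/* The fix is a bounded copy that refuses to write past the buffer: */
void set_color_safe(const char *attr_value)
{
    char color[8];
    strncpy(color, attr_value, sizeof(color) - 1);
    color[sizeof(color) - 1] = '\0';
    /* ... use color ... */
}
```

The lesson isn't this one bug; it's that the unchecked-copy pattern can hide anywhere untrusted input meets a fixed-size buffer, including places as innocuous as a color attribute.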

2. Increasing software complexity

By its nature, software keeps getting more complex, which of course means more lines of code. With programming, no matter how good you are, there will be a certain number of bugs and mistakes (though not always exploitable) per lines of code. People who count such things say that if you make only one mistake for every 50 lines of code, you're doing pretty well. Most programmers veer closer to one mistake for every five to 15 lines of code. Consider that the Linux kernel has more than 15 million lines of code ... you do the math.
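Or let the machine do it. Here's a back-of-the-envelope sketch using the rough defect rates above (the rates and the 15-million-line figure are the only inputs; nothing here is measured data):

```c
/* Back-of-the-envelope defect estimate using the rough rates above. */
#include <stdio.h>

int main(void)
{
    const long lines = 15000000L;  /* ~15 million lines (Linux kernel) */

    /* "Doing pretty well": one mistake per 50 lines. */
    printf("At 1 bug / 50 lines: %ld bugs\n", lines / 50);

    /* The more typical range cited: one per 5 to 15 lines. */
    printf("At 1 bug / 15 lines: %ld bugs\n", lines / 15);
    printf("At 1 bug / 5 lines:  %ld bugs\n", lines / 5);
    return 0;
}
```

Even at the optimistic rate, that's 300,000 mistakes lurking in a single codebase.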

Even without coding errors, programmers can't anticipate all of an application's interactions in the Internet age. Most programs must talk to other APIs, save and retrieve files, and work across a multitude of devices. All those variables increase the chances of a successful exploit.
