Another month, another few dozen patches to install -- it's never-ending, and it's frustrating. Software coding tools supposedly have security built in by default. We have "safe" programming languages. We have programmers using SDL (security development lifecycle) tools and techniques. We have operating systems with more secure defaults and vendors that fuzz and attack their own software with a vengeance to find holes. We have companies spending billions of dollars to eliminate software bugs.

Why are there still so many? Why can't fuzzers and code testers find them all?

Here are five reasons why software is still full of bugs, despite so many well-meaning attempts to eradicate them.

1. Human nature

Most -- though not all -- coding bugs originate from human error. Some can be attributed to unexpected or weird outcomes from a software coding tool or compiler, but the majority result from mistakes made by a human programmer.

No matter how great the SDL training or security tools we receive, we are still human and we make mistakes. If you want to know why we still have software vulnerabilities, it's because humans are fallible.

That said, we're not doing enough to reduce human error. Many programmers simply aren't given sufficient (or any) SDL training, nor do they have incentives to program securely. I'm always surprised by how many programmers who write security software for a living don't understand secure programming. You can bet the bank that most security software you run has as many bugs as the software it is supposedly protecting, if not more.

But even highly trained coders who try their best miss bugs.
For instance, long ago, a bad guy created a buffer overflow in a browser using an HTML tag attribute that determined color. Instead of entering a color value like FFFFFF, the hacker could enter executable code into the color field, which the browser would consume, causing a buffer overflow. Voilà! Exploit. Few could have anticipated that one.

2. Increasing software complexity

By its nature, software keeps getting more complex, which of course means more lines of code. No matter how good you are at programming, there will be a certain number of bugs and mistakes (though not always exploitable ones) per lines of code written. People who count such things say that if you make only one mistake per 50 lines of code, you're doing pretty well. Most programmers veer closer to a mistake for every five to 15 lines of code. Consider that the Linux kernel has more than 15 million lines of code ... you do the math.

Even without coding errors, programmers can't anticipate an application's overall interactions in the Internet age. Most programs must talk to other APIs, save and retrieve files, and work across a multitude of devices. All those variables increase the chances of a successful exploit.

The good guys are always at a disadvantage. It takes much more code to defend against bad actors than it does to write an attack. I can write a malware program that can brick your computer in 30 assembly language instructions. It would probably take you at least 50,000 assembly language instructions to defend against the same attack.

3. Fuzzers are people, too

These days, fuzzers are used to tease out software vulnerabilities. But fuzzers -- like any programs that look for coding mistakes and vulnerabilities -- are written by people. Fuzzers didn't find that color attribute buffer overflow because they weren't written to look in that field. After the exploit succeeded, the fuzzers were updated, and they now look in all sorts of fields for similar buffer overflow conditions.
Fuzzers only do what we tell them to do.

4. Lack of vendor accountability

Many security experts complain that we'll never be more secure as long as we can't sue companies for software flaws. I agree that more vendor accountability would help decrease security risk, but increased legal liability would probably slow progress. You would not be holding that cool little cellphone, carrying that near-weightless music player, or watching movies over the Internet if we could hold software companies more accountable than they already are.

Success is driven by features and speed, not security. We as a society have decided that we will trade safety and security for newness. That's not necessarily a bad thing -- we get ahead faster. But we have to live with the downsides of that trade-off. So far, we are willing to accept a lot of risk to get the cool new thing.

5. Lack of hacker accountability

The reality is that none of the above will be fixed anytime soon. But the software vulnerability itself isn't really the problem; it's the exploitation of that vulnerability by those with malicious intent. As long as we let most hackers get away with murder, rampant hacking and malware will continue to plague us.

I still hold out hope that one day the Internet will be fixed, pervasive identity will be baked in by default, and we can hold those who do us harm accountable, as in the real world. Until that happens, we'll keep playing whack-a-mole defense and be barraged by constant software patches.

This story, "5 reasons why software bugs still plague us," was originally published at InfoWorld.com. Keep up on the latest developments in network security and read more of Roger Grimes' Security Adviser blog at InfoWorld.com.