Confessions of a security pro: I was wrong about host hardening

After years of preaching host hardening, this security expert realizes the practice isn't always beneficial -- and can be harmful

In four of my eight books on Windows security, I've preached host hardening, explaining to readers how to fine-tune their computers beyond the defaults to decrease the attack surface. Probably a quarter of the articles I wrote in the last decade, out of hundreds, were about host hardening. In the two weeks since the latest IPv6 exploit was published, I've received more questions about the practice. Who could argue with the "least privilege" dogma that underpins most of computer security?

Well, after 20-plus years of giving hardening advice, I realized I was wrong. A few factors have changed -- for example, nearly every OS vendor now has very reasonable and relatively secure defaults. But there's more, including a startling personal realization: In general, there is very little evidence to support the case that a company tightening Windows beyond Microsoft's (my full-time employer) recommendations experiences any significant benefit.


Yes, leaving unneeded services turned on may increase the possible attack surface, but good security is all about risk management and cost/benefit trade-off. Why disable a service or tighten a permission if it isn't attacked? Why expend the energy and increase your operational risk? As it stands, there are greater risks to worry about.

I'm in the field 90 percent of the time helping clients fight off hackers, and all the attacks I see stem from client-side, socially engineered Trojans or application data malformation. I've never seen (in real life) an attack made possible because an organization did not harden its defenses beyond the vendor's defaults or recommendations. It's always because the organization accidentally weakened some default setting it shouldn't have, ran socially engineered Trojans, or didn't follow advice that everyone has been promoting for 10 years, such as good patching, strong passwords, and so on.

Many security practitioners want to disable unneeded services to decrease the risk of remote buffer overflows and the like. But since March 2003, there have only been a handful of truly remote buffer overflows in default Microsoft services. Most of the buffer overflows you read about are only considered "remotely" exploitable in that gaining access to an inside resource from outside the network requires tricking an end-user into clicking on something.

Most remote buffer overflows, especially the biggest ones, affected services that everyone was either required to run (such as RPC) or had to run because of needed functionality -- a Web server, SQL Server, and so on. The three most successful attacks in the history of Microsoft Windows -- Blaster on RPC, Code Red on IIS, and the SQL Slammer worm -- demonstrate this. Note that these major exploits happened a long time ago, and in all three cases, vendor patches were available, sometimes for months, before the remote exploit hit.
