Attention, ‘red team’ hackers: Stay on target

Analysis
Dec 08, 2015 | 4 mins
Data and Information Security | Hacking | Network Security

You hire elite hackers to break your defenses and expose vulnerabilities -- not to be distracted by the pursuit of obscure flaws

The most fun I’ve had as a security guy was getting paid to penetration-test companies and websites. It’s like getting paid to be a gamer. You earn a fat paycheck to hang out with friends and hack away without fear of being arrested.

Most large companies today have multiple teams of professional pen testers, often both internal and external, trying to hack their systems. The most elite of these are called “red teams,” a military phrase used to describe any friendly group directed to think and plan like an independent enemy. The idea is that independent thinkers might find holes in your defenses that other people inside might not find.

Through the years I’ve been part of some great red teams — and heard countless stories of how red teams not only broke in, but did so discreetly, without setting off any alarms. Every red team I’ve been a member of over the last 20 years has taken no more than three hours to break in without social engineering. If social engineering was allowed, it usually took less than an hour.

Each person on a red team usually has his or her favorite go-to techniques. Some members attack at the network layer, others attack only websites or databases, others use particular tools or languages. A good red team can sniff out hidden weaknesses — and share them with the defenders to prepare against real-world attackers.
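
To make the network-layer idea concrete, here is a minimal sketch, in Python, of the kind of first-pass reconnaissance a network-focused tester might run: checking whether a few common TCP ports on a target host accept connections. The host name and port list are placeholders for illustration, not anything from a real engagement, and of course this is only ever run with permission.

    import socket

    # Hypothetical target and ports -- placeholders for illustration only.
    TARGET = "host.example.internal"
    COMMON_PORTS = [22, 80, 443, 445, 3389]

    def probe(host, port, timeout=1.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                return sock.connect_ex((host, port)) == 0
        except OSError:  # e.g., the name doesn't resolve
            return False

    for port in COMMON_PORTS:
        state = "open" if probe(TARGET, port) else "closed/filtered"
        print(f"{TARGET}:{port} looks {state}")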

But a good red team should mimic what adversaries are likely to do. In many cases, red teams have drifted away from what real-life adversaries would likely attempt and focused instead on techniques you’d be unlikely to see in the wild. Those techniques may end up taking over the whole system or network, but they’re unrealistic and overly “sexy.”

Unfortunately, management usually hears that the crown jewels of the system have been compromised and mandates that defenders close those holes, without considering whether they would likely be exploited in the real world.

This happens in military exercises as well. One of my favorite red-team analyses is U.S. Army Major David F. Longbine’s paper “Red Teaming: Past and Present,” which examines examples of both good and bad red-teaming. He sums it up in this telling paragraph:

These errors consist of over-reliance on technology … failing to adapt to battlefield developments, misreading the adversary … Most of these “errors” are essentially failures or misguided attempts to apply the red teaming core concepts of incorporating alternative analysis and alternative perspective into decision-making. A fundamental precondition of the red team core concepts is that they must exhibit some degree of realism and accuracy.

I’m seeing more and more red teams where success is measured solely by their ability to break in — without an assessment of whether the hack successfully mimics techniques hackers would be likely to employ. Sure, any hole that allows access to critical assets should be addressed. But resources should be concentrated on closing likely vulnerabilities, not obscure ones. Security is all about assessing risk — and good defense always focuses on the most imminent threats.
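
One way to picture that risk-first mindset is to rank red-team findings by how likely the technique is to be used against you, not just by how badly the break-in ended. Here is a minimal sketch, with invented findings and illustrative 1-5 scores, of that kind of prioritization:

    # Invented findings; likelihood and impact are illustrative 1-5 scores.
    findings = [
        {"name": "Phishing leading to domain admin", "likelihood": 5, "impact": 5},
        {"name": "Unpatched internet-facing web app", "likelihood": 4, "impact": 4},
        {"name": "Exotic kernel race condition", "likelihood": 1, "impact": 5},
    ]

    # Risk = likelihood x impact: fix the most probable, most damaging holes first.
    for f in sorted(findings, key=lambda f: f["likelihood"] * f["impact"], reverse=True):
        print(f"risk {f['likelihood'] * f['impact']:>2}: {f['name']}")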

The greatest value a red team can deliver is to break into your environment while mimicking real-world attackers. That shows you the remaining gaps and weaknesses in your defenses. There’s particular value when a red team breaks in through holes you thought were well defended, especially if the intrusion succeeds without triggering any alerts or notifications for the defending team.

Admittedly, I’m the pot calling the kettle black. When I think back to my days red-teaming, I didn’t care how I broke in, only that I broke in. Heck, the more obscure the method I used, the more I liked it, and the more the customer seemed threatened.

There’s a common saying in the computer world: “Security by obscurity is no security!” I think that idea gets overapplied; on the attack side, relying on obscure methods produces less value than it could. The best red teams survey the field of battle and mimic the most likely adversaries and techniques. Managers and defenders should require it.

Roger A. Grimes
Columnist

Roger A. Grimes is a contributing editor. Roger holds more than 40 computer certifications and has authored ten books on computer security. He has been fighting malware and malicious hackers since 1987, beginning with disassembling early DOS viruses. He specializes in protecting host computers from hackers and malware, and consults for companies ranging from the Fortune 100 to small businesses. A frequent industry speaker and educator, Roger currently works for KnowBe4 as the Data-Driven Defense Evangelist and is the author of Cryptography Apocalypse.
