The ‘hack back’ is not a defense strategy

Sep 27, 2017 | 4 mins
Cyberattacks | Cybercrime | Data and Information Security

The urge to strike back against bad actors is getting stronger in the wake of global attacks like Mirai, WannaCry and NotPetya. But while the hack back seems to put power back into victims' hands, it's actually not so simple.


There’s no way to put this lightly: As a security ecosystem, we’re in critical times. In fact, we may very well be facing nothing short of a pandemic – and despite all our resources and expertise, we seem to be woefully unprepared. So it’s not surprising that the idea of hacking back is once again gaining some traction.

While the concept of the hack back has been raised over the years, Tom Graves’ proposed Active Cyber Defense Certainty Act is bringing it to the forefront. Born of frustration in the wake of attacks like Mirai, WannaCry and NotPetya, it’s natural that the urge to defend, to strike back, is only getting stronger.

However, hacking back is not a defense strategy.

Hacking’s unintended consequences

On its surface, the hack back is intended to put tools in the hands of victims to identify alleged attackers, halt an alleged attack and potentially recover or delete stolen information. After notifying authorities, victims would be legally allowed to access the alleged attacker’s system and to take action by removing or altering the offending application or user. Seems simple enough. Except it’s not.

Hacking back is a prime example of the law of unintended consequences. First, while a victim may have legal cover to break into someone else’s system, he or she would have virtually no way of knowing that system’s purpose. Is it a medical device? Is it mission-critical to an organization? Most often, bad actors leverage the systems or IoT devices of unsuspecting consumers and organizations to carry out their misdeeds. The bill – and the premise of the hack back, in general – relies on the victim’s forensic capability to determine the source of the perceived damage. Who can foresee what impact his or her actions could have on the seemingly at-fault system?

Think about the Dyn attack: in just 11 hours, hundreds of thousands of IoT devices were used to propagate a volumetric attack capable of bringing down Amazon, Netflix, Twitter and a host of other major internet properties. What good would hacking back have served those victims? The point is that individual corporations shouldn’t bear the burden of weighing their own self-interests against the unintended consequences of accessing what could be the compromised computer or device of an unknowing victim-accomplice.

When it comes to ‘attacks,’ ambiguity abounds

What’s more, the act’s definitions of what constitutes an attack and what constitutes a compromised computer are ambiguous at best. For example, if a computer on the internet scans the public IP addresses of my corporate network, can I assume that the computer has been compromised? Does a port or website scan qualify as an attack? If so, we can imagine competitive interests fueling claims to hack back – and a new set of obstacles for security researchers and white hats whose work can mimic the very actors they seek to disrupt.

Finally, though this proposal may offer legal authority to hack back, there’s no telling where an exploit begins or ends. While such actions may eventually be allowed under U.S. law, back-hackers would still be liable under international law if the compromised or offending system is located overseas, as was the case with WannaCry. Whereas a bad actor is liable only if caught, someone hacking back would be held wholly accountable for his or her actions.

Rather than spin cycles on the understandable, but nonetheless short-sighted, reflex to fight hackers with hacking, we need to focus our collective time, energy and resources on fostering meaningful industry collaboration to thwart cyberattacks as they emerge. Cyber vigilantism won’t evolve our global defenses against the bad guys, but working together will.

Dale Drew is the chief security strategist at CenturyLink. In his role, Dale is responsible for global security product architecture, engineering and operations, and global threat research.

Dale is an accomplished and experienced corporate security executive with more than 30 years of experience in developing critical global security programs, working in federal/state law enforcement and with Internet service providers (ISP). Prior to CenturyLink, Dale served as CSO for Level 3 Communications. Previously, he worked for Qwest Communications and MCI, where he was responsible for Internet security operations and engineering. Dale spearheaded Operation Sundevil, the nation’s largest computer crime investigation, when he served as a member of the U.S. Secret Service. He also ran the Arizona State Forensic Lab, working for the attorney general’s office in the organized crime division.

The opinions expressed in this blog are those of Dale Drew and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.