
Another massive DDoS internet blackout could be coming your way

News Analysis
Feb 28, 2018 | 6 mins
Cybercrime, Hacking, Network Security

Check your DNS, people. And please, make it redundant.

[Illustration: DDoS attack. Credit: Thinkstock]

A massive internet blackout similar to the Dyn DNS outage in 2016 could easily happen again, despite relatively low-cost countermeasures, according to a new study out of Harvard University.

The DDoS attack on Dyn took many major websites offline for most of a day, including Twitter, PayPal, Reddit, Amazon, and Netflix. Millions of compromised IoT devices, belonging to the Mirai botnet, flooded Dyn’s DNS service with up to 1.2 Tbps of bogus traffic, making it impossible for Dyn to respond to genuine DNS requests for its customers’ websites.

The Dyn attack did not affect the PayPal or Twitter servers in any way, but these sites were unreachable for the vast majority of humans who prefer not to memorize IP addresses when sending money to scammers or shitposting on social media.

The attackers were not nation-state actors but rather garden-variety criminals with an axe to grind. “The perpetrators were most likely hackers mad at Dyn for helping Brian Krebs identify–and the FBI arrest–two Israeli hackers who were running a DDoS-for-hire ring,” Bruce Schneier wrote at the time.

The growing legion of insecure IoT devices–insecure out of the box, and often unpatchable–means that the next DDoS attack on the domain name system could be much more severe. The centralization of DNS providers is largely to blame.

When single points of failure fail

DNS was designed to be distributed, but the growing centralization of DNS creates single points of failure, the authors note. “The attack’s devastating success highlights many of the ways in which a concentrated DNS space with relatively little provider diversification on the part of domain administrators can leave even large firms vulnerable to service disruptions.”

How did we get here, you may ask? Turns out our decade-long love affair with other people’s computers–I mean, the cloud–has resulted in a concentration of internet infrastructure that the designers of DNS never anticipated.

In ye olden days, companies managed their own DNS in house. That required humans managing computers in an office who could otherwise be building the next great thing. You know, like Uber.

While older, more established companies are still more likely to host their own DNS, the emergence of cloud as infrastructure means that newer companies are outsourcing everything to the cloud, including DNS.

“The concentration of DNS services into a small number of hands…exposes single points of failure that weren’t present under the more distributed DNS paradigm of yesteryear (one in which enterprises most often hosted their own DNS servers onsite),” John Bowers, one of the report’s co-authors, tells CSO. “The Dyn attack offers a perfect illustration of this concentration of risk–a single DDoS attack brought down a significant fraction of the internet by targeting a provider used by dozens of high profile websites and CDNs [content delivery networks].”

The shocking part of this report is that despite the clear danger this concentration poses, too few enterprises have bothered to implement any secondary DNS.

Those who fail to learn from history are doomed to repeat it

The Dyn attack got a lot of media coverage, including right here at CSO. Cassandras preached about the need to diversify DNS, but few in the audience bothered to listen, the numbers show. “It seems that the lessons of the Dyn attack were learned primarily by those who suffered from them directly,” the report notes.

Before the 2016 attack, 92.2 percent of the domains studied used name servers from just one provider. Six months after the attack, in May 2017, that figure had dropped only to 87.3 percent, and most of the domains that diversified were Dyn customers who experienced the outage firsthand.

Even Dyn itself, now owned by Oracle, offers a secondary DNS service and encourages its customers to use it. In a brief prepared statement, Dyn’s director of architecture Andrew Sullivan told CSO that “website operators need diversity all through their stack, and to select components like DNS services, web firewalls, and DDoS protection that support diversity.”

One difficulty of diversifying external DNS providers, the report notes, is that external DNS is often bundled with other services, like a CDN and DDoS protection. CloudFlare has more than 15 percent market share as DNS provider for the domains studied, yet the company’s bundled DNS and DDoS protection services, the report notes, “make it impossible for domains to register DNS name servers managed by other providers.”

The report notes a trend among new domains to use cloud-based platforms that include DNS as one of a suite of service offerings. Amazon AWS can withstand any DDoS attack, you might think, but remember that time a typo by an Amazon employee brought down S3? Both accidents and adversaries threaten single points of failure.

You wouldn’t build a bridge without redundancy, so why would you build your DNS infrastructure without it?

How to make your DNS redundant

The first thing you should do is figure out what your current setup is, if you don’t already know. Check your name servers:

    dig +short NS <your-domain>

“If the names that come back are in your own domain, that means you’re doing it yourself,” Andy Ellis, CSO of CDN provider Akamai, tells CSO. “You should consider whether that’s the right call, for most companies it isn’t. If you already have a CDN provider, there is a good chance DNS service is available either with your existing contract or as an add on; that’s a fast way to add, or switch, a provider.”
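To make Ellis’s check concrete, here is a minimal sketch that groups a set of NS hostnames by provider, using a naive last-two-labels heuristic. The sample records are hypothetical stand-ins for what `dig +short NS` might return, not real output for any domain:

```python
from collections import Counter

def provider_of(ns_host: str) -> str:
    # Naive heuristic: treat the last two DNS labels as the provider's
    # registrable domain. (Imperfect for providers that spread name
    # servers across many TLDs, but fine for a quick sanity check.)
    labels = ns_host.rstrip(".").split(".")
    return ".".join(labels[-2:])

def provider_counts(ns_records):
    # Map each name server to its provider and tally the results.
    return Counter(provider_of(ns) for ns in ns_records)

# Hypothetical NS records, for illustration only.
sample = [
    "ns1.p01.dynect.net.",
    "ns2.p01.dynect.net.",
    "kate.ns.cloudflare.com.",
    "uma.ns.cloudflare.com.",
]

counts = provider_counts(sample)
print(counts)            # tally of NS records per provider
print(len(counts) >= 2)  # True means at least two providers in play
```

If every name server resolves to a single provider (or to your own domain), that is the single point of failure the report is warning about.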

While low traffic sites typically list only two name servers, DNS permits up to eight. Use them all, Ellis advises, in a 6:2 configuration. Organizations wanting additional redundancy can self-host in a 5:2:1 configuration.
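Ellis’s 6:2 advice might look something like the following BIND-style zone fragment: six NS records pointing at one primary provider and two at a secondary. The provider hostnames here are hypothetical placeholders, not a recommendation:

```
; example.com -- eight NS records split 6:2 across two providers
; (provider hostnames are hypothetical placeholders)
example.com.  86400  IN  NS  ns1.primary-dns.example.
example.com.  86400  IN  NS  ns2.primary-dns.example.
example.com.  86400  IN  NS  ns3.primary-dns.example.
example.com.  86400  IN  NS  ns4.primary-dns.example.
example.com.  86400  IN  NS  ns5.primary-dns.example.
example.com.  86400  IN  NS  ns6.primary-dns.example.
example.com.  86400  IN  NS  ns1.secondary-dns.example.
example.com.  86400  IN  NS  ns2.secondary-dns.example.
```

The same split must also be registered with your registrar so the parent zone's delegation matches; a 5:2:1 variant would simply swap one record for a self-hosted name server.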

What’s striking about this problem is that it is hardly new. RFC 2182 laid down the law on secondary DNS best practices in 1997, the report notes. “A major reason for having multiple servers for each zone,” RFC 2182 tells us, “is to allow information from the zone to be available widely and reliably to clients throughout the internet, that is, throughout the world, even when one server is unavailable or unreachable.”

While some of the RFC suggestions are now out of date–swapping secondary zones with another organization now seems a bit antiquated–the fundamental principles of avoiding central points of failure and ensuring redundancy haven’t changed. “Provider redundancy both gives you scale, and ensures that issues with one provider don’t take your business offline,” Ellis says.

Diversify, diversify, diversify

Central points of failure on the internet are a big no-no, especially when any idiot renting a botnet can take major websites offline for the better part of a day. Mitigating that risk by diversifying your DNS smells a lot like due diligence these days.

“It is not that difficult to do, and it does not cost much, and it is good practice,” Shane Greenstein, professor at Harvard Business School, says. “To be sure, it is a hassle for a very big company, but that is no excuse. All cyber security is a hassle, and this one is pretty minor in comparison to other preventative actions.”

Senior Writer

J.M. Porup got his start in security working as a Linux sysadmin in 2002. Since then he's covered national security and information security for a variety of publications, and now calls CSO Online home. He previously reported from Colombia for four years, where he wrote travel guidebooks to Latin America, and speaks Spanish fluently with a hilarious gringo-Colombian accent. He holds a Masters degree in Information and Cybersecurity (MICS) from UC Berkeley.
