Dealing with WannaCry on Monday morning, and the days ahead

Friday's nightmare went better than expected, but this was just the beginning

[Image: screenshot, May 13, 2017. Credit: MalwareTech]

It's Monday. Across the globe organizations are likely having the same conversation: What happened? What is WannaCrypt (WannaCry)? Are we exposed? What can we do? If you're in the trenches, here's a brief outline that might help you manage some of the conversations you're going to have this week.

Friday was hell, and it exposed the nasty truth about IT: When the basics are ignored due to oversight or because the organization is forbidden from taking action, things that should be easily prevented or mitigated can kill a network.

So, what happened?

Well, as most of the world knows, a variant of the WannaCrypt ransomware (WannaCry) started spreading across the globe on Friday. It targeted a vulnerability in the SMB protocol, leveraging an exploit stolen from the NSA (ETERNALBLUE) to do so. At the time this post was written, nearly 200,000 systems had pinged the sinkhole domain tracking WannaCry.

Variants were observed over the weekend, but they were either using the same kill switch domain, or a different one that was easily identified and purchased so the malware wouldn't spread.

Organizations of all sizes were victimized by the ransomware. The one victim everyone is talking about is the National Health Service (NHS) in the United Kingdom, particularly the providers in England. Friday's attack was devastating to the NHS, and likely had a real physical impact on patients seeking care.

Making things worse, in addition to leveraging an exploit stolen from the NSA, Friday's attacks also included the installation of another stolen NSA tool – Double Pulsar – a backdoor that leaves infected systems open to further remote attacks.

Note: The backdoor goes away with a reboot, and patching with MS17-010 fixes the issue that allows it to be installed.

The ransomware demands a payment of $300, but researchers say paying won't help: there is no way for the attackers to track who has paid the ransom, and decryption requires personal interaction with them. It's best to assume the files are lost.

Those responsible for the attack – at the time this post was written – have collected $38,747.87 in payments since Friday.

If you'd like a technical overview of WannaCry, Amanda Rousseau (Malware Unicorn) has published a solid write-up on the Endgame corporate blog.

Patches and blame…

There was a good deal of blame going around over the weekend, and most of it centered on the patches released by Microsoft in March to address the NSA exploits targeting SMB.

However, the updates in March were for Windows Vista, Windows 7, Windows 8.1, and Windows 10, along with Windows Server 2008-2016. It wasn't until late in the day on Friday that Microsoft released patches for Windows XP and Server 2003.

Patching isn't a silver bullet...

Many of the organizations hit, including the NHS, are running legacy systems that simply cannot be patched the moment an update is released, if they can be patched at all.

Sources in the retail sector told Salted Hash on Friday that during Q4 of any given year (the holiday shopping season), patches and system changes are put on hold; systems are essentially frozen in their current configurations. Others explained that in some cases vendors control the hardware and software, as well as the patching, and refuse to let the organization touch any of it.

In the medical sector, an IT staffer explained during a brief phone conversation that his team isn't allowed to install patches or additional software, as doing so often requires various checks, change approvals, and certification. There are also support contracts to consider, under which the hospital isn't allowed to alter a system's software, which includes patching.

As for the legacy systems in the medical world, dealing with them isn't a simple matter of upgrading or replacing. That's not because the organization is cheap, but because when you purchase expensive medical equipment, the investment is measured in decades, not years. There is also the issue of compatibility to consider.

Remember the basics:

Patching is still the first thing that comes to mind when discussing the basics. But as mentioned, there are plenty of reasons organizations choose not to patch. Still, one of the major issues exposed on Friday was the patch gap.

Organizations had two months to patch, so why was there such a large delay? Is there a way to shorten it? If patching wasn't an option, what about compensating controls – where were they? Consider the situation from your own organization's point of view: if you're lucky, you're not going to be affected by WannaCry, but what about the next attack?
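If you need a quick read on the patch gap across Windows hosts you can reach, the sketch below (Python, calling the built-in wmic tool) checks whether any MS17-010-related hotfix is installed. The KB numbers in it are examples only – confirm the right ones for each OS and servicing branch against Microsoft's MS17-010 bulletin before trusting the answer.

```python
# Rough patch-gap check for a single Windows host: look for MS17-010-related
# hotfixes in the output of "wmic qfe". The KB list is illustrative only --
# verify the correct KB numbers for each OS against Microsoft's bulletin.
import subprocess

MS17_010_KBS = {
    "KB4012212",  # example: March 2017 security-only update (Windows 7 / Server 2008 R2)
    "KB4012213",  # example: March 2017 security-only update (Windows 8.1 / Server 2012 R2)
    "KB4012598",  # example: out-of-band update for older platforms
}

def installed_hotfixes():
    """Return the set of KB identifiers reported by 'wmic qfe' on this host."""
    output = subprocess.check_output(["wmic", "qfe", "get", "HotFixID"], text=True)
    return {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    found = installed_hotfixes() & MS17_010_KBS
    if found:
        print("MS17-010-related update(s) found:", ", ".join(sorted(found)))
    else:
        print("None of the listed KBs found -- the host may still be exposed to ETERNALBLUE.")
```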

Another issue exposed on Friday is segmentation – and, in some ways, the problem of services running unchecked. WannaCry moved quickly across flat networks, so in hindsight segmentation would have been useful. But why was SMB left exposed and unchecked in the first place?

"…blocking all versions of SMB at the network boundary by blocking TCP port 445 with related protocols on UDP ports 137-138 and TCP port 139, for all boundary devices," explains a recent US-CERT notice concerning SMB best practices.

This advice was circulated far and wide over the weekend, and is worth repeating. If you need assistance with disabling SMB (especially SMBv1), Microsoft has documentation.
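If you want a rough sanity check that this blocking is actually in effect, the short Python sketch below (standard library only) attempts TCP connections to the SMB-related ports from wherever it is run. The target addresses are placeholders; swap in hosts from your own inventory and run it from the far side of the boundary you care about.

```python
# Rough check: can this machine reach SMB-related TCP ports on a set of hosts?
# If a port shows "OPEN" across a boundary that should block it, the filtering
# described in the US-CERT guidance isn't in effect there.
import socket

HOSTS = ["10.0.0.5", "10.0.1.5"]   # placeholder targets -- use your own inventory
SMB_TCP_PORTS = [139, 445]         # NetBIOS session service and SMB over TCP

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except (OSError, socket.timeout):
        return False

if __name__ == "__main__":
    for host in HOSTS:
        for port in SMB_TCP_PORTS:
            state = "OPEN" if port_open(host, port) else "blocked/filtered"
            print(f"{host}:{port} -> {state}")
```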

For some further thoughts, a solid outline of additional basics is available here, including asset tracking and regular backups that are tested.

The (mostly) Good News:

Most anti-virus and endpoint protection products will detect Friday's variant of WannaCrypt, so new infections should be easier to flag. However, this doesn't mean the worst is over. Additional attacks are expected, which is why segmentation, patching, disabling SMBv1, and backups are so vital.

WannaCry was slowed on Friday because a researcher discovered the kill switch used by the ransomware and registered the domain. If the kill switch domain responds, WannaCry doesn't spread – a check sketched out after the domains below. The second variant, which appeared Sunday, was stopped the same way.

(Friday) Kill Switch: iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com

(Sunday) Kill Switch: ifferfsodp9ifjaposdfjhgosurijfaewrwergwea.com
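To make that kill switch check concrete, here's a simplified Python sketch of the behavior researchers described: the worm attempts a plain HTTP request to the hard-coded domain and only continues spreading if the request fails. This is an illustration of the reported logic, not the malware's actual code.

```python
# Simplified illustration of the reported WannaCry kill switch behavior:
# the worm made an HTTP request to a hard-coded domain and, if the request
# succeeded, exited instead of spreading further.
import urllib.error
import urllib.request

KILL_SWITCH_DOMAIN = "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"  # Friday's variant

def kill_switch_responds(domain, timeout=5.0):
    """Return True if a plain HTTP GET to the domain gets any response."""
    try:
        with urllib.request.urlopen(f"http://{domain}/", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    if kill_switch_responds(KILL_SWITCH_DOMAIN):
        print("Kill switch domain responded -- the worm would halt here.")
    else:
        print("No response -- the worm would have continued to spread.")
```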

Internally, administrators should add the kill switch domain to their DNS servers so it always resolves; this will prevent possible issues with proxies blocking the lookup.
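To confirm the entry works, a quick resolution check from a representative internal host is a start; the snippet below (standard library only) is one way to do it. Pair it with the HTTP check above, since the kill switch only helps if the domain actually responds.

```python
# Quick check that the kill switch domains resolve from inside the network,
# e.g. after adding internal DNS entries so proxied hosts can still reach them.
import socket

KILL_SWITCH_DOMAINS = [
    "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com",   # Friday's variant
    "ifferfsodp9ifjaposdfjhgosurijfaewrwergwea.com",   # Sunday's variant
]

for domain in KILL_SWITCH_DOMAINS:
    try:
        address = socket.gethostbyname(domain)
        print(f"{domain} resolves to {address}")
    except socket.gaierror:
        print(f"{domain} does NOT resolve -- infected hosts here would keep trying to spread")
```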

If an employee was infected outside of the office and connects their system to a network vulnerable to attack, things will get ugly. Remember, payment won't recover the encrypted files; if backups aren't an option, you'll have to re-image the system.

Early on, a fact sheet was compiled on GitHub; it contains a wealth of technical information.
