Ransomware has been around for years, but it was the success of CryptoLocker in 2013 that spawned a booming vertical market for criminals.
The effects of Ransomware have been felt by organizations both large and small, most of them well aware of the risks associated with this type of malware. Some even had what they assumed were solid defenses against this type of attack - but their assumptions were wrong.
Most Ransomware victims share a common thread: they lacked some essential security basics, and that's what this article will address.
Daniel Tharp, a government IT manager in New Mexico, recently published a blog post on Ransomware that's worth further examination.
In it, he addresses the topic of Ransomware as something that's here to stay and hammers home some essential practices that administrators can use to help defend their networks and users from the threat.
"The trouble with ransomware right now is that it behaves like a standard application. It doesn't require local administrator privileges, it doesn't care if UAC is on, and most of them make use of the standard Windows API for encryption, which you can't disable without really messing up a workstation. So if we can't control the behaviors, we have to make do for controlling the vectors," Tharp said in an interview with Salted Hash.
For example, there's a great Office ADMX template for disabling macros. The template kills the non-executable, macro-based variants of Ransomware that are gaining popularity among criminals. One of the reasons such variants exist is that they load directly into RAM and bypass most restriction policies.
Tharp's post lists a number of other protective steps; we've reproduced a few of them below.
- Avoid mapping your drives and hide your network shares. WNetOpenEnum() will not enumerate hidden shares. This is as simple as appending a $ to your share name.
- Work from the principle of least permission. Very few organizations need a share whereby the Everyone group has Full Control. Delegate write access only where it’s needed, don’t allow them to change ownership of files unless it’s a must.
- Be vigilant and aggressive in blocking file extensions via email. If you’re not blocking .js, .wsf, or scanning the contents of .zip files, you’re not done. Consider screening ZIP files outright. Consider if you can abolish .doc and .rtf in favor of .docx which cannot contain macros.
- Install the old CryptoLocker Software Restriction Policies which will block some rootkit-based malware from working effectively. You can create a similar rule for %LocalAppData%\*.exe and %LocalAppData%\*\*.exe as well. It was pointed out in the Reddit comments that, if it’s at all feasible, you should run on a whitelist approach instead of a blacklist. It’s more time-intensive but much safer.
- Backups. Having good, working, versionable, cold-store, tested backups makes this whole thing a minor irritation rather than a catastrophe. Even Windows Server Backup on a Wal-Mart External USB drive is better than nothing. Crashplan does unlimited versioned backups with unlimited retention at a flat rate, and there’s a Linux agent as well. Hell, Dropbox does versioned backups. Get something.
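The extension-blocking advice can be prototyped outside the mail gateway. Below is a minimal Python sketch, assuming a hypothetical blocklist (`BLOCKED_EXTENSIONS` is illustrative, not a vetted list), that flags risky members inside a ZIP attachment:

```python
import io
import zipfile
from pathlib import PurePosixPath

# Illustrative blocklist -- .js and .wsf come from the advice above; the
# rest are common dropper extensions. Tune this for your own environment.
BLOCKED_EXTENSIONS = {".js", ".wsf", ".vbs", ".exe", ".scr"}

def risky_members(zip_bytes: bytes) -> list:
    """Return archive member names whose final extension is on the blocklist."""
    flagged = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            if PurePosixPath(name).suffix.lower() in BLOCKED_EXTENSIONS:
                flagged.append(name)
    return flagged
```

Because only the final suffix is checked, double-extension lures such as `invoice.pdf.js` are still caught.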
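The Software Restriction Policy bullet relies on the fact that, in SRP path rules, a `*` matches within a single folder level - which is why the guidance lists both `%LocalAppData%\*.exe` and `%LocalAppData%\*\*.exe`. A rough Python model of those deny rules (the profile path below is a made-up example; real SRP rules use the environment variable itself):

```python
import re

# Assumption: an example profile path; in Group Policy the rules use the
# %LocalAppData% environment variable instead of a literal path.
LOCAL_APP_DATA = r"C:\Users\alice\AppData\Local"

# The two deny rules from the advice above. In SRP path rules a '*'
# matches within one folder level, which is why both are needed.
RULES = [LOCAL_APP_DATA + r"\*.exe",    # exe directly under %LocalAppData%
         LOCAL_APP_DATA + r"\*\*.exe"]  # exe one subfolder deeper

def _rule_to_regex(rule: str):
    # Translate a rule into a regex where '*' cannot cross a backslash.
    parts = [re.escape(piece) for piece in rule.split("*")]
    return re.compile("[^\\\\]*".join(parts) + "$", re.IGNORECASE)

_COMPILED = [_rule_to_regex(rule) for rule in RULES]

def blocked_by_srp(exe_path: str) -> bool:
    """True if exe_path matches one of the blacklist rules."""
    return any(rx.match(exe_path) for rx in _COMPILED)
```

Note what the model makes obvious: a payload dropped two folders deep, or anywhere outside the profile, sails past a blacklist like this - which is exactly the argument for the whitelist approach mentioned above.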
"I didn't make mention of it at all in the article, but some firewalls have the ability to block connections to known botnet servers," Tharp explained,
"If that's not available, you can use DNS sinkholing to block connections to known bad domains. SANS released a tool to that end for Windows Server DNS, along with documentation. This isn't enough on its own, but answering this issue needs a multi-layered approach."
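Conceptually, sinkholing is just a lookup that happens before normal resolution. Here is a toy Python model; the blocklist names are invented and the TEST-NET sinkhole address stands in for a real internal logging host:

```python
# Toy model of DNS sinkholing. The blocklist entries are made up, and the
# TEST-NET address stands in for an internal host that logs the hits.
SINKHOLE_IP = "192.0.2.1"
BLOCKLIST = {"badbotnet.example", "payment-portal.example"}

def resolve(domain: str, upstream) -> str:
    """Return the sinkhole IP for blocklisted names, else ask upstream."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the name itself and every parent zone, so that
    # c2.badbotnet.example is caught by the badbotnet.example entry.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE_IP
    return upstream(domain)
```

In production this logic lives in the DNS server itself - sinkhole zones in Windows Server DNS, or response policy zones (RPZ) in BIND - rather than in application code.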
He offered another tip for organizations that manage their shares with File Server Resource Manager (FSRM): set file screens.
"You might want to add a screen like *decrypt*, one for *.locky, and look at the common names given for the decryption help instructions (e.g., help_your_files.txt for CryptoWall). FSRM can take action if a screened file is attempted to be written, which includes firing arbitrary commands. You could kill your LanManServer service, for example," Tharp said.
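The screening logic Tharp describes boils down to wildcard matching on newly written filenames. A minimal Python sketch of that idea (the pattern list comes from the quote above; the callback is a stand-in for FSRM's ability to fire a command):

```python
import fnmatch

# Patterns from the quote above: ransom-note and encrypted-file names.
# help_your_files.txt is CryptoWall's note; extend for families you see.
SCREEN_PATTERNS = ["*decrypt*", "*.locky", "help_your_files.txt"]

def hits_screen(filename: str) -> bool:
    """True if a just-written filename matches any screen pattern."""
    name = filename.lower()
    return any(fnmatch.fnmatch(name, pattern) for pattern in SCREEN_PATTERNS)

def on_file_write(filename: str, emergency_action) -> None:
    # Stand-in for FSRM's "fire an arbitrary command" hook: the callback
    # might stop the LanManServer service or page the on-call admin.
    if hits_screen(filename):
        emergency_action(filename)
```

The value of killing LanManServer on a hit is that it yanks the file shares offline mid-encryption, limiting the damage to whatever was already touched.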
It's possible that after seeing Tharp's list, some administrators will consider the information old news – and if so, they're not wrong.
But consider this: if these protections are dated, why is Ransomware still so effective? The gut reaction is to blame the user, and that's not wrong either. However, the user is always going to be a potential weak point – the trick is to expect that an end user will eventually make a mistake and to look for ways to limit exposure regardless of what they're doing.
Tharp says he was taken to task by fellow administrators because some of the things he suggested were outdated, particularly the blacklist-based Software Restriction Policy.
"In my defense, that was one point out of seven, but people have really pushed me to point out that a whitelist-based solution is better than a blacklist-based one. I don't disagree at all, but if you're an MSP with 150 clients that's a lot of R&D time to be billed," he said.
"If you're managing one infrastructure you should certainly spend the time to work on an application whitelist. AppLocker is available in Enterprise versions of Windows and has some huge timesaving features, like the ability to allow certain signed publishers across the board. If you don't have AppLocker, working with Software Restriction Policies on a whitelist basis will also do what you need but with a bit more work."
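The whitelist idea reduces to default-deny. Here's a rough Python sketch using path-based rules only; the allowed roots mirror AppLocker's default path rules, though real AppLocker can also allow by signed publisher or file hash (requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import PureWindowsPath

# Rough default-deny sketch. The roots mirror AppLocker's default path
# rules; real AppLocker can also allow by signed publisher or hash.
ALLOWED_ROOTS = [
    PureWindowsPath(r"C:\Program Files"),
    PureWindowsPath(r"C:\Program Files (x86)"),
    PureWindowsPath(r"C:\Windows"),
]

def run_permitted(exe_path: str) -> bool:
    """Allow execution only from a trusted root; everything else is denied."""
    path = PureWindowsPath(exe_path)
    return any(path.is_relative_to(root) for root in ALLOWED_ROOTS)
```

Inverting the logic is what closes the %LocalAppData% hole from the blacklist approach: a dropper landing in a temp or profile folder simply isn't under any allowed root.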
The point is that while some of these methods might seem old, they're still needed. They're the basics that most organizations are missing.
Too often, rather than using a layered approach, organizations rely on a mix of signature-based endpoint protections and awareness training. Teaching users is good, but it isn't a foolproof method of defense.
"My last thought is that if the end-user is put in a position where they're my last line of defense to not open that attachment, to not click that ad, then I have failed them. Not to say that training is useless; we conduct security awareness training and are rolling out phishing testing, but the responsibility ultimately falls on my team to prevent them from ever being put in that position in the first place," Tharp said.
"It's a team effort, but don't mistake it for being a 50/50 split of duties, it's something closer to 97/3. So, do everything you can to close the vectors of infection, and have those well-trained users represent your plan F, G, or H in mitigating this threat. Plans A through E are all on you."
Ransomware infections are reported regularly in the media these days. Anti-virus alone can't stop these infections, because vendors have a hard time keeping up with the latest variants. Adding fuel to the fire, the latest generation of Ransomware payloads are smaller in scale and more focused, so IDS/IPS protections do little to stop their spread either.
So the key is to use a layered approach like the one Tharp outlined. However, it's the existence of current, tested backups, paired with a solid business continuity / disaster recovery (BC/DR) plan, that will make a world of difference in most cases.
As part of the interview, Salted Hash asked Tharp to share some Ransomware-based war stories, as they almost always make for a good lesson. His stories delivered as expected:
"I did see it put a company out of business, we were called for the first time after the damage was done. Their antivirus didn't catch the Ransomware until it had finished encryption, and when it sprang into action, it not only deleted the virus but also the registry keys the virus created that contained the data on how to decrypt when payment was received.
"You know the story, [the company] never tested their backups [and discovered that] backups hadn't run in five years. We had the AV vendor on the phone seeing if there was any way to un-quarantine the registry keys, no solution could be found.
"On the other hand, an organization where users knew that their workstations were treated like disposable goods and put everything on the server, was hit. The file server did backups twice daily just with standard Windows Server Backup going to a $50 external hard drive.
"That was all it took to have them operational again in hours. It doesn't have to be a gigantic expense to work from a reactive-only standpoint. Add on a cloud-backup solution that supports versioning and you at least don't have to worry about how you're going to figure out who you were supposed to bill for that order."