


David Braue
Editor at Large

Anatomy of an Australian ransomware response

Feb 20, 2022 · 7 mins

Compromise was six months in the making and took three weeks to fix.


It started like any other day, but within hours the IT team was scrambling: a ransomware attack had popped up a ransom notice, locked the company’s servers, and triggered a chain of system shutdowns that brought the entire business to a halt.

Executives moved quickly, engaging the cybersecurity remediation team at Accenture to help understand and resolve the problem—but it was only the beginning of a major cleanup effort that would see a joint project team of about 30 people working 24/7 for three weeks straight.

“It was an intensely cumbersome process to go through,” recalls Mark Sayer, AAPAC lead for cyberdefence at Accenture, who directed a broad technological response during which security analysts uncovered an extensive cybercriminal operation that had been preparing to strike for six months.

“We were working 17-hour days,” he recalls, “and I would literally get off the phone, go to sleep, wake up, and get back on the phone. We did that for three weeks without a break, and no weekends.”

While there was strong and continuous support from executives at the victim company—an Australian firm with 5,000 employees that Sayer describes only as ‘Purple Ocean’—the process of figuring out what had happened, and how to fix it, was a learning experience for a company that was generally focused on keeping the lights on.

“Dealing with ransomware attacks is not something our clients do every day,” Sayer says, “and most of the security team has never dealt with it. It is a relative unknown, and none of it is straightforward. We came into this running blind without any idea what was going on—but you just get in there and discover this stuff.”

What the team discovered was eye-opening—and a reminder that even when you do everything right, cybercriminals are often still one step ahead of you.

Cybercriminals pounce on a Citrix vulnerability before a fix arrives

The 2020 breach, it turns out, had started months earlier, when the Citrix Application Delivery Controller (ADC, formerly NetScaler) vulnerability CVE-2019-19781 was disclosed in late 2019 and rapidly exploited by cybercriminals who saw its value for remote code execution.

It took around two weeks before Citrix issued a formal patch for the vulnerability, during which time victims were left defenceless as cybercriminals methodically scanned the internet, exploited the vulnerability to install remote-access malware on every server they could reach, and moved on to the next victim.
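The exposure could be confirmed with a simple, read-only probe that was widely published at the time: an unpatched appliance would serve an internal configuration file through a path-traversal request. As an illustrative sketch only (the helper function and its labels are hypothetical; the probe path is the one documented for CVE-2019-19781), a defender triaging scan responses might write:

```python
# Hypothetical sketch: classifying a NetScaler's response to the widely
# published read-only probe for CVE-2019-19781. Unpatched appliances
# returned HTTP 200 with the contents of smb.conf for a path-traversal
# GET request; patched or mitigated devices blocked it with 403.

PROBE_PATH = "/vpn/../vpns/cfg/smb.conf"

def classify_response(status_code: int, body: str) -> str:
    """Classify a probe response as 'vulnerable', 'mitigated', or 'unknown'."""
    if status_code == 200 and "[global]" in body:
        # smb.conf leaked: the directory traversal worked, device is unpatched
        return "vulnerable"
    if status_code in (403, 404):
        # Traversal blocked by the patch or responder-policy mitigation
        return "mitigated"
    return "unknown"
```

A scanner loops this classification over every address it finds; the attackers Sayer describes were doing essentially the same thing, then planting a back door wherever the answer came back “vulnerable”.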

Once the patch was released in January 2020, the IT team at ‘Purple Ocean’ dutifully applied it across their servers and networked devices within a few days—not realising that those systems had already had the remote-access malware installed days earlier.

“We saw the first activity of criminals scanning the entire internet, looking for this particular vulnerability, on January 6,” Sayer recalls, “but we didn’t know at the time that every time they found a vulnerable NetScaler, they would inject the back door. Even though the ‘Purple Ocean’ team did a really good job applying the patch when it came out, they didn’t realise the cybercriminals had already put that back door in.”

Those back doors sat dormant for six months, leaving the company to go about its business until cybercriminals returned to figure out what they could uncover on its network.

After gaining remote access to the compromised device, the cybercriminals began capturing usernames and passwords of remote-working users who were working from home due to the COVID-19 pandemic.

By leveraging those credentials, cybercriminals could open up a remote desktop gateway and log into the company network, exploring its layout and data stores using conventional tools that would fly under the radar of any installed security defences—an approach known as ‘living off the land’.
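Because ‘living off the land’ abuses tools that ship with Windows, defenders typically hunt for suspicious command lines rather than suspicious binaries. A minimal sketch of that idea, using a hypothetical watchlist of reconnaissance commands commonly seen in these intrusions:

```python
# Hypothetical sketch: flagging 'living off the land' reconnaissance.
# The tools themselves are legitimate Windows binaries, so detection
# hinges on what they are asked to do, not on their mere presence.

LOTL_RECON_PATTERNS = [
    'net group "domain admins"',  # enumerating privileged accounts
    "nltest /dclist",             # listing domain controllers
    "net view",                   # mapping reachable hosts and shares
    "whoami /priv",               # checking current privileges
]

def flag_lotl_commands(command_lines: list[str]) -> list[str]:
    """Return the command lines that match known recon patterns."""
    patterns = [p.lower() for p in LOTL_RECON_PATTERNS]
    return [
        cmd for cmd in command_lines
        if any(p in cmd.lower() for p in patterns)
    ]
```

In practice such a watchlist produces false positives from legitimate administrators, which is exactly why this technique flies under the radar of signature-based defences.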

Once they had mapped the network for potential targets, the cybercriminals visited a web-based file sharing site to download a .ZIP file containing a range of hacking tools. It was part of a standard compromise playbook, Sayer says, that included installation of the Mimikatz credential-theft tool to scrape the system’s memory for as many user credentials, including plaintext passwords, as possible.

“If you look around your own organisation and you ever see Mimikatz running,” Sayer says, “you’ve got a problem.”
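Detection teams take Sayer’s point literally, but because attackers routinely rename the Mimikatz binary, hunting usually keys on its distinctive module and command names rather than the filename alone. A simplified, hypothetical indicator check over process telemetry might look like:

```python
# Hypothetical sketch: a crude check for Mimikatz indicators in process
# telemetry. Renamed binaries are common, so command-line module names
# are a stronger signal than the process name by itself.

MIMIKATZ_INDICATORS = [
    "mimikatz",
    "sekurlsa::logonpasswords",  # scrape credentials from LSASS memory
    "lsadump::",                 # dump domain or SAM secrets
    "kerberos::golden",          # forge a golden ticket
]

def is_suspicious_process(name: str, command_line: str) -> bool:
    """True if the process name or command line matches a known indicator."""
    haystack = f"{name} {command_line}".lower()
    return any(indicator in haystack for indicator in MIMIKATZ_INDICATORS)
```

Real endpoint-detection products layer memory and behavioural signatures on top of string matching, but the principle is the same: the module names are the giveaway.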

Once the attackers found the domain administrator’s credentials and recovered the KRBTGT account secret that Active Directory uses to sign Kerberos tickets, allowing them to forge a so-called ‘golden ticket’, he says, “it gives them pretty much the power to do whatever they want. Once the attackers get to the point where they’ve got this level of access, it’s pretty much game over for you.”
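One widely used heuristic for spotting forged golden tickets is their validity window: Mimikatz forges tickets with a 10-year lifetime by default, while most Active Directory domains cap ticket-granting tickets at 10 hours. A hypothetical sketch of that check (the function and field names are illustrative, not a real API):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a common golden-ticket heuristic: a Kerberos
# ticket whose validity window far exceeds the domain's configured
# maximum TGT lifetime was almost certainly forged, since the KDC would
# never issue one. Mimikatz defaults to a 10-year lifetime on forgeries.

DOMAIN_MAX_TGT_LIFETIME = timedelta(hours=10)  # typical AD default policy

def exceeds_policy(ticket_start: datetime, ticket_end: datetime,
                   max_lifetime: timedelta = DOMAIN_MAX_TGT_LIFETIME) -> bool:
    """True if the ticket's validity window exceeds the domain's TGT cap."""
    return (ticket_end - ticket_start) > max_lifetime
```

Heuristics like this only catch careless forgeries, which is why the standard remediation after a golden-ticket compromise is to reset the KRBTGT account password twice, invalidating every outstanding ticket.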

What the attackers wanted, it turns out, was to plant even more back doors to ensure they could continue returning to the victim’s environment with impunity. The Cobalt Strike pen-testing tool was implanted on target servers, Sayer says, “so that later on, they could come back after we’d done the work” to clean up the attack.

“Threat actors do these things,” Sayer says. “If you don’t pay the ransom, you have to do the hard work to recover your data—then they come back to you with more problems and put more pressure on you. And if you’ve spent weeks doing 24/7 recovery work, and they come back and re-encrypt everything, you’re not about to start the process again—you’ve just got to pay the money.”

Fighting cybercriminal attacks during the remediation effort

Having figured out how the cybercriminals had breached the systems at ‘Purple Ocean’ and infected them with ransomware, the Accenture team went on the offensive—isolating infected systems, methodically recovering data and backups, and closing security loopholes that the cybercriminals repeatedly tried to reuse to interfere with the remediation work.

Because the ransomware had also compromised the victim company’s backup servers, the 100-strong recovery team also had to work through those servers to restore whatever data could be found and salvaged—a process that often runs faster, Sayer says, when companies have their data in the cloud because the storage system is no longer a bottleneck.

“While we were trying to recover all the application systems, we were also understanding how the attackers broke into the environment, where were the back doors, and then formulating a plan around how we could kick them out,” he recalls.

Although company executives “did a brilliant, brilliant job” engaging outside experts and providing necessary support, the complexity and scope of the attack forced Sayer to work with those executives—who originally assumed the IT teams would be able to bring back service within 48 hours—to adjust their expectations.

The key, Sayer says, was for them to understand that “this was going to be weeks just to get back to normal, and months before you have everything back to a normal, secure operating state. They challenged us on our thinking, so they were very engaged and educated themselves.”

It took thousands of person-hours to get to the point where the environment was strong enough to prevent reentry by the cybercriminals, who were watching the response team’s every move and tried for some time to work around every remediation effort that was put in place.

“It was a nine-hour skirmish,” Sayer recalls, “and it was like playing Whack-a-Mole. They were looking at what we’d done, then they would pop up on a server, then we’d go in and figure out what they were doing. They were very aggressive, but eventually they gave up and went away.”