
Common pitfalls in attributing cyberattacks

Oct 16, 2020

Attack attribution is always difficult as criminal groups often share code and techniques, and nation-state actors excel at deception. Here, security researchers share their techniques and common pitfalls.


Attributing cyberattacks to a particular threat actor is challenging, particularly an intricate attack that stems from a nation-state actor, because attackers are good at hiding or erasing their tracks or deflecting the blame to others.

The best method for arriving at a solid attribution is to examine the infrastructure and techniques used in the attack, but even then, researchers can often get it wrong, as Paul Rascagneres and Vitor Ventura of Cisco Talos illustrated in a talk at the VB2020 conference on September 30.

Researchers typically rely on three sources of intelligence, Rascagneres said: open-source intelligence (OSINT), which is publicly available information on the internet, technical intelligence (TECHINT) that relies on malware analysis, and proprietary data available only to the organizations involved in the incident.

Nation-state intelligence agencies serve as another source of intelligence because they have more information and resources than the private sector, but intel agencies are often secretive about their methods. “In public sectors, they don’t give everything,” Rascagneres said. “They don’t explain how they get all the detail. How does it make the link?”

Attributing WellMess malware

Rascagneres walked through an example of analyzing infrastructure and how that can mislead a security researcher. He presented the example in terms of what security investigators call tactics, techniques and procedures (TTP). He focused on the case of multiplatform malware named WellMess discovered by the Japanese national CERT in 2018.

The UK’s National Cyber Security Centre (NCSC) directly attributed the WellMess malware to APT29, a Russian state-backed threat group better known as Cozy Bear. That assessment was endorsed by Canada’s Communications Security Establishment (CSE), the US National Security Agency (NSA), and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).

WellMess, which extracts information from infected hosts while awaiting further instruction, has 32-bit and 64-bit variants and supports multiple command-and-control (C2) protocols, including DNS, HTTP and HTTPS. By looking at the infrastructure, researchers can deduce connections between malware samples. For example, if malware A uses infrastructure X, and malware B, associated with threat actor M, also uses infrastructure X, the attacks are linked through the shared infrastructure.
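The shared-infrastructure reasoning above is essentially an indexing problem: group samples by the infrastructure they contact, and flag any indicator seen in more than one sample. The sketch below is illustrative only, not Talos tooling; the sample names and indicators are made up (the IP is from the documentation range).

```python
from collections import defaultdict

# Hypothetical sightings: (malware sample, infrastructure indicator).
sightings = [
    ("malware_A", "203.0.113.10"),
    ("malware_B", "203.0.113.10"),      # same IP -> candidate link
    ("malware_B", "evil.example.com"),
    ("malware_C", "198.51.100.7"),
]

# Index samples by the infrastructure they use.
by_infra = defaultdict(set)
for sample, indicator in sightings:
    by_infra[indicator].add(sample)

# Any indicator shared by more than one sample suggests a link.
links = {ind: sorted(samples)
         for ind, samples in by_infra.items()
         if len(samples) > 1}
print(links)  # {'203.0.113.10': ['malware_A', 'malware_B']}
```

As the article goes on to explain, such candidate links are only a starting point: a shared IP by itself is weak evidence.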

This technique can be applied to shared IP addresses and domains, but it is risky because many different customers can use the same IP address. “Based on the IP only, it’s a bit tricky and you can easily make a mistake,” Rascagneres said.

“The other important thing is the time lapse. The IP address can change quickly from customer to customer. If you have a threat actor that uses a specific IP address on this date and you see another campaign one year later, you have a lot of change at the IP and it’s not linked.”
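Rascagneres’s time-lapse caveat can be expressed as a simple check: treat a shared IP as meaningful only when the two sightings fall within a plausible window, since hosting IPs change hands between customers. This is a hypothetical sketch; the campaign dates and the 90-day threshold are invented for illustration.

```python
from datetime import date

# Hypothetical first-seen dates for two campaigns using the same IP.
sightings = {
    "campaign_1": date(2018, 3, 1),
    "campaign_2": date(2019, 4, 15),  # over a year later
}

def plausibly_linked(d1, d2, max_gap_days=90):
    """A shared IP only supports a link if the sightings are close in
    time; an IP address can move between customers within months."""
    return abs((d1 - d2).days) <= max_gap_days

linked = plausibly_linked(sightings["campaign_1"], sightings["campaign_2"])
print(linked)  # False: a year apart, the IP has likely changed hands
```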

Based on an analysis of the IP addresses, WellMess appeared to be malware that originated with APT28, also known as Fancy Bear, not APT29, an attribution that would run counter to what the UK authorities found. Even using TTPs, the attribution analysis could lead researchers to the wrong threat group.

Using another WellMess sample, Rascagneres examined the TTPs and found a connection between WellMess and the DarkHotel attack group when running the sample through VirusTotal. The DarkHotel group is believed to operate out of the Korean peninsula and steals valuable data from high-level targets such as CEOs. The group’s name is derived from DarkHotel’s method of tracking travelers’ plans and compromising them via hotel Wi-Fi.

Further complicating an analysis of this sample was a report by Chinese security company CoreSec360 attributing WellMess to an entirely unknown actor it named APT-C-42. Despite using the three sources of intelligence available to private-sector parties, the attribution analysis was unable to reach the conclusion obtained by the NCSC that the attackers were APT29. (A more detailed explanation of Rascagneres’s analysis can be found in this paper.)

Analyzing shared code

Analyzing shared code is another commonly used technique in the attribution process. “We do it because we can see that this sample belongs to that sample and then later down the road” the second sample might be linked to a country or to a group, Ventura said. “Then because of that we are able to do the jump forward to the attribution.”

However, Ventura warns would-be attribution researchers that the opposite can be true, particularly when the shared code is publicly available. In those cases, code similarities can lead to wrong attribution. One researcher Ventura knows tied together two malware samples because of shared code. Later, however, it turned out the code was actually part of an embedded TLS library, a publicly available component widely used by developers and therefore unlikely to provide reliable links between two malware samples. “In this case, we have all the overlaps but when we do the further research, when we look into the rest of the information that we have available, we actually disprove our theory.”
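The TLS-library pitfall suggests a simple safeguard: exclude code known to come from public libraries before scoring similarity between samples. The sketch below is hypothetical; in practice the function-level fingerprints might come from FLIRT signatures or fuzzy hashing, and the hash values here are invented.

```python
# Hypothetical function-level fingerprints extracted from two samples.
sample_a = {"h1", "h2", "h3", "tls1", "tls2"}
sample_b = {"h4", "h5", "tls1", "tls2"}

# Fingerprints known to belong to a public embedded TLS library;
# shared library code is weak evidence and should be excluded.
library_hashes = {"tls1", "tls2"}

def overlap(a, b, ignore):
    """Code overlap between two samples, excluding known-library code."""
    return (a - ignore) & (b - ignore)

naive = sample_a & sample_b                             # looks like a link
filtered = overlap(sample_a, sample_b, library_hashes)  # the link vanishes
print(naive, filtered)
```

With the library code removed, the apparent overlap disappears, which is exactly how the researcher in Ventura’s anecdote disproved the initial theory.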

Beware false flags

False flags might also lead researchers to a wrong conclusion, Ventura said. The most recent high-profile public example of a false flag is the Olympic Destroyer malware that hit the PyeongChang Olympics in South Korea in 2018. Security experts who examined the malware attributed it variously to Russia, Iran, China and North Korea due to the false flags embedded in the malware to confuse researchers.

When it comes to false flags, this is where the intelligence agencies “have the upper hand,” Ventura said. “They have information that we don’t have. They have SIGINT [signals intelligence], they have human intelligence. They have all kinds of information that typically we don’t have. We don’t have a lot of information.”

Ventura touched upon what may be a source of frustration for security researchers: the inability to ever know how intelligence agencies arrive at their attributions. “For us researchers, that makes us extremely uncomfortable because we like verifiable things. I cannot verify it because we just don’t have the information.”

The best coping mechanism in this case is to simply accept that some things may never be known. “That’s a point that we need to accept,” Ventura said. “Not that we should accept all that they say as set in stone, but we need to accept that they might not be able to share that information.”