How to implement and use the MITRE ATT&CK framework

The MITRE ATT&CK framework is a popular template for building detection and response programs. Here's what you'll find in its knowledge base and how you can apply it to your environment.


However, this is a lot of work. If you don’t want to create everything from scratch, you have two options: open source tools — such as Osquery, the reference detection implementations built on Osquery by Filippo Mottini and Olaf Hartong, and the Kolide agentless Osquery web interface and remote API server — or commercial endpoint security platforms.

Facebook developed Osquery to manage its server infrastructure. It is well implemented and well supported by its community. It gathers information across the hosts in your environment and aggregates the data into tables. You use SQL-like queries to access the data in those tables and to write detections, so the learning curve is gentle for anyone with exposure to relational databases.

Osquery can run collections of queries that map to targeted TTPs in ATT&CK for threat hunting. Hunters can create and execute ad hoc queries on the fly, and the queries that identify attackers on the network can be integrated with your security information and event management (SIEM) system. In addition, practitioners like Filippo Mottini and Olaf Hartong have already created reference detection implementations with Osquery that you can build upon.
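Mapping queries to techniques can be sketched as an osquery query pack. A minimal example is below; the specific technique mappings, query schedules and pack layout are illustrative assumptions, not a vetted detection set (the `crontab` and `listening_ports` tables do exist in osquery's schema):

```python
import json

# Sketch of an osquery query pack where each scheduled query is tagged with
# the ATT&CK technique it targets. Mappings below are assumed for illustration.
ATTACK_PACK = {
    "queries": {
        "persistence_crontab": {
            # Assumed mapping: T1053.003 (Scheduled Task/Job: Cron)
            "query": "SELECT command, path FROM crontab;",
            "interval": 3600,
            "description": "Snapshot cron entries for persistence hunting",
            "value": "ATT&CK T1053.003",
        },
        "discovery_listening_ports": {
            # Assumed mapping: T1046 (Network Service Discovery)
            "query": (
                "SELECT p.name, lp.port, lp.address "
                "FROM listening_ports lp JOIN processes p USING (pid);"
            ),
            "interval": 600,
            "description": "Processes bound to network ports",
            "value": "ATT&CK T1046",
        },
    }
}

def techniques_covered(pack):
    """Extract the ATT&CK technique tags from a pack for coverage tracking."""
    return sorted(q["value"].split()[-1] for q in pack["queries"].values())

print(techniques_covered(ATTACK_PACK))
```

A pack like this can be serialized with `json.dumps` and dropped into osquery's pack directory; the technique tags then let you report ATT&CK coverage directly from your query inventory.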

Commercial tools that support MITRE ATT&CK generally fall under the category of endpoint security platforms. There are several things to consider when purchasing one:

  • What data sources does it support?
  • To what degree, and through which sub-data sources, does the tool support a particular data source?
  • Which techniques are covered and to what extent?

Understand the capabilities and limitations of different tools

Make sure that the tool supports the specific data sources in your environment to detect different MITRE ATT&CK techniques. If the tool you are considering integrates with directories and you have LDAP, make sure that the tool’s vendor doesn’t assume that you have Active Directory.

Also, understand what level of sub-data source support the tool provides. If the tool advertises support for the Windows Registry, does it cover the creation, deletion, modification and access of registry keys? Verify, vet or test any assumptions and features from the tool’s vendor that you will rely on.

Look at whether the tool integrates with your SIEM and security orchestration, automation and response (SOAR) infrastructure. You may be able to push data to or get data from your SIEM and SOAR infrastructure to provide richer findings with fewer false positives.

Finally, when reviewing products, make sure you understand the differences in warning types. Tools provide different levels of information with their detections: some emit informational events, while others provide specific references to MITRE ATT&CK techniques and deeper explanations of the events. The richer the information provided, the less guesswork your blue team has to do to triage an event. The MITRE ATT&CK tool evaluations describe these levels of event richness, from lowest to highest:

  • None (lowest)
  • Telemetry (informational data)
  • IOC (signature-based identification of a problem)
  • Enrichment (telemetry plus ATT&CK-correlated information)
  • General behavior (an alert or “finding,” but without specific details)
  • Specific behavior (an alert with specific details explaining how the finding is malicious, tied to ATT&CK technique information)
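The ordering above can be encoded so detections are comparable by how much context they hand analysts. This is a sketch under the assumption that anything below an enriched event requires manual triage; the cutoff is illustrative, not part of the evaluations themselves:

```python
from enum import IntEnum

# Event-richness levels from the MITRE ATT&CK tool evaluations, ordered
# lowest to highest so they can be compared numerically.
class EventRichness(IntEnum):
    NONE = 0
    TELEMETRY = 1
    IOC = 2
    ENRICHMENT = 3
    GENERAL_BEHAVIOR = 4
    SPECIFIC_BEHAVIOR = 5

def needs_manual_triage(richness):
    """Assumed rule of thumb: events below the Enrichment level leave the
    blue team guessing and are queued for manual triage."""
    return richness < EventRichness.ENRICHMENT

print(needs_manual_triage(EventRichness.TELEMETRY))
print(needs_manual_triage(EventRichness.SPECIFIC_BEHAVIOR))
```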

Maintain the ATT&CK detection lifecycle

Building detections requires thinking about how attackers will implement the different ATT&CK techniques with their own procedures, understanding how those procedures work and, ultimately, how to detect them. With that understanding, you can build, test, deploy, fine-tune, disable and periodically verify the detections. This process makes up the detection life cycle (DLC) described below.

Detection ideation

You will need a system to keep track of detections as they move from inception through each phase of the DLC. People get ideas on how to implement detections for specific important techniques. Sometimes only bits and pieces of a solution materialize, but there needs to be a central store for this information so different people can see each detection's status and build on each other’s knowledge while learning about different detection methods.

Detection creation

Once a detection’s status has moved from ideation to ready for implementation, it can be claimed for development. The blue team can look at the description of the detection and implement it. Once the code is finished and tested locally, it needs to be checked in and referenced in the DLC management system, and the detection’s status changed to “ready for testing.”

Detection testing

Once a detection is ready for testing, it needs to be deployed to an integration test environment whose output lets you anticipate the number of events the new detection will generate before security analysts have to review them. This gauges how well the detection works in an environment similar to production.

The number of events generated per time period is gathered, along with the events themselves, for review by the detection developer. Once the testing period has completed, the detection is put in “ready for review.” The detection developer and a more senior detection developer need to review the results, and both need to approve the detection before it is placed into production. If the detection produces too many false positives, the detection logic can be changed and the status marked “ready for testing” again.
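The review gate above can be sketched as a simple volume check: average the events per period from the test run and loop the detection back for rework when the volume suggests too many false positives. The threshold and function name are illustrative assumptions:

```python
# Sketch of the testing-period review gate described above. The threshold of
# 50 events/day is an assumed value; tune it per detection and environment.
def review_test_run(events_per_day, max_events_per_day=50):
    """Return the detection's next DLC status based on observed event volume
    from the integration test environment."""
    avg = sum(events_per_day) / len(events_per_day)
    # Too noisy: send the detection logic back for another testing pass.
    return "ready for testing" if avg > max_events_per_day else "ready for review"

print(review_test_run([12, 9, 30]))     # quiet detection
print(review_test_run([400, 250, 90]))  # likely false-positive-heavy
```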

Detection deployment

Once the detection has been approved by two members of the detection development team, it moves to “ready for production,” and the team members responsible for production installs deploy it into the production environment. Initially, full logging is turned on and all data is gathered for a two-week break-in period (to account for deployment during prolonged corporate holidays). Security analysts need to be apprised of the new detection at this point so they know how to evaluate the events it generates and have an opportunity to ask questions about them.

Detection break-in period

After the two-week break-in period (or sooner if there are problems), the detection developer reviews the logs, events and any other pertinent information related to the detection. In addition, the testing team should have validated that the detection works in the production environment by manually creating events that the new detection should identify.

Detection enhancement

The detection developer analyzes the information and makes adjustments to the detection to address identified problems. If there are problems, the detection is set to “ready for testing”. If there are no problems, then the detection’s status is changed to “finalized for production”. This triggers the production deployment teams to open the events to the security analysts for triage.

Detection tracking

Full logging is turned down but metrics related to the detection are still collected for tracking and validation.

Periodic validation

All detections need to be periodically assessed for proper functioning and relevancy. In addition, detections need to be changed to account for new tools and techniques used by adversaries (threat intelligence).
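The DLC phases above can be summarized as a small state machine that a tracking system might enforce. The exact status strings and allowed transitions are assumptions reconstructed from the text, not from any particular tool:

```python
# Sketch of the detection life cycle (DLC) as a state machine. Status names
# and transitions are assumed from the process described in the article,
# including the loops back to "ready for testing" for noisy detections.
DLC_TRANSITIONS = {
    "ideation": {"ready for implementation"},
    "ready for implementation": {"in development"},
    "in development": {"ready for testing"},
    "ready for testing": {"ready for review"},
    "ready for review": {"ready for production", "ready for testing"},
    "ready for production": {"break-in period"},
    "break-in period": {"finalized for production", "ready for testing"},
    "finalized for production": {"periodic validation"},
    "periodic validation": {"periodic validation", "ready for testing"},
}

def advance(status, new_status):
    """Move a detection to a new status, refusing transitions the DLC
    does not allow (e.g., jumping straight from ideation to production)."""
    if new_status not in DLC_TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status!r} -> {new_status!r}")
    return new_status

status = advance("ready for review", "ready for production")
print(status)
```

Encoding the transitions this way means a tracking system can reject shortcuts — such as promoting a detection that was never tested — rather than relying on team discipline alone.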

Threat hunting with ATT&CK

Building detections and running them through the DLC takes time. While your detections are being built you can actively pursue threat actors in your network with MITRE ATT&CK.

Because MITRE ATT&CK represents a taxonomy of the behavioral TTPs attackers use to compromise corporate networks, it can also direct your efforts to find active actors in your network by focusing on detection use cases you have not yet implemented. You need to understand what the corporation's critical assets are, what attackers would likely target and why. Then use the ATT&CK taxonomy to focus on manually detecting the techniques used to compromise those critical assets.

If you focus your hunting efforts on techniques that are not yet covered by implemented detections, the information gained during the hunt can help you develop the associated detections. The hunt should also prioritize techniques that are difficult to implement as detections.
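Picking hunt targets this way reduces to a set difference: techniques prioritized for your critical assets, minus techniques already covered by deployed detections. The technique IDs below are illustrative assumptions:

```python
# Sketch of selecting hunt targets from ATT&CK coverage gaps. Both sets of
# technique IDs are assumed examples, not a recommended priority list.
implemented_detections = {"T1059", "T1547", "T1021"}    # deployed detections
prioritized_techniques = {"T1059", "T1003", "T1021",    # techniques mapped to
                          "T1055", "T1112"}             # critical assets

def hunt_backlog(prioritized, implemented):
    """Techniques to hunt manually: prioritized for critical assets but not
    yet covered by a deployed detection. Hunt findings can then seed the
    detection ideation phase of the DLC."""
    return sorted(prioritized - implemented)

print(hunt_backlog(prioritized_techniques, implemented_detections))
```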

Gamifying ATT&CK

Going through the MITRE ATT&CK process may become robotic and monotonous. To keep the process from becoming an assembly line, use gamification to make the process fun.

Have groups continually try to bypass detections and detect the bypasses, one-upping each other. Continually reward both sides: it is not about one side beating the other, but about both sides continually improving. For example, if the red team finds a way around a detection, celebrate with both the red and blue teams. When the blue team proactively learns about the bypass and anticipates a modification of a technique previously used to evade a detection, celebrate again with both teams.

Keeping up with the Joneses

Attackers and red-teamers are constantly upping their game. Attackers have the same access to MITRE’s ATT&CK framework as you do, so they know what data sources you are using and how you may be trying to detect them. You need to be aware of the evasion techniques attackers use to bypass detection and account for them.

William Burgess gave a phenomenal presentation called “Red Teaming in the EDR age” that highlighted techniques advanced red teamers use to avoid detection:

  • Misdirection: Create false and misleading information that renders endpoint detection and response solutions useless. For example, create an initially suspended process that is legitimate and logged, then modify its runtime command line and parameters and resume the process, causing the attacker-provided command to execute without being logged. Then rewrite the command line back to its legitimate version so runtime analysis (e.g., Process Explorer) is fooled.
  • Minimization: Avoid creation of processes from traditional parent processes or use a legitimate process in combination with injecting a reflected DLL into the legitimate process to execute the desired commands.
  • Memory evasion: Hide tell-tale signs of memory exploitation via reflective loading, process hollowing, process scheduling and hiding malicious code in read-only memory segments.

All the above techniques need to be accounted for.

Special thanks to Filippo Mottini, Roberto and Jose Rodriguez for reviewing this article.

Copyright © 2019 IDG Communications, Inc.
