5 Lessons Learned from Log4j

The latest security vulnerability offers lessons learned that can help organizations move forward with improved security.

Security in computing

In December, a critical Log4j vulnerability known as Log4Shell impacted the world of security in ways that few vulnerabilities previously have. It’s clear by now that the potential for damage from this vulnerability is quite high and will persist for a very long time.

It’s hard not to compare Log4Shell with the emergence of EternalBlue over five years ago. Both are critical code injection vulnerabilities requiring patching, with severe consequences for those who ignore them. But unlike EternalBlue, which is found only in Windows, Log4Shell is present in a myriad of applications and is notoriously difficult to track. Those infected by EternalBlue were seen as victims, while those infected by Log4Shell are considered much more culpable by regulators. And while EternalBlue was almost immediately abused for the widespread infection of WannaCry, Log4Shell has yet to be tied to a high-profile attack.

It’s critical we continue to learn from these events. I wrote extensively about the emergence and techniques used in the exploitation of Log4j in my Log4j retrospective series. Now I’d like to highlight the key takeaways. Many more lessons will be uncovered as the hunt to eradicate this vulnerability moves forward. However, there are already five fundamental lessons.

1.   The new norm

Both the complexity of software and the rate at which end users demand new features continue to grow rapidly and without bounds. To satisfy the needs of end users in the time frames required, developers must rely on a rapidly growing set of available libraries, language ecosystems, and third-party infrastructure and services. As a result, larger and larger portions of the functionality of any piece of software are composed of components the developers themselves may never have touched or fully understood.

In any software dependency graph, vulnerabilities are inherited from leaf nodes, or shared code and services, upward to the root node, or the product being programmed. As more of these leaf nodes are added to a project (a necessity, given the pressures above), the risk of inheriting a vulnerability grows.
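This leaf-to-root inheritance can be made concrete with a small sketch. The dependency graph and package names below are entirely illustrative, not drawn from any real project: a single vulnerable leaf exposes every node that transitively depends on it.

```python
# Hypothetical dependency graph: each key depends on the packages in its list.
# "app" is the root node (the product); "log4j-core" is a leaf with a known CVE.
deps = {
    "app": ["web-framework", "logging-facade"],
    "web-framework": ["http-client"],
    "logging-facade": ["log4j-core"],
    "http-client": [],
    "log4j-core": [],
}

vulnerable_leaves = {"log4j-core"}

def inherits_vulnerability(node, deps, vulnerable, seen=None):
    """A node is exposed if it is vulnerable itself or if any of its
    transitive dependencies is vulnerable (risk flows leaf -> root)."""
    if seen is None:
        seen = set()
    if node in vulnerable:
        return True
    if node in seen:  # guard against dependency cycles
        return False
    seen.add(node)
    return any(inherits_vulnerability(d, deps, vulnerable, seen)
               for d in deps.get(node, []))

print(inherits_vulnerability("app", deps, vulnerable_leaves))          # True
print(inherits_vulnerability("http-client", deps, vulnerable_leaves))  # False
```

Note that `app` never references `log4j-core` directly; it is exposed purely through the `logging-facade` intermediary, which is exactly how many organizations were caught by Log4Shell.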

This all leads to an inevitable conclusion: These types of vulnerabilities are not only here to stay, but will continue to expand in frequency and impact.

This is the new norm.

2.   Risk is recursive

We often think of risk only with respect to the systems, software, and functions we can directly control. More advanced organizations are beginning to assess risk one level out — for example, by asking their developers to examine the trustworthiness of a given library.

But as more systems and software are built upon layer after layer of third-party code, organizations will increasingly have to assess not only the risk of a given library or partner, but also the practices of that development community or vendor, to ensure they are examining their dependencies as well.

Every node in the dependency tree and supply chain should be assessed by you, your partners, and/or the respective development community to determine if tolerable risk levels are met.
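One way to frame this recursive assessment: a component is only as trustworthy as its riskiest transitive dependency. The scores, threshold, and package names below are hypothetical placeholders, not a real risk model; real assessments would weigh many factors per node.

```python
# Hypothetical per-node risk scores (0.0 = fully trusted, 1.0 = untrusted),
# as might come from auditing each vendor's or community's practices.
risk = {
    "app": 0.1,
    "web-framework": 0.2,
    "logging-lib": 0.9,  # an under-maintained transitive dependency
}
deps = {
    "app": ["web-framework"],
    "web-framework": ["logging-lib"],
    "logging-lib": [],
}

def effective_risk(node):
    """Recursive risk: take the maximum score over the node's entire
    subtree, since any single weak dependency exposes the whole chain."""
    return max([risk[node]] + [effective_risk(d) for d in deps[node]])

TOLERANCE = 0.5
print(effective_risk("app"))               # 0.9
print(effective_risk("app") <= TOLERANCE)  # False
```

Here `app` and `web-framework` each look acceptable in isolation, yet the product as a whole fails the tolerance check — the point of assessing every node, not just the first level.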

3.   Visibility unlocks speed

Even with the above risk assessments in place, vulnerabilities are going to occur. We must accept this fact. The question is not how we can prevent them altogether, but how we can respond more effectively when they happen.

To that end, visibility is paramount. Many organizations struggle with patching because they don’t know what machines are affected in the first place. Enterprises must have systems in place that provide visibility into what is running in the data center and cloud.

The more comprehensive and accurate the visibility is, the faster an organization can react and patch necessary assets.
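As a small illustration of what such visibility enables, the sketch below walks a directory tree and flags log4j-core jars by filename, treating 2.17.1 as the fully fixed release. This is only a heuristic I'm assuming for the example — shaded, renamed, or nested jars require deeper inspection of jar contents — but even a crude inventory like this is far faster than not knowing which assets are affected at all.

```python
import os
import re

# Match e.g. "log4j-core-2.14.1.jar" and capture the version triple.
JAR_RE = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")
FIXED = (2, 17, 1)  # releases below this are assumed to need patching

def find_vulnerable_jars(root):
    """Walk the tree under `root` and return paths of log4j-core jars
    whose filename version is older than the fixed release."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            m = JAR_RE.search(name)
            if m and tuple(map(int, m.groups())) < FIXED:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Run across a fleet (or fed from an asset inventory system), output like this turns "are we affected?" from a days-long question into a minutes-long one.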

4.   Filter out the obvious

Many vulnerabilities can only be attacked through a chain of exploits. Cutting off any piece of the chain is often enough to prevent full exploitation. As a result, systems that filter out known and obvious attacks are critical.

Organizations should prioritize the following systems:

  • Endpoint protection platforms (EPP)

Protect endpoints from known malicious software

  • Web application firewalls (WAF)

Protect web applications from known malicious payloads and threat actors — consider Akamai’s best-in-class Kona protection

  • DNS firewall

Protect endpoints from visiting malicious domains and filter out malicious DNS payloads — consider Akamai Enterprise Threat Protector

  • Secure web gateway (SWG)

Protect endpoints from downloading malware and visiting malicious sites on the internet — consider Akamai Enterprise Threat Protector

  • Multi-factor authentication (MFA)

Reduce the risk of stolen credentials allowing access into your enterprise where an exploit chain can be delivered — consider Akamai MFA

  • Identity-based segmentation

Restrict software and systems to communicating with only those machines necessary to complete their tasks — consider Akamai Guardicore Segmentation

  • Zero Trust Network Access (ZTNA)

Limit the impact of infected end users coming into the network — consider Akamai Enterprise Application Access

5.   Least privilege reigns supreme

Finally, organizations should fully embrace the principle of least privilege. Lock down servers, machines, and software so that they may reach only the systems required to perform their tasks.

For example, many of the systems that made outbound LDAP calls as part of the Log4j exploit never had a legitimate need to use LDAP. Such systems should have LDAP access firewalled off. Another example: If a service only answers inbound requests, block its outbound connections.
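The deny-by-default logic behind that advice can be sketched in a few lines. The service names, ports, and allowlist below are invented for illustration; in practice this policy lives in a host firewall or segmentation product, not in application code.

```python
# Hypothetical egress policy: each service may reach only the destinations
# it has explicitly declared. Anything undeclared is denied.
EGRESS_ALLOWLIST = {
    "web-frontend": {("api-server", 8443)},
    "api-server": {("database", 5432)},
    # No entry for "log-service": it only answers inbound requests,
    # so every outbound connection it attempts should be denied.
}

def egress_allowed(service, dest_host, dest_port):
    """Deny by default: permit only explicitly declared destinations."""
    return (dest_host, dest_port) in EGRESS_ALLOWLIST.get(service, set())

print(egress_allowed("web-frontend", "api-server", 8443))      # True
print(egress_allowed("log-service", "attacker.example", 389))  # False
```

Under a policy like this, a logging service exploited via Log4Shell cannot complete the outbound LDAP callback (port 389) the attack depends on, and the chain breaks even though the vulnerable code is still present.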

By applying the principles of least privilege to all systems and software in your control, you can greatly reduce the threat surface when a vulnerability arises, and in many cases, stop the attack chain before you are impacted.

The end … but not the end

I suspect we’ll continue to hear about Log4Shell in the coming months and potentially even years. Organizations with poor patching strategies and bad security practices will continue to dominate the headlines, falling prey to this vulnerability even though remediation strategies are available.

However, over time the headlines will eventually fade away, just as they have for EternalBlue. And it is from this history that we can perhaps draw the most important lesson of all: There will always be another Heartbleed, Shellshock, EternalBlue, and yes, even Log4Shell vulnerability around the corner. The question is: Will your organization grow complacent over time and suffer the consequences when it happens, or will you be prepared? Get started here.

Charlie Gero is VP and CTO of the Security Technology Group at Akamai and leads its Advanced Projects R&D Group. He focuses on bleeding-edge research in the areas of security, applied mathematics, cryptography, and distributed algorithms, aiming to build the next generation of technologies to protect Akamai’s growing customer base. Through this research, he has secured nearly 30 patents in cryptography, compression, performant network systems, real-time media distribution, and more. Prior to his 15 years with Akamai, Gero founded a startup and served in key computer science positions in the pharmaceutical and networking industries. He holds degrees in physics and computer science.


Copyright © 2022 IDG Communications, Inc.