Keeping security (and alerts) in context

CSOs working to improve their security alerting and response models should treat the context of what's being reported as critical metadata when evaluating system behavior.


Another RSA Conference is barely in the history books, and every vendor whose booth we stopped at to collect swag is already making follow-up calls.

It was Number 22 for me, and as I sit in my hotel room writing this blog, I reflect on the last two decades and wonder about the evolution of our industry and ask:

“What have we learned?”

In 2000, all the rage was the “new” report of the “Stacheldraht” attack that took place on Global Crossing over Valentine’s weekend. Our team at BindView’s RAZOR, including the likes of such luminaries as Dr. David Mann, “Simple Nomad,” and Todd Sabin, quickly responded with a counterpoint to the early version of what has become the staple of the hacker repertoire, the Distributed Denial of Service (DDoS) attack.

That was 16 years ago, yet we’re still talking about these insidious attacks, and they’re not going away any time soon. The fact is, with ransomware increasingly targeting victim enterprises and the risks associated with DDoS and IoT attacks rising, the bad guys are using pretty much the same techniques; they’re just wrapping them in more sophisticated delivery models.

SANS Fellow Ed Skoudis noted that 21 years haven’t moved the needle on some of the most publicized issues. “There are over 150 different active families of crypto ransomware available today,” Skoudis said in his RSA panel discussion, with some referencing attack models that date back to 1996.

And despite two Moscone Centers full of vendors claiming they have solved everything from polymorphic ransomware to IoT security gaps, these types of attacks are still causing most of the trouble for security practitioners.

“When we look at the emerging patterns of innovation in security,” said my friend and former colleague, VMware’s Dr. Dennis Moreau, “some fresh thinking is coming to light about how data is configured and used, despite the bewildering array of new stuff we see being hawked on those showroom floors downstairs.” Dennis and I were discussing the virtues and vices of adopting new technologies at a time when CSOs are already trying to filter through countless audit logs, IPS signatures and other “non-contextual” alerts.

Do we know too much?

“Even though we’ve seen the same kinds of vulnerabilities for years,” Dr. Moreau said, “emerging architectures are posing new kinds of challenges to security, particularly in the areas of policy, behavior and analytics. But if we don’t address the basic rules of who has access to what and how, these problems will continue to haunt us.”

Moreau noted, for example, that the reason the DDoS issue keeps coming back to haunt us is mainly gaps in contextual methods for analyzing traffic build-up and attack methodologies. “One way to change the game is via context,” he said. “We need to know more about, first, what actually sends traffic and, second, what is legitimate traffic vs. what is attack volume. We know neither today, as evidenced by the number of compromised devices sending wholly unintended (unintended by either the owner or the manufacturer) traffic.”

Moreau shared a few thoughts on how configuration-based errors still sit at the center of many of the problems CSOs face in tackling infrastructure security. “Businesses are forever trying to gain mobility and flexibility as they move into more advanced ways of sharing data,” he said. “So the key to safekeeping of that shared data is in tightly controlling those configurations, and ensuring zero tolerance in accepting any variance.”
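
Moreau’s “zero tolerance” point can be illustrated with a minimal sketch. The baseline fields and device values below are hypothetical examples, not a real device API; in practice the baseline would come from your hardening standard:

```python
# Zero-tolerance configuration drift check: any deviation from the
# golden baseline is flagged, no exceptions. Fields are illustrative.

GOLDEN_BASELINE = {
    "telnet_enabled": False,
    "default_password_changed": True,
    "firmware_version": "2.4.1",
    "open_shares": [],
}

def find_drift(observed: dict) -> list:
    """Return every setting whose observed value deviates from the baseline."""
    drift = []
    for key, expected in GOLDEN_BASELINE.items():
        actual = observed.get(key)
        if actual != expected:
            drift.append(f"{key}: expected {expected!r}, found {actual!r}")
    return drift

observed = {
    "telnet_enabled": True,           # drift: Telnet should be disabled
    "default_password_changed": True,
    "firmware_version": "2.3.0",      # drift: outdated firmware
    "open_shares": [],
}

for finding in find_drift(observed):
    print(finding)
```

The point of the sketch is the posture, not the code: there is no “acceptable” variance list, so every difference surfaces as a finding.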

The continued misconfiguration and misalignment of existing controls, as five years’ worth of Gartner studies have declared, are leveraged in 95 percent of reported compromises, and users are still the source of those errors, according to IBM Security Services. “That generally suggests that the existing tools are not being managed effectively,” Moreau noted, “resulting in extraordinarily large rulesets at the firewalls, at the IPS, in the SIEM, etc., rendering whitelists virtually ineffective.”

It all comes down to context

Rules fire against some type of defined policy. Those rules are generated from our control boundaries and devices, and there are countless of them (Snort rules coming off an IPS, YARA rules inside a sandbox, and so on), continually sending more and more data to collection points, where analysts face the growing challenge of sifting through what Dennis called “non-contextual data.”

The message, then, is that when something is triggered, it’s responding to a known set of policy expectations: the defined characteristics of how that particular device, file, or application is, or is not, supposed to perform or operate.
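
As a concrete illustration of that idea (the device class, ports and thresholds below are invented for the example), a rule is simply a policy expectation that fires when observed behavior falls outside it:

```python
# A rule as a policy expectation: alerts fire when observed behavior
# deviates from how this device class is supposed to operate.
# All values here are hypothetical.

POLICY = {
    "ip_camera": {"allowed_ports": {443}, "max_outbound_kbps": 64},
}

def check(device_class: str, port: int, outbound_kbps: int) -> list:
    """Compare one observation against the policy for its device class."""
    expected = POLICY[device_class]
    alerts = []
    if port not in expected["allowed_ports"]:
        alerts.append(f"unexpected port {port}")
    if outbound_kbps > expected["max_outbound_kbps"]:
        alerts.append(f"outbound traffic {outbound_kbps} kbps exceeds policy")
    return alerts

# A camera talking Telnet and flooding traffic trips both expectations.
print(check("ip_camera", 23, 900))
```

Note what the alert does not carry: nothing about what the camera is, where it sits, or whether that traffic matters, which is exactly the “non-contextual data” problem.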

As operations continue to advance how data is managed, contextual analysis becomes more relevant. The general thought is that considering context is key to addressing and minimizing the risk of exploits, both old and new. “All of the information being presented by these alert mechanisms is important,” Moreau said. “But analysts need to be able to look at all of the intentional information as well, including the structure of the applications themselves and their respective system configurations, and to include this very information-rich, slow-varying context in the analytics environment. We need to see both the behavioral indicators and the contextual indicators together.”

It’s all about signal-to-noise, according to Moreau: “The ability to see behavior at different points throughout the computing infrastructure, and to use it coherently, improves the signal-to-noise ratio in any form of analytics, and it gives us better actionability as a consequence.”
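
One way to picture that signal-to-noise improvement is to weight a raw behavioral indicator by the slow-varying context of the asset it came from. The context fields and multipliers below are assumptions made up for the sketch, not any product’s scoring model:

```python
# Sketch: the same behavioral signal ranks very differently once
# contextual indicators about the asset are factored in.
# Weights and context fields are illustrative assumptions.

def score_alert(behavioral_severity: float, context: dict) -> float:
    """Weight a raw behavioral signal by what we know about the asset."""
    score = behavioral_severity
    if context.get("internet_facing"):
        score *= 1.5     # reachable by attackers
    if context.get("handles_regulated_data"):
        score *= 1.5     # e.g. in PCI DSS scope
    if context.get("config_drifted"):
        score *= 2.0     # known misconfiguration on this asset
    return score

noisy = score_alert(0.4, {"internet_facing": False})
signal = score_alert(0.4, {"internet_facing": True, "config_drifted": True})
print(noisy, signal)  # identical behavior, very different priority in context
```

The behavioral indicator alone is ambiguous; combined with context, the analyst gets an ordering worth acting on.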

7 things to do today

The net-net of both Moreau’s and Skoudis’s comments is that despite the increased complexity of operating in a more fluid (and potentially volatile) computing environment, getting back to basics may be the best place to look for answers:

  1. Establish a foundation for what Skoudis calls “system and network security hygiene.” Follow a specific set of proven security controls that have been defined to help systems operate more safely in their respective environments. Many resources are available to help identify baselines, including OWASP and the NIST Cybersecurity Framework.
  2. Watch your network shares. “Having too many network shares between mobile devices is asking for trouble,” Skoudis explained. Network shares should be available on file servers only when there is a clearly articulated business need, with permissions being strictly controlled to those shares.
  3. Change default passwords. More than 20 years on, this basic tenet of good security “hygiene” is still underestimated. And where a vendor doesn’t allow users to change a default password, keeping the firmware updated is essential to maintaining strict control over the infrastructure (and the all-important configuration of those devices).
  4. A lot of internet devices ship with Telnet enabled. Skoudis promotes turning it off immediately. “Better to use SSH and HTTPS for communication protocols,” he said.
  5. Map your environment—including all segmentation, containers, application footprints (on-prem, in the cloud and mobile).
  6. Use tags to expose classification, intentions, and observations at container, segment and compartment boundaries. This is critical, said Moreau, in the contextual analysis of events. “Tags and policies have both structural and compartmental restraints on whether a tag represents an intention or an observed event, based on a predefined set of policy parameters (like PCI DSS).” The team at Aqua has written about the potential gaps between DevOps adoption and growing security risks: “Can security be ingrained into the development-to-production process?” Here’s the eBook from Aqua that addresses this concern.
  7. “Drain the swamp of vulnerabilities.”—Skoudis said that every organization should require penetration tests, through which vulnerabilities may be ferreted out. “Many of the vulnerabilities are just really trivial, like cross-site scripting.”
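
Item 6 above, tags declaring intent at a boundary, can be sketched in a few lines. The segment and tag names below are hypothetical, invented purely to show the shape of the check:

```python
# Sketch: intent tags declared at a segment boundary vs. tags observed
# on traffic crossing it. Segment and tag names are illustrative.

INTENT_TAGS = {
    "segment:pci": {"allowed": {"app:payments", "db:cardholder"}},
}

def boundary_violations(segment: str, observed_tags: set) -> set:
    """Return observed tags that the segment's declared intent never allowed."""
    return observed_tags - INTENT_TAGS[segment]["allowed"]

# A marketing app showing up inside the PCI segment stands out immediately,
# because the declared intent gives the observation its context.
print(boundary_violations("segment:pci", {"app:payments", "app:marketing"}))
```

This is the intention-vs-observation distinction Moreau describes: the tag policy says what should cross the boundary, so anything else needs no signature to be suspicious.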

Moreau believes that more granular adherence to policies, mapped more tightly to the contextual characteristics of how a configuration is expected to behave, will result in fewer false alarms and improve security teams’ ability to respond to specific issues: “Multiple points of reference to how events take place can reinforce what they should be doing, which immediately highlights what shouldn’t be happening.”

And back to my original question: What have we learned after more than 20 years of RSA conferences?

Well, after collecting a bag of tchotchkes, and scores of selfies with colleagues, my friends and I concluded that RSA — above all else — is something of a hybrid class reunion of sorts, and that the problems CSOs most likely will face on a day-to-day basis will get solved the old-fashioned way: understanding the relationship between “Assets” and “Access” and how to manage both in a more complex computing environment.

Keeping both in context with how your computing environment is supposed to behave, though, may be a good place to start looking for answers (and discrepancies).

Next post, we will look more into the increasing dependence on containers and how the DevOps movement is affecting Moreau’s position on context-based configuration assessments to reduce the risk of exposure. By the way, you can listen to Dennis Moreau’s full RSA presentation here.


This article is published as part of the IDG Contributor Network.
