Another RSA Conference is barely in the history books, and every vendor whose booth we stopped at to collect something is already making the follow-up calls. It was number 22 for me, and as I sit in my hotel room writing this blog, I reflect on the last two decades, wonder about the evolution of our industry, and ask: "What have we learned?"

In 2000, all the rage was the "new" report of the "Stacheldraht" attack on Global Crossing over Valentine's weekend. Our team at BindView's RAZOR, including such luminaries as Dr. David Mann, "Simple Nomad," and Todd Sabin, quickly responded with a counterpoint to the early version of what has become a staple of the hacker repertoire: the Distributed Denial of Service (DDoS) attack.

That was 16 years ago, and we're still talking about these insidious attacks; they're not going away any time soon. The fact is, with ransomware increasingly targeting victim enterprises and the risks associated with DDoS and IoT attacks rising, the bad guys are using pretty much the same techniques; they're just wrapping them in more sophisticated delivery models.

SANS Fellow Ed Skoudis noted how 21 years hasn't moved the needle on some of the more publicized issues: "There are over 150 different active families of crypto ransomware available today," Skoudis said in his RSA panel discussion, with some referencing attack models that go back to 1996. And despite two Moscone Centers full of vendors claiming to have solved everything from polymorphic ransomware to IoT security gaps, these types of attacks are still causing most of the trouble for security practitioners.

"When we look at the emerging patterns of innovation in security," said my friend and former colleague, VMware's Dr.
Dennis Moreau, "some fresh thinking is coming to light about how data is configured and used, despite the bewildering array of new stuff we see being hawked on those showroom floors downstairs." Dennis and I were discussing the virtues and vices of adopting new technologies while CSOs are already struggling to filter through countless audit logs, IPS signatures and other "non-contextual" alerts.

Do we know too much?

"Even though we've seen the same kinds of vulnerabilities for years," Dr. Moreau said, "emerging architectures these days are posing new kinds of challenges to security, particularly in the areas of policy, behavior and analytics. But if we don't address the basic rules of who has access to what, and how, these problems will continue to haunt us."

Moreau noted, for example, that the DDoS issue keeps coming back to haunt us mainly because of gaps in contextual methods for analyzing traffic build-up and attack methodologies. "One way to change the game is via context," he said. "We need to know more about, first, what actually sends traffic and, second, what is legitimate traffic versus what is attack volume. We know neither today, as evidenced by the number of compromised devices sending wholly unintended (unintended by either the owner or the manufacturer) traffic."

Moreau shared a few thoughts on how configuration-based errors still sit at the center of many of the problems CSOs face in tackling infrastructure security. "Businesses are forever trying to gain mobility and flexibility as they move into more advanced ways of sharing data," he said.
"So the key to safekeeping that shared data is in tightly controlling those configurations, and ensuring zero tolerance for accepting any variables."

The continued misconfiguration and misalignment of existing controls, as five years' worth of Gartner studies have declared, are leveraged in 95 percent of reported compromises, and, so says IBM Security Services, it's still us users who are the reason for the errors. "That generally suggests that the existing tools are not being managed effectively," Moreau noted, "resulting in extraordinarily large rulesets at the firewalls, at the IPS, in the SIEM, etc., rendering whitelists virtually ineffective."

It all comes down to context

Rules fire against some type of defined policy. Those rules are generated from our control boundaries and devices, and there are countless of them (whether Snort rules coming off an IPS, YARA rules inside a sandbox, or elsewhere), continuing to send more and more data to collection points, where analysts face a growing challenge of sifting through what Dennis called "non-contextual data." The message, then, is that when something is triggered, it's basically responding to a known set of policy expectations, based on the expected characteristics of how that particular device, file or application is (or is not) supposed to perform or operate.

As operations continue to advance how data is managed, contextual analysis becomes ever more relevant. The general thought on the matter is that "context" is a key to addressing and minimizing the risk of exploits, both old and new. "All of the information being presented by these alert mechanisms is important," Moreau said.
"But analysts need to be able to look at all of the intentional information as well, including the structure of the applications themselves and their respective system configurations, and to bring this very information-rich, slow-varying context into the analytics environment. We need to see both: the behavioral indicators as well as the contextual indicators, together."

It's all about signal-to-noise behavior, according to Moreau: "The ability to see behavior at different points throughout the computing infrastructure, and to use it coherently, improves the signal-to-noise ratio in any form of analytics, and it gives us better actionability as a consequence."

7 things to do today

The net-net from Moreau's and Skoudis' comments might suggest that despite the increased complexity of operating in a more liquid (and potentially volatile) computing environment, getting back to basics may be the best place to look for answers:

1. Establish a foundation for what Skoudis calls "system and network security hygiene." Follow a specific set of proven security controls that have been defined to help systems operate more safely in their respective environments. Many resources are available to help identify baselines, including OWASP and the NIST Cybersecurity Framework.

2. Watch your network shares. "Having too many network shares between mobile devices is asking for trouble," Skoudis explained. Network shares should be available on file servers only when there is a clearly articulated business need, with permissions to those shares strictly controlled.

3. Change default passwords. More than 20 years on, and we are still seeing this basic tenet of good security "hygiene" underestimated. While some vendors may not allow users to update or change a default password, updating the firmware is also essential to maintaining strict control over your infrastructure (and the all-important configuration of those devices).

4. Turn off telnet. A lot of internet devices are enabled via telnet; Skoudis promotes the idea of turning it off immediately. "Better to use SSH and HTTPS for communication protocols," he said.

5. Map your environment, including all segmentation, containers and application footprints (on-prem, in the cloud and mobile).

6. Use tags to expose classification, intentions and observations at container, segment and compartment boundaries. This is critical, said Moreau, in determining contextual analysis of events: "Tags and policies have both structure and compartmental restraints on whether a tag represents an intention or an event that is observed, based on a predefined set of policy parameters (like PCI DSS)." The guys at Aqua have written about the potential gaps between DevOps adoption and growing security risks: "Can security be ingrained into the development-to-production process?" Here's the eBook from Aqua that addresses this concern.

7. "Drain the swamp of vulnerabilities." Skoudis said that every organization should require penetration tests, through which vulnerabilities can be ferreted out. "Many of the vulnerabilities are just really trivial, like cross-site scripting."

Moreau believes that more granular adherence to policies, mapped more tightly to the contextual characteristics of how a configuration is expected to behave, will result in fewer false alarms and will increase security teams' ability to respond to specific issues: "Multiple points of reference to how events take place can reinforce what they should be doing, which immediately highlights what shouldn't be happening."

And back to my original question: what have we learned after more than 20 years of RSA conferences? Well, after collecting a bag of tchotchkes and scores of selfies with colleagues, my friends and I concluded that RSA, above all else, is something of a hybrid class reunion, and that the problems CSOs will most likely face on a day-to-day basis will get solved the old-fashioned way: by understanding the relationship between "assets" and "access," and how to manage both in an increasingly complex computing environment. Keeping both in context with how your computing environment is supposed to behave may be a good place to start looking for answers (and discrepancies).

Next post, we will look more into the increasing dependence on containers and how the DevOps movement is affecting Moreau's position on context-based configuration assessments to reduce the risk of exposure. By the way, you can listen to Dennis Moreau's full RSA presentation here. And send out an alert with your comments on Facebook.
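Postscript: two of the checklist items above, telnet exposure and unchanged default passwords, lend themselves to simple automation against a device inventory. Here is a minimal Python sketch; the inventory format, field names and the `audit` helper are hypothetical illustrations, not taken from any specific product:

```python
# Minimal sketch: flag devices that still expose telnet or still use a
# vendor default password. The inventory record format is hypothetical.

TELNET_PORT = 23

def audit(devices):
    """Return (hostname, issue) findings for two of the checklist items."""
    findings = []
    for dev in devices:
        # Checklist item 4: telnet should be off; prefer SSH/HTTPS.
        if TELNET_PORT in dev.get("open_ports", []):
            findings.append((dev["hostname"], "telnet enabled; prefer SSH/HTTPS"))
        # Checklist item 3: vendor default passwords must be changed.
        if not dev.get("default_password_changed", False):
            findings.append((dev["hostname"], "default password unchanged"))
    return findings

if __name__ == "__main__":
    inventory = [
        {"hostname": "cam-01", "open_ports": [23, 80], "default_password_changed": False},
        {"hostname": "fw-01", "open_ports": [22, 443], "default_password_changed": True},
    ]
    for host, issue in audit(inventory):
        print(f"{host}: {issue}")
```

Even a simple report like this gives the "context" Moreau describes a place to start: a known inventory checked against a known policy.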