Anton Chuvakin from Gartner recently blogged about the overall low maturity in cyber security. He made some interesting points, especially on how vendors, investors and the media rely on flawed statistics, surveys and a fair dose of wishful thinking when assessing the security maturity of the average enterprise, projecting market growth and gauging product viability.
My experience, both at Gartner and as a penetration tester, has been the same. For example, I never ran a project where I needed more than off-the-shelf open source tools and known exploits to breach an organization. I never had to reach deep into the bag of tricks; basic approaches were sufficient.
Even more surprising was the typical reaction of the organizations being assessed. The findings were often disputed, misunderstood or not taken seriously. Based on feedback from other penetration testers, my experience is anything but unique – it is the rule.
Part of this can be explained by the Dunning-Kruger effect, which states that people with little skill or knowledge overestimate their own ability. Another contributor is having only local knowledge: you don’t know how good or bad others are at security, so you assume you are doing well.
Lastly, there’s a common misunderstanding about what constitutes “good security.” Many organizations have reduced this concept to a pure checkbox exercise, without understanding why the boxes must be checked. Good cyber security is not determined by an organization itself, or by comparison with other enterprises.
Good cyber security is measured by the success or failure of our adversaries. It is the attackers who determine most of the rules of engagement. IT security’s job is to prevent them from gaining access to the infrastructure, data and whatever else they consider valuable assets. Anything short of that, and the organization is in a “bad” place, security-wise.
Security maturity helps prevent adversaries from completing the full cyber kill chain. Yet there are many “failings” when it comes to understanding what maturity represents. Let’s consider the top three.
1. Relying on prevention
Too many organizations rely on prevention, putting their faith in antivirus, vulnerability management, intrusion prevention systems and firewalls. Cyber security professionals typically dismiss Hollywood portrayals of protagonists “breaching the firewall” as unrealistic. Yet, in this case, fiction is closer to reality.
Let’s be clear: there has never been a time in the history of computing when a purely preventative approach has been unbreakable. Hackers have always known about the inherent weaknesses of signature-based approaches, and have been incredibly innovative in devising evasion and bypass techniques.
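To make that concrete, here is a minimal, purely illustrative Python sketch – a toy scanner, not any real product’s engine – of how naive signature matching works and how little effort it takes to walk past it. One layer of encoding on the same payload is enough.

```python
import base64

# Toy signature database: plain byte patterns a naive scanner looks for.
# Purely illustrative; real engines are more sophisticated, but the
# cat-and-mouse dynamic is the same.
SIGNATURES = [b"evil-payload", b"rm -rf /"]

def naive_scan(blob: bytes) -> bool:
    """Return True if any known signature appears verbatim in the blob."""
    return any(sig in blob for sig in SIGNATURES)

original = b"run: evil-payload"
evaded = base64.b64encode(original)  # trivial transformation of the same payload

print(naive_scan(original))  # True  -- the plain payload is caught
print(naive_scan(evaded))    # False -- one layer of encoding slips past the check
```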
That is not to say that prevention has no place in a modern security architecture. Prevention is one of the three pillars of threat management, alongside detection and response. We prevent what we can; we detect and respond to the rest. Presenting a hardened attack surface forces an adversary to work harder to gain access, and to create enough “noise” to be detected. Nevertheless, a purely prevention-based approach is about as effective as the French Maginot Line was at keeping Germany out in World War II.
This failing, sadly, will not die.
Machine learning has given the prevention faithful a new lease on life. While it may improve the effectiveness of preventative approaches, the improvement is only marginal. True prevention requires a 100% detection rate with no false positives or false negatives. It also copes poorly with the fact that there is an intelligent adversary, able to adapt and evolve their tactics, techniques and procedures (TTPs).
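A back-of-the-envelope calculation, using assumed and purely illustrative numbers rather than figures from any product, shows why “marginal” is not the same as prevention: even a detector that is right 99.9% of the time still lets real attacks through over time and buries analysts in false alarms.

```python
# Assumed, illustrative numbers -- the point is the shape of the math, not the values.
events_per_day = 1_000_000      # files/emails/connections inspected per day
truly_malicious = 50            # actual malicious events among them
detection_rate = 0.999          # 99.9% true positive rate
false_positive_rate = 0.001     # 0.1% of benign events flagged anyway

caught = truly_malicious * detection_rate
missed = truly_malicious * (1 - detection_rate)
false_alarms = (events_per_day - truly_malicious) * false_positive_rate

print(f"caught:       {caught:.2f}")        # ~49.95 real detections per day
print(f"missed:       {missed:.2f}")        # ~0.05/day -> roughly one miss every few weeks
print(f"false alarms: {false_alarms:.0f}")  # ~1000 alerts/day that are pure noise
```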
2. Relying on technology
This failing goes hand in hand with #1: the approach supplements prevention with monitoring technologies, but the monitoring is deployed with the same mindset – wait for the technology to alert on threats. The Target and Home Depot breaches are good examples of this failing in action. In both cases, the organizations had alerts and indicators of the breach in progress.
Good security, and especially effective monitoring, requires people and processes. While dedicated monitoring technologies increase the scope and volume of data sources being evaluated, they also multiply the false positives inherent in more basic detection approaches. As Augusto Barros recently wrote, we are nowhere near having a “brain in the vat”, a.k.a. true artificial intelligence, that would allow us to replace people and let monitoring run in a fully automated manner.
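The staffing arithmetic makes the same point. With assumed, illustrative numbers, even a modest alert volume translates into a triage workload that technology alone cannot absorb and that only people, backed by a sound triage process, can handle.

```python
# Assumed, illustrative numbers -- the point is the scaling, not the exact values.
alerts_per_day = 1000            # output of the monitoring stack after basic filtering
minutes_per_alert = 10           # rough human triage time per alert
analyst_minutes_per_day = 8 * 60 # one analyst's working day

analysts_needed = alerts_per_day * minutes_per_alert / analyst_minutes_per_day
print(f"analysts needed just for triage: {analysts_needed:.1f}")  # ~20.8
```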
3. No management buy-in
Having the right technology, people and processes is fundamentally tied to one thing: management buy-in. If executives understand that cyber security is critical to the business, the organization will have a greater focus on it. In addition, good security requires teeth. It is still common for the average employee to be unaware of, or to wilfully ignore, security best practices. Sadly, this extends to executives as well. Given that executives are frequently the target of spear phishing attempts, and are typically responsible and liable for the consequences of a breach, this is a sure indicator of a lack of buy-in.
Organizations with a high security maturity have executives who “get it,” or who at least recognize they “don’t get it” and are willing to consider and act upon advice from their security leaders.
This may sound “easy,” yet it remains a stumbling block, primarily because growth and profitability have been the main priorities of most businesses, at the expense of good cyber security. This mindset must change. As a large number of recent breaches have demonstrated, good security is increasingly tied to business success and continuity.
Management buy-in is not the last step on the security maturity curve; it should be the first.