Yes, you can measure cybersecurity efficacy

Recent examples show that AI and other measurement tools can provide meaningful assessments. Time to make cybersecurity efficacy a thing.

I hate to do this, but consider the following thought exercise: Transport yourself back to fall 2020, when the entire world was waiting for a COVID vaccine. We knew there were a few candidates (in fact, one mRNA vaccine was formulated in late January) and were just waiting on the proof: the efficacy studies. Most of the world was elated to learn in early December 2020 that efficacy rates were 95%. For context, a typical flu vaccine provides about 60% efficacy.

Now consider how you would have felt if, instead of conducting randomized controlled trials that tested outcomes from the vaccine, Pfizer and Moderna had asserted that the vaccine would work because the scientists who created it had strong credentials, the lab environment was properly managed, procedures were impeccably followed, and all the paperwork was in order. I’m not sure about you, but I would have been devastated and probably irate.

We follow a pattern like this routinely in cybersecurity. I’ll spare you the compliance audit tedium.

Measuring cybersecurity effectiveness

Now imagine a world in cybersecurity where we actually measure the effectiveness of our programs. Where we use the power and scalability of computers to perform the same types of tests considered a minimum requirement in other fields. Where we manage our control environments and assess the outcomes to determine the strength of our programs.

A common reaction to a proposal like this is to be snarky or even scornful, reminding the instigator (that’s what those of us who propose such things are often called) that computer environments are incredibly complex and an approach like this would be impossible. As if sequencing the 3 billion base pairs of the human genome and using that as a reference model for 7 billion humans full of cells dividing, neurons firing, and chemicals interacting is simple. 

The truth is that computing environments are actually easier to measure. The jury is still out on the benefits of artificial intelligence in cybersecurity (at least in a broad sense). However, one quick win is that to leverage AI at all, it must be able to ingest the data it analyzes. Once that data is made available, it is trivial for computers to count the instances and elements of pertinent activity, and those counts could readily feed this kind of efficacy measurement.

Cybersecurity efficacy use cases

Opportunities for efficacy experiments abound. For example, an organization could apply the same techniques Microsoft used in its Security Intelligence Report, volume 20: “The MSRT reported that computers that were never found to be running real-time security software during 2H15 were between 2.7 and 5.6 times as likely to be infected with malware as computers that were always found to be protected.” A closer look at this data reveals an efficacy score of about 64%.
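To see where a number like that can come from, here is one possible reconstruction (my assumption, not Microsoft’s published method): if the “times as likely to be infected” figures are read as relative risks, the standard vaccine-efficacy formula, efficacy = 1 − (risk in the protected group ÷ risk in the unprotected group), yields a range whose lower bound is close to the ~64% cited above.

```python
def efficacy_from_relative_risk(rr: float) -> float:
    """Efficacy of a control, given that the unprotected group is `rr`
    times as likely to suffer the outcome as the protected group.

    efficacy = 1 - (risk_protected / risk_unprotected) = 1 - 1/rr
    """
    return 1.0 - 1.0 / rr

# MSRT figures: unprotected machines were 2.7x to 5.6x as likely to be infected
low = efficacy_from_relative_risk(2.7)   # ~0.63, near the article's ~64%
high = efficacy_from_relative_risk(5.6)  # ~0.82
print(f"real-time security software efficacy: {low:.0%} to {high:.0%}")
```

This is the same arithmetic used to summarize the COVID trial results, which is the article’s point: the measurement machinery already exists; cybersecurity just has to collect the outcome data to plug into it.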

Or you could perform an experiment like the one Google and New York University conducted, which concluded, “We show that knowledge-based challenges prevent as few as 10% of hijacking attempts rooted in phishing and 73% of automated hijacking attempts. Device-based challenges provide the best protection, blocking over 94% of hijacking attempts rooted in phishing and 100% of automated hijacking attempts.”

While neither of these studies demonstrates quite the level of rigor of the COVID vaccine efficacy studies, both can easily be replicated and applied to specific enterprise environments.

The current practice of using PCI compliance audits to demonstrate program quality did nothing to keep Target from being breached (and its compliance status was essentially revoked retroactively after the incident). Replacing periodic audits with empirical data from continuous measurement would revolutionize our understanding of diligence and negligence and provide key insight into the best ways to protect our environments. The caveat here is that no approach is foolproof. Heck, even with instant replay it is amazing how often the refs get calls “wrong” (usually when the call goes against my Eagles). But an empirical approach that measured the nature and types of activity occurring in real time, the number and types of controls being applied, and the ultimate outcomes would provide a level of objective analysis head and shoulders above existing methods.

I was once told that cybersecurity efficacy “wasn’t a thing” and I had no response, because it was true. So, let’s make it one. 


Copyright © 2022 IDG Communications, Inc.
