Best of breed: how secure are you, really?

Mar 15, 2018 | 5 mins
Data and Information Security | Network Security | Risk Management

If everyone claims to be the best, how can we rationally choose what we will deploy?

Every year at RSA I look for the innovators and the outliers, but I’m more likely to encounter one vendor after another offering pretty much the same “best of breed” security solution. How can so many vendors offering their products in the same product category claim to be the best of breed?  I find novelty far more interesting.

This is a serious question: if everyone claims to be the best, how can we rationally choose what to deploy? And that raises a deeper question: how do we actually measure the security posture of a large enterprise system?

Heed Lord Kelvin’s words

CISOs must routinely ask questions such as: Is my organization secure? Are the personnel I protect sufficiently educated and trained to minimize risk to the organization? Is my organization complying with regulations on managing and safeguarding sensitive data?

Lord Kelvin’s famous dictum, “to measure is to know,” applies here, as does the oft-quoted business adage, “You can’t manage what you can’t measure.” But measurement alone is not enough to understand the effectiveness of any system. We also have to design systems for continuous measurement, since so many of our systems change continuously, especially when we have chosen to deploy one of those best-of-breed solutions. And our adversaries certainly change.

But how do I measure the security risk of a new technology or service provided to our customers? Are my deployed best-of-breed security controls still working? These and many other related questions are answered qualitatively, if at all; hard measurements that objectively answer them are rare. It is especially important to answer them longitudinally, over time, to understand whether the organization’s security posture has actually improved with the introduction of new security technologies. Asking vendors provides little help, since a best-of-breed supplier will obviously claim to offer the best solution. They all do. We must rely on a more structured approach to answering this question.

Security is a property of a system, and one that is hard to define as an absolute measurement, perhaps as hard as measuring the beauty of a flower. Beauty, too, is a property, but only the eye of the beholder can judge it. That’s not acceptable when protecting the core assets of a corporation. So how does a modern CISO avoid an ad hoc security architecture designed solely on best-of-breed claims, and instead purchase and deploy with some assurance that the organization’s security posture has indeed improved?

How might we measure security?

Red teaming and expert opinion are still the primary means of convincing a CISO that one security architecture is more secure than another, but these empirical measurements are limited and do not scale well. It is of course valuable to learn of any apparent weakness in a particular deployment, but covering all potential weaknesses remains an open problem. So what might we do?

Designing services that maintain the privacy of user data is a good analogy to consider. In differential privacy, for example, one considers how privacy is affected by some change to a system design, rather than trying to measure its absolute value. Relative measurement is possible.

Wouldn’t it be wise to build into a modern security architecture a continuous and automatic testing process to ensure the security posture of a system is maintained and not getting worse with time, as one or more new best-of-breed security controls are deployed? That’s an interesting idea. A security architecture that “self-tests” its own health. But what exactly do we test?

This is possible if we focus on the real prize: stopping the loss of sensitive data. That is what a security architecture must do. Is there a means to measure the propensity of a system to leak information from an enterprise network?

Tracking the quantity of data from source to sink may provide a means of answering these questions with sufficient confidence to make informed decisions. But tracking all data isn’t easy at scale. Perhaps a focused experiment in which we know ground truth would satisfy our need to measure.

Think of dye injected into a fluid held within a membrane. Dye seeping through the membrane is a direct indicator of a leak. Simple. But what “dye” might we inject into an enterprise network and watch for, without purposely losing real and valuable data? Here’s a thought: inject known fake data into a selected, known location and see where it goes. Tracking and tracing data we purposely inject avoids the risk of leaking real data, and we would know exactly what to look for on its way out of the network, using, for example, a deployed DLP system. Simple. And this capability is within reach.
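As a rough sketch of the dye-injection idea (the function names and record formats here are hypothetical illustrations, not any vendor’s actual tooling): generate a decoy record seeded with a unique token that cannot occur in real data, then scan egress records, such as a DLP system’s outbound log, for that token.

```python
import hashlib
import secrets


def make_decoy(label: str) -> tuple[str, str]:
    """Build fake document text seeded with a unique, trackable token.

    The token is the 'dye': it never occurs in real data, so any sighting
    of it outside the network is unambiguous evidence of leakage.
    """
    token = hashlib.sha256(secrets.token_bytes(16)).hexdigest()[:24]
    body = (
        f"CONFIDENTIAL DRAFT - {label}\n"
        f"Account reference: {token}\n"
        "Projected quarterly figures attached.\n"
    )
    return token, body


def scan_egress(egress_records: list[str], tokens: set[str]) -> set[str]:
    """Return the injected tokens observed in outbound traffic records."""
    return {t for t in tokens for rec in egress_records if t in rec}
```

In a real deployment the decoy would be a believable rendered document and the scan would run continuously inside the DLP pipeline; the point is that knowing ground truth (the token) makes leakage directly countable.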

Deception in depth may provide measurement, too

A new category of deceptive security has recently emerged. (Yep, listen carefully to the newcomers at RSA.) Decoy technology is leading a new trend in securing enterprise networks, and we might be able to get double duty from it. Decoy documents are one way to implement a defense-in-depth (deception-in-depth?) strategy, providing a new opportunity to detect attackers in their early stages of probing and information gathering, but they may also serve as a key to measuring data leakage.

Strategically placed, highly believable bogus documents could serve to measure exfiltration. (Who wouldn’t want a two-fer?) Red-teaming experiments can be automated to store decoys in locations that appear to hold valuable documents. Decoys that leak and appear on public websites, or better yet, beaconized decoy documents that signal when remotely opened, provide immediate indicators of data loss, and clear measurements of whether, and how much, a security architecture leaks over time.
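Beacon callbacks can be turned into the longitudinal measurement described above. A minimal sketch (the data shapes are assumptions for illustration, not a real beacon product’s API) computes the fraction of planted decoys whose beacons have fired:

```python
from datetime import date


def leak_rate(planted: set[str], beacon_events: list[tuple[str, date]]) -> float:
    """Fraction of planted decoy documents whose beacons have fired.

    planted: IDs of decoy documents placed on the network.
    beacon_events: (decoy_id, date) pairs reported when a decoy is
    remotely opened; repeat opens of the same decoy count once.
    """
    leaked = {doc_id for doc_id, _ in beacon_events if doc_id in planted}
    return len(leaked) / len(planted) if planted else 0.0
```

Sampling this rate on a regular schedule yields the trend line a CISO actually needs: if it fails to fall after a new best-of-breed control is deployed, the architecture’s leakage has measurably not improved.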

So, how secure are you, really? Injecting dye with decoy documents may provide a continuous answer.


Salvatore Stolfo is a tenured Columbia University professor, teaching computer science since 1979. He is the co-founder and CTO of Allure Security, a DARPA-funded cybersecurity startup specializing in data protection and the prevention of data breaches.

Dr. Stolfo is a people-person. And that makes him unique in a field where folks focus on making machines. As professor of artificial intelligence at Columbia University, Dr. Stolfo has spent a career figuring out how people think and how to make computers and systems think like people. Early in his career he realized that the best technology adapts to how humans work, not the other way around.

Dr. Stolfo has been granted over 75 patents and has published over 230 papers and books in the areas of parallel computing, AI knowledge-based systems, data mining, computer security and intrusion detection systems. His research has been supported by numerous government agencies, including DARPA, NSF, ONR, NSA, CIA, IARPA, AFOSR, ARO, NIST, and DHS.

See his full academic bio at Columbia University for more background.

The opinions expressed in this blog are those of Salvatore Stolfo and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.