


by Pete Lindstrom, Spire Security

Building out your strategic security metric framework

May 12, 2011 · 8 mins

To define your framework, take Willie Sutton's advice and go where the money is. In our case, risk and security lie with the IT and information assets in our environment.

For years now, security professionals have agreed that a security metrics program is an increasingly important tool for managing the security posture of an environment. We like to cite too-true clichés like “you can’t manage what you don’t measure” and sing “Kumbaya” together about the virtues and benefits of such programs. And yet there really aren’t many success stories out there.

Some programs focus heavily on operational metrics. These enterprises are managing their cost centers in traditional business ways, comparing the “work” output of a security group to the “resource” inputs: usually personnel, time, and costs. So they learn how many security professionals it takes to change a light bulb, but not whether it brings any illumination to the program. To be sure, these metrics have value in the same way any cost-center management tool does. We try to reduce our cost per unit of work just as a matter of principle.

But these metrics don’t help us answer that one overriding, agonizing, mildly annoying question we usually get during a chance encounter with a senior executive in the elevator: “So, are we secure?”

Sure, it’s not that simple, and that’s the point. In order to be invited to discuss strategic matters with senior management, we have to be able to competently answer that question. It’s the one question to rule them all, and we know it.

Many security professionals think they are going to answer that question by putting together a metrics program that is a bunch of numbers related to some control framework like COBIT or ISO 27002. At best, the individual metrics are organized alphabetically. At worst, they are recapitulated on a page every month and could be used as a pseudo-random-number generator to seed our encryption algorithms. But these are not the numbers to put in front of senior management.

What the business executive wants to hear is that quick and dirty elevator pitch. What are these numbers and why should I care? So the successful metrician will compile a narrative that describes the numbers:

“Last month, our IT and information assets generated $20 million in revenue in support of 15,000 people using 350 applications. To accomplish this feat, over 32 million connections were attempted across our systems and we applied specific control measures an average of 2.4 times per connection to ensure the completeness and accuracy of our transactions. As a result, over 4 million connections were blocked instantly for not meeting our basic requirements (with 99.75 percent success rate) and we identified 1,700 suspect connections that required further analysis. We ultimately determined that five of those 1,700 were attempted intrusions which we subsequently acted upon according to established procedures. There were no losses associated with the incidents.”
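A narrative like this is just a handful of ratios computed from raw monthly counts. Here is a minimal sketch in Python; the function name and all figures are illustrative, not drawn from any specific toolset:

```python
def monthly_summary(revenue, users, apps, connections,
                    control_checks, blocked, suspects, intrusions):
    """Roll raw monthly counts up into the strategic ratios the narrative reports."""
    return {
        # Average number of control measures applied per connection
        "controls_per_connection": round(control_checks / connections, 1),
        # Share of connections blocked instantly for failing basic requirements
        "blocked_pct": round(100 * blocked / connections, 2),
        # Suspect connections that required further analysis
        "suspects": suspects,
        # Confirmed attempted intrusions acted upon per procedure
        "confirmed_intrusions": intrusions,
    }

# Hypothetical counts loosely echoing the narrative above
summary = monthly_summary(
    revenue=20_000_000, users=15_000, apps=350,
    connections=32_000_000, control_checks=76_800_000,  # ~2.4 per connection
    blocked=4_000_000, suspects=1_700, intrusions=5,
)
print(summary)
```

The point of the roll-up is that each number in the elevator pitch traces back to a countable event, not an opinion.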

To define your framework, take Willie Sutton’s advice and go where the money is. In our case, risk and security lie with the IT and information assets in our environment. System activity between sources and destinations, in the form of connections and transactions, drives the value in the organization. As security professionals, we apply controls to those transactions. Sometimes control failures result in incidents, and those incidents carry losses. Each of these categories requires a closer look.

IT and information asset value is why technology exists

Technology exists to add value to an organization, either by increasing revenue or decreasing costs. This is a simple mission that is executed in many ways. Sometimes, it provides the mechanism for transactions with customers and partners around the world. Sometimes it tracks the income and outgo of our financial accounts. Sometimes, it simply exists to make us more productive.

Capturing value-based metrics lies more in the realm of the CIO than anyone else, but a risk manager cannot make good decisions without understanding value. Most executives and managers have an intuitive notion of value grounded in the systems and applications throughout the environment.

Value is not something that is universal or absolute. Our notion of value varies in the same way that the amount individuals are willing to pay for Red Sox vs. Yankees tickets varies, or the amount businesses are willing to pay for other businesses varies. The most common ways to assess value in organizations are to look at the ability to generate revenue or to reduce costs.

Perhaps the easiest way to determine minimum value in our environment is to look at IT budgets. It stands to reason that our IT and information assets are worth at least as much as we are willing to spend on them, so a budget figure provides a minimum baseline to work with.

Transactions are where the risk is

The first thing we must acknowledge is that there are no incidents without activity, and therefore risk is implicit in the activity. A generic transaction occurs when a connection between a source and a target completes. At the network layer, these are flows. They can also be user sessions, application messages, database queries, or any other activity with a source and destination.

Transactions included in the event set must meet one key requirement: they must be capable of both desirable and undesirable outcomes within the realm of computer security. It is the undesirable outcomes, then, that provide the historical frequencies that may be useful in understanding risk. At the very least, they act as lagging indicators of risk for analysis and reporting.
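Those historical frequencies are straightforward to tally once each transaction is labeled with its outcome. A minimal sketch, where the event records and labels are entirely hypothetical:

```python
from collections import Counter

# Hypothetical event records: (transaction_type, outcome) pairs, where the
# outcome is "ok" or "bad". In practice these would come from log or SIEM data.
events = [
    ("network_flow", "ok"), ("network_flow", "bad"),
    ("db_query", "ok"), ("db_query", "ok"),
    ("user_session", "bad"), ("network_flow", "ok"),
]

def undesirable_rate(events):
    """Historical frequency of undesirable outcomes per transaction type."""
    totals, bad = Counter(), Counter()
    for ttype, outcome in events:
        totals[ttype] += 1
        if outcome == "bad":
            bad[ttype] += 1
    return {t: bad[t] / totals[t] for t in totals}

rates = undesirable_rate(events)
print(rates)
```

A rate computed this way is exactly the lagging indicator the text describes: it summarizes what has already happened, stratified by the kind of activity that produced it.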

Control measures provide security that reduces risk

Security professionals make decisions about applying controls based on consensus beliefs, regulatory requirements, and personal conclusions. As the sessions and transactions are occurring throughout an environment, we apply controls that evaluate their content and the context to assess whether they will result in a desirable or undesirable outcome. The controls then are programmed to make decisions that either allow or block the activity.

The rub with control measures is in the decisions made. There are four possible outcomes of a control application: true positive, true negative, false positive, and false negative. Our goal is to properly characterize every transaction, but practically speaking this is impossible. Therefore, managing false positives — which inhibit legitimate activity, increase our cost burden, and reduce productivity — and false negatives — which allow inappropriate activity, cause incidents and result in corresponding losses — is the primary function of any security unit.
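Those four outcomes form a standard confusion matrix, and the two rates worth managing fall out of it directly. A minimal sketch, with hypothetical counts that loosely echo the earlier narrative:

```python
def control_quality(tp, tn, fp, fn):
    """False-positive and false-negative rates from the four control outcomes."""
    return {
        # Share of legitimate activity the control wrongly blocked
        # (inhibits business, increases cost burden, reduces productivity)
        "false_positive_rate": fp / (fp + tn),
        # Share of inappropriate activity the control wrongly allowed
        # (causes incidents and the corresponding losses)
        "false_negative_rate": fn / (fn + tp),
    }

# Hypothetical counts: 1,695 correctly flagged, 5 missed intrusions,
# 10,000 legitimate connections wrongly blocked out of ~28M clean ones.
q = control_quality(tp=1_695, tn=27_990_000, fp=10_000, fn=5)
print(q)
```

Tracking both rates over time is what lets a security unit show it is trading off business friction against missed attacks deliberately rather than by accident.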

Incidents help us measure success and failure

Every organization experiences incidents. Luckily, many are routine and more a nuisance than a real threat. They can occur as the result of a control failure (classified as a false negative in our control measures) or an omission.

It is sometimes challenging to identify false negatives, as reports indicate that attackers can operate undetected for extended periods of time — weeks and months, sometimes more. That said, in the same way a tree falling in the woods with nobody around makes no (significant) noise, significant incidents are eventually identified and useful in assessing risk and strength of controls.

Costs and losses measure the consequences

The final test for any security program is to assess the damage. This damage comes in the form of costs (even security costs) and losses. After all, as a cost center every security program manager should be looking for ways to reduce costs without reducing effectiveness.

Historically, response and recovery costs have made up the bulk of losses, but that could simply be because they are the most easily quantified. Direct costs such as legal fees and regulatory fines are straightforward as well. Lost-revenue calculations are more difficult, but pioneering techniques are available for more mature organizations.

So the framework of IT and information asset value, transactions (events), controls, incidents, and losses provides a constructive yet flexible way to classify all activity across the network environment. Ratios can be created to ensure apples-to-apples comparisons. The information can be stratified by geography, business unit, technical platform, or other dimension, and compared over time to gain insight into the operating environment.
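As a sketch of that stratification, a single ratio (blocked connections as a share of total connections) can be computed per business unit; the unit names and counts here are hypothetical:

```python
# Hypothetical per-unit counts; real figures would come from the same
# monthly roll-up that feeds the executive narrative.
units = {
    "finance":   {"connections": 8_000_000,  "blocked": 1_200_000},
    "retail":    {"connections": 20_000_000, "blocked": 2_400_000},
    "corporate": {"connections": 4_000_000,  "blocked": 400_000},
}

def blocked_ratio_by_unit(units):
    """Blocked connections as a share of total connections, per business unit."""
    return {name: d["blocked"] / d["connections"] for name, d in units.items()}

ratios = blocked_ratio_by_unit(units)
for name in sorted(ratios):
    print(f"{name}: {ratios[name]:.1%}")
```

Because each ratio is normalized by its own unit's activity, a busy retail division and a small corporate office can be compared on the same scale, which is the apples-to-apples property the framework is after.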

As this information develops, one can also consider approaches to mitigate and/or remediate the control environment for higher levels of efficiency and effectiveness:

“Last month’s activity brought to light some opportunities for improvement. We revisited our policies associated with the 4 million blocked connections, determined that approximately 10,000 (0.25 percent) should have been allowed, and made a configuration change to address the issue. In addition, the policies associated with the 1,695 initially suspect connections were evaluated, and changes to our security posture were made that should reduce these false positives by 50 percent. To address the five incidents, we have instituted remedial training for the individuals involved and instrumented the affected systems with new means of intrusion detection.”

“Are we secure?” is the trickiest question of all, but also the most important one. With the right framework and mindset, the security professional can answer it at a strategic level that non-technical folks can readily understand, while keeping enough depth in reserve to show that the answer itself is a rolling one.

Pete Lindstrom is Research Director for Spire Security, an industry analyst firm providing analysis and research in the information security field. He has held similar industry analyst positions at Burton Group and Hurwitz Group.