The cybersecurity risk metrics market has exploded: at least half a dozen companies now offer real-time risk metrics for enterprises, and insurance carriers will collect upwards of $3 billion in premiums this year. In my recent analysis of this $20 billion market, it was evident that the rise of adversaries, boardroom pressure and financial losses are driving a whole new world of underwriters, brokers and consultants. CISOs are now expected to answer challenging questions from the C-suite and the boardroom, such as:
- Are we secure? If so, just how secure are we?
- Could what happened to company xyz happen to us? Are we getting better over time?
- JP Morgan Chase just announced they will deploy $250 million in security. Are we spending enough? Should we spend more?
While there is consensus that we need to measure risk, several of my security super friends told me that it's easier said than done. Richard Seiersen, vice president of trust and CISO at Twilio, wants to simplify this debate. A soft-spoken, classically trained guitarist and co-author of the recently published book "How to Measure Anything in Cybersecurity Risk," Seiersen advocates risk management using probabilistic thinking and probabilistic programming. He has spent two decades in a left-brained, analytics-driven universe, most recently as general manager of cybersecurity and privacy at GE Healthcare.
I sat down with him at Twilio's offices in San Francisco to understand why odd-sounding things like Monte Carlo simulation, Bayesian analysis and forecasting with small data are critical for managing cyber risk.
Is it really that difficult to measure cyber risk?
Richard Seiersen: Risks have been measured in far more complex situations - flooding, droughts, military logistics and such. So yes, we can measure cyber risk. But it starts with the right perspective.
We surveyed 171 security professionals on statistics literacy and came away with interesting findings. Leaders with favorable outlooks on predictive analytics scored significantly better on statistical literacy than their naysaying peers.
In fact, many of the objections to quantitative approaches to cyber security are highly correlated with low stats literacy. Meaning, the objections to quant approaches to cyber risk aren’t coming from a place of knowledge. The good news is that there is a strong appetite to adopt more scientific approaches in measuring risk. The challenge seems to be in the "how."
My security team is already stretched super thin and overwhelmed. They should now take a course in statistics?
Probabilistic thinking (logic) is a must! We face both sentient and artificially intelligent adversaries who exploit vulnerabilities that are yet to be discovered or acknowledged. The lingua franca of uncertainty is probability theory. When used correctly, probability helps us retain our uncertainty and avoid overconfidence. As Richard Feynman once said, "The first principle is that you must not fool yourself, and you are the easiest person to fool." We can be more scientific and mature.
And a basic understanding of probability can go a long way. Behavioral economist and Nobel laureate Daniel Kahneman reminds us that probabilistic thinking is a trainable skill. In fact, you can calibrate your security team so that when they say they are 90 percent confident of a certain adversarial activity, they are right roughly 90 percent of the time.
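Calibration of this kind can be checked with almost no tooling. A minimal sketch (the forecast records below are invented for illustration) that compares an analyst's stated confidence against the observed hit rate:

```python
# Hypothetical calibration check: did "90 percent confident" predictions
# actually come true ~90 percent of the time?
# Each record: (stated confidence, whether the prediction came true).
forecasts = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

stated = sum(c for c, _ in forecasts) / len(forecasts)
observed = sum(1 for _, hit in forecasts if hit) / len(forecasts)
print(f"stated confidence: {stated:.0%}, observed hit rate: {observed:.0%}")
```

An analyst whose observed hit rate sits well below their stated confidence is overconfident; calibration training narrows that gap.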
But what if we don’t have enough data? Where do we begin?
That’s another fallacy we live with: that we don’t have enough data. My co-author and measurement expert, Doug Hubbard, is famous for saying, “We have more data than we need, and we actually need far less than we think.” We could begin by simply making better bets, being more like bookies. I aspire to make my security team into probabilistic thinkers, aka "security bookies." Bookies get near-instantaneous, ongoing feedback on their predictions. This is the best form of calibration. If my security team can make better predictions about the likelihood of risks, we are significantly better off.
And note, you will make these bets anyway when you use unproven qualitative "high-medium-low" methods. That said, our studies have shown that security professionals can still be roughly 20 percent inconsistent in their forecasts. Fortunately, we can reduce this inconsistency by building models that outperform the experts. We call this "making security robots."
Once my security team starts to assign probabilities to certain risks, how do we build on this?
Two paths can emerge. The first is to develop a mindset of “return on control.” For example, we see in the table that Risk 1 has an impact of anywhere from $2 million to $40 million. We could then start to think about (a) whether we can effectively mitigate these risks and (b) what it costs to mitigate them.
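A range like "$2 million to $40 million" can be turned into an expected annual loss with a small Monte Carlo simulation. The sketch below, in the spirit of the book's approach, treats the range as a 90 percent confidence interval on a lognormal impact distribution; the 10 percent annual likelihood is an invented assumption for illustration:

```python
import math
import random

random.seed(7)

# Assumptions (illustrative, not from the article):
low, high = 2e6, 40e6   # 90% CI on impact if the risk event occurs
p_event = 0.10          # assumed annual probability the event occurs

# Fit a lognormal whose 5th/95th percentiles match the CI bounds.
mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 90% CI spans ±1.645 sd

trials = 100_000
losses = [random.lognormvariate(mu, sigma) if random.random() < p_event else 0.0
          for _ in range(trials)]
expected_annual_loss = sum(losses) / trials
print(f"expected annual loss: ${expected_annual_loss:,.0f}")
```

The expected annual loss is the number you can then weigh against the cost of a mitigating control.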
How best should I think of budgeting for my security spend?
Your security budget is an expression of your risk tolerance. By spending less, we are merely stating that we are open to taking on a higher level of risk. What should we mitigate? What should we track? Obviously, we cannot tackle everything, nor do we have an infinite budget.
In the following “make-believe” table, we can look at various risks and prioritize them based on our particular situation. For example, if we hold sensitive PII data, the losses could have wider implications. By controlling and mitigating database access, we can achieve a very high return on control. And we can choose to budget for, and spend on, only the risks that offer a very high return.
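The prioritization arithmetic itself is simple. A sketch with invented, "make-believe" numbers of the same flavor as the table: return on control is the expected loss a control avoids, divided by what the control costs.

```python
# Hypothetical figures for illustration only.
risks = [
    # (name, expected annual loss, control cost, fraction of loss avoided)
    ("DB access (PII)", 4_000_000, 250_000, 0.90),
    ("Phishing",        1_200_000, 300_000, 0.50),
    ("Legacy VPN",        400_000, 500_000, 0.70),
]

for name, loss, cost, reduction in risks:
    roc = (loss * reduction) / cost  # return on control
    print(f"{name:16s} return on control: {roc:.1f}x")
```

In this made-up example, the database-access control returns over 14x its cost, while the legacy VPN control returns less than its cost and would sit at the bottom of the budget list.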
Another way to think about risk is along a curve of outcomes. This can help you anticipate your expected total loss and hence plan your budget.
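That curve of outcomes is usually a loss exceedance curve: for a portfolio of risks, the probability that total annual loss exceeds each threshold. A minimal Monte Carlo sketch, with entirely invented risk parameters, following the same lognormal-from-90%-CI convention as above:

```python
import math
import random

random.seed(7)

# Invented portfolio: (annual probability, 90% CI low, 90% CI high).
risks = [
    (0.10, 2e6, 40e6),
    (0.30, 1e5, 5e6),
    (0.05, 5e6, 100e6),
]

def sample_year():
    """Simulate one year: sum the losses of every risk that 'fires'."""
    total = 0.0
    for p, low, high in risks:
        if random.random() < p:
            mu = (math.log(low) + math.log(high)) / 2
            sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
            total += random.lognormvariate(mu, sigma)
    return total

years = [sample_year() for _ in range(50_000)]
for threshold in (1e6, 10e6, 50e6):
    p_exceed = sum(1 for t in years if t > threshold) / len(years)
    print(f"P(annual loss > ${threshold / 1e6:.0f}M) = {p_exceed:.1%}")
```

Reading the curve at your risk tolerance (say, "no more than a 5 percent chance of losing over $50M") tells you how much mitigation the budget needs to buy.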
In the long run, we should aspire to build expert systems that improve our probabilistic outcomes. We need to build models to retain knowledge and reduce error. Over time, we encode such models using probabilistic programming. These approaches model, scale and preserve the expert's opinion rather than obscuring it in a 1-10 or high-medium-low (HML) scale.
How can such models be developed and encoded?
That’s the promise of artificial intelligence (AI). Even rudimentary models often outperform the best subject-matter experts. Simply put, we first develop a data-driven model that encodes our collective intelligence about a given risk. That is what we have done to the right. But we can go even further, allowing these models to learn from machine data coming from public (OSINT) and internal sources. This evolves into an amalgam of decision science and data science.
We are starting to build rudimentary “risk robots,” or perhaps more appropriately “security cyborgs.” We say cyborgs because the human element is retained. In some cases, this system can take actions on our behalf, just like expert systems. This is what we call “prescriptive analytics” in the book, and that’s the future.
Any parting thoughts?
We are naïve. We can do better.
This article is published as part of the IDG Contributor Network.