The cybersecurity risk metrics market has exploded; at least half a dozen companies now offer real-time risk metrics for enterprises. Insurance carriers will collect upwards of $3 billion in premiums this year. In my recent analysis of this $20 billion market, it was evident that the rise of adversaries, boardroom pressure and financial losses are driving a whole new world of underwriters, brokers and consultants. CISOs are now expected to answer somewhat challenging questions from the C-level and the boardroom, such as:

- Are we secure? If so, just how secure are we?
- Could what happened to company xyz happen to us? Are we getting better over time?
- JP Morgan Chase just announced they will deploy $250 million in security. Are we spending enough? Should we spend more?

While there is consensus that we need to measure risk, several of my security super friends told me that it's easier said than done. Richard Seiersen, vice president of Trust and CISO at Twilio, wants to simplify this debate. A soft-spoken, classically trained guitarist and co-author of the recently published book "How to Measure Anything in Cybersecurity Risk," Seiersen advocates risk management using probabilistic thinking and probabilistic programming. He has spent two decades in a left-brained, analytics-driven universe, most recently as general manager of cybersecurity and privacy at GE Healthcare.

[Photo: Rich Seiersen, VP of Trust and CISO, Twilio]

I sat down with him at Twilio's offices in San Francisco to understand why odd-sounding things like Monte Carlo simulations, Bayesian analysis and forecasting with small data are critical for managing cyber risk.

Is it really that difficult to measure cyber risk?

Richard Seiersen: Risks have been measured in far more complex situations: flooding, droughts, military logistics and such. So yes, we can measure cyber risk.
But it starts with the right perspective. We surveyed 171 security professionals on statistics literacy and came away with interesting findings. Leaders with favorable outlooks on predictive analytics scored significantly better on statistical literacy than their naysaying peers.

[Chart: Stat Literacy & Attitude]

In fact, many of the objections to quantitative approaches to cybersecurity are highly correlated with low stats literacy. Meaning, the objections to quant approaches to cyber risk aren't coming from a place of knowledge. The good news is that there is a strong appetite for more scientific approaches to measuring risk. The challenge seems to be in the "how."

My security team is already stretched super thin and overwhelmed. Should they now take a course in statistics?

Probabilistic thinking is a must! We face uncertain adversaries, both sentient and artificially intelligent, who exploit yet-to-be-discovered or unacknowledged vulnerabilities. The lingua franca of uncertainty is probability theory. When used correctly, probability helps us retain our uncertainty and avoid excessive overconfidence. As Richard Feynman once said, "The first principle is that you must not fool yourself, and you are the easiest person to fool." We can be more scientific and mature, and a basic understanding of probability can go a long way. Behavioral economist and Nobel laureate Daniel Kahneman reminds us that probabilistic thinking is trainable. In fact, you can calibrate your security team so that when they say they are 90 percent confident of a certain adversarial activity, they will be right close to 90 percent of the time.

But we don't have enough data. Where do we begin?

That's another fallacy we live with: that we don't have enough data.
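Calibration of this kind can be checked directly: ask each analyst for a 90 percent confidence interval on a set of quantities whose true values you know, then score how often the truth lands inside the interval. A minimal sketch of that scoring, with invented interval data for illustration:

```python
# Score how well-calibrated an analyst's 90% confidence intervals are:
# a calibrated analyst's intervals should contain the true value ~90% of the time.

def calibration_score(intervals, truths):
    """intervals: list of (low, high) 90% CI estimates; truths: the actual values."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(truths)

# Hypothetical calibration exercise (made-up estimates and answers).
estimates = [(10, 50), (100, 900), (0, 5), (2, 8), (30, 300)]
actuals = [42, 450, 7, 3, 60]
print(f"Hit rate: {calibration_score(estimates, actuals):.0%}")  # 4 of 5 -> 80%
```

An analyst whose hit rate sits well below 90 percent is overconfident (intervals too narrow); training and feedback move that number toward the stated confidence.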
My coauthor and measurement expert, Doug Hubbard, is famous for saying, "You have more data than you think, and you need less data than you think." We could begin simply by making better bets, by being more like bookies. I aspire to make my security team into probabilistic thinkers, aka "security bookies." Bookies get near-instantaneous, ongoing feedback on their predictions, which is the best form of calibration. If my security team can make better predictions of the likelihood of risks, we are significantly better off.

And note, you make these bets anyway when you use unproven qualitative high/medium/low methods. That said, our studies have shown that security professionals can still be roughly 20 percent inconsistent in their forecasts. Fortunately, we can reduce this inconsistency by building models that outperform the experts. We call this "making security robots."

Once my security team starts to assign probabilities and likelihoods to certain risks, how do we build upon this?

Two paths can emerge. First, develop a mindset of "return on control." For example, we see in the table that Risk 1 has an impact of anywhere from $2 million to $40 million. We can then start to think about (a) whether we can effectively mitigate these risks and (b) what it costs to mitigate them.

How best should I think about budgeting my security spend?

Your security budget is an expression of your risk tolerance. By spending less, we are merely stating that we are open to taking on a higher level of risk. What should we mitigate? What should we track? Obviously, we cannot tackle everything, nor do we have an infinite budget.

In the following "make believe" table, we can look at various risks and prioritize them based on our particular situation. For example, if we hold sensitive PII data, the losses could have wider implications. By controlling or mitigating DB access, we can achieve a very high return on control.
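The "return on control" idea can be sketched with a small Monte Carlo simulation. The numbers below are hypothetical (a 10 percent annual event probability, the $2M-$40M impact range read as a 90 percent confidence interval on a lognormal loss, a made-up control cost and mitigated probability); the structure, not the figures, is the point:

```python
import math
import random

random.seed(0)

def simulate_annual_loss(p_event, ci_low, ci_high, trials=100_000):
    """Monte Carlo expected annual loss for one risk.
    Impact is lognormal, parameterized so (ci_low, ci_high) is a 90% CI."""
    mu = (math.log(ci_low) + math.log(ci_high)) / 2
    sigma = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.645)  # 90% CI spans ±1.645 sd
    total = 0.0
    for _ in range(trials):
        if random.random() < p_event:          # does the event occur this year?
            total += random.lognormvariate(mu, sigma)
    return total / trials

# Hypothetical "Risk 1": 10% annual chance, impact 90% CI of $2M-$40M.
baseline = simulate_annual_loss(0.10, 2e6, 40e6)
mitigated = simulate_annual_loss(0.02, 2e6, 40e6)   # assume control cuts likelihood to 2%
control_cost = 250_000                              # assumed annual cost of the control

return_on_control = (baseline - mitigated - control_cost) / control_cost
print(f"Expected annual loss before control: ${baseline:,.0f}")
print(f"Expected annual loss after control:  ${mitigated:,.0f}")
print(f"Return on control: {return_on_control:.1f}x")
```

Running this for each row of the table gives a common currency (expected dollars saved per dollar of control spend) for ranking controls against each other.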
And we can choose to budget for and spend only on the risks that have a very high return. Another way to think about risk is along a curve of outcomes. This can help you anticipate your expected total loss and, hence, plan your budget.

In the long run, we should aspire to build expert systems that improve our probabilistic outcomes. We need to build models that retain knowledge and reduce error, and over time we encode such models using probabilistic programming. These approaches model, scale and preserve the expert's opinion rather than obscuring it in a 1-10 or high-medium-low (HML) scale.

How can such models be developed and encoded?

That's the promise of artificial intelligence (AI). The most rudimentary models often outperform the best subject matter experts. Simply put, we first develop a data-driven model that encodes our collective intelligence about a given risk. That is what we have done to the right. But we can go even further, allowing these models to learn from machine data coming from public (OSINT) and internal sources. This evolves into an amalgam of decision science and data science. While rudimentary, we are starting to build "risk robots," or perhaps more appropriately "security cyborgs." We say cyborgs because the human element is retained. In some cases, this system can take actions on our behalf, just like expert systems. This is what we call "prescriptive analytics" in the book, and that's the future.

Any parting thoughts?

We are naïve. We can do better.
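The "curve of outcomes" Seiersen describes is commonly drawn as a loss exceedance curve: for each dollar threshold, the probability that total annual losses exceed it. A sketch over a hypothetical three-risk portfolio (all probabilities and impact ranges are invented for illustration):

```python
import math
import random

random.seed(1)

# Hypothetical portfolio: (annual probability, impact 90% CI low, 90% CI high).
risks = [(0.10, 2e6, 40e6), (0.25, 0.5e6, 5e6), (0.05, 10e6, 100e6)]

def simulate_year(risks):
    """One simulated year: sum a lognormal loss for each risk that occurs."""
    total = 0.0
    for p, lo, hi in risks:
        if random.random() < p:
            mu = (math.log(lo) + math.log(hi)) / 2
            sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)
            total += random.lognormvariate(mu, sigma)
    return total

years = [simulate_year(risks) for _ in range(50_000)]
expected_loss = sum(years) / len(years)
print(f"Expected total annual loss: ${expected_loss:,.0f}")

# Loss exceedance curve: chance that annual losses exceed each threshold.
for threshold in (1e6, 10e6, 50e6):
    prob = sum(y > threshold for y in years) / len(years)
    print(f"P(annual loss > ${threshold/1e6:.0f}M) = {prob:.1%}")
```

Reading the curve against your risk tolerance (for example, "we accept at most a 5 percent chance of losing more than $10M in a year") turns the budget conversation into a concrete target rather than a gut feel.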