roger_grimes
Columnist

Does your cyber insurance cover social engineering? Read the fine print

Feature
May 15, 2019 | 6 mins
Risk Management, Security, Social Engineering

Some cyber insurance policies will pay only a small fraction of damages if an attacker used social engineering. Here's how to estimate the risk.

[Image credit: jauhari1 / Getty Images]

Cybersecurity insurance is quickly becoming a must-have risk offset for businesses of every size. Already one-third of U.S. businesses have cybersecurity insurance, and the market is expected to grow to $14 billion by 2022.

Insurance companies are making bank. In 2017, cybersecurity insurance carriers paid out only 32% of premiums, and this was less than they paid out in the prior year (48%). The cost for most businesses is relatively low, usually just 1% to 3% of what businesses pay for other insurance coverages. Business leaders tell me that their cost for cybersecurity insurance ranges from $5,000 to $25,000 for multi-millions of dollars in coverage. It’s a small cost to pay for big coverage. Or is it?

What is a social engineering reduction clause?

I’m now hearing about big cybersecurity insurance policies that have “social engineering” reduction clauses. Essentially, if your organization experiences a cybersecurity incident, and it involves a social engineering attack vector, then the expected payout is reduced significantly from what is promised in the full policy. As an example, one city government told me they had a $50 million cybersecurity insurance policy, but if a claim involved social engineering, then it only paid out a maximum of $200,000. (I’m assuming the deductible applies toward that figure as well.)

If your cybersecurity insurance policy includes such a clause, this is huge!

Most cybersecurity insurance policies (there are over 170 cybersecurity insurance vendors) don’t include this clause, but its appearance in some should have the whole cybersecurity world paying attention.

Social engineering attacks are involved in 70% to 90% of all successful data breaches. An attack can come in the form of a phishing email (most common), a malicious web page that instructs the user to install something, a Trojan horse email, or a fraudulent phone call. It’s hard to find a publicly known attack that didn’t involve some form of social engineering component, either as the primary initial vector or as part of the attack.

This means that the $50 million policy is likely to be, in reality, a $200,000 policy. Your potential damage is likely to be more than the smaller payment ceiling. How do I know? Because if you didn’t think there was a decent risk of having damages closer to the much higher ceiling, you wouldn’t contract for it. The question is what is the right insurance claim ceiling and does it make sense to allow a “claw down” if social engineering is involved?

I get why cybersecurity insurance companies want the social engineering clause. It’s designed to save them huge amounts of money. Customers, who aren’t aware of how likely it is that social engineering is involved in a cybersecurity incident, don’t realize that up to 90% of their coverage is being thrown away.

How to tell what coverage you’re really getting from cyber insurance

I don’t want you to take my word that social engineering is involved in 70% to 90% of all successful malicious data breaches…and the vast majority of ransomware attacks (which is often what cybersecurity insurance companies pay for). The risk of social engineering attacks to your specific organization could be different from the global average.

You need to figure out how often a cybersecurity incident is successful in your environment due to social engineering versus the other types of attack vectors (e.g., unpatched software, eavesdropping, misconfiguration, user error, insider attack). Most organizations will probably agree that most of their major cybersecurity incidents were related to social engineering.

Where I see many companies make a mistake is in how they classify a cybersecurity incident. Most think that for a cyber compromise to be called a cybersecurity incident, it has to be major and cause significant damage. The reality is that anytime a hacker or malware gets past your defenses, even if only for a few minutes, you have a cybersecurity incident on your hands. You just might not know about it.

I encourage organizations to look at every malware program that gets past their initial defenses. If your cybersecurity defenses stop all malware at the exact second it tries to get into your environment, that’s great. You have zero risk. If that same malware program dwells for minutes to days before it is detected (no anti-malware scanner is 100% accurate), then you’ve got a cybersecurity incident. It may or may not become something major and costly, but it is a hole in your defenses and an increased risk to your environment.

Start measuring dwell time — how long all malware is in your environment before being detected and removed. You can easily capture how long any malware program is in your environment before it is detected and removed by using an application control program in audit-only mode and then comparing detection times and dates to the first time the malicious program was executed. This won’t capture all maliciousness in your environment, but it is a huge part of the risk in most environments.
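As a minimal sketch of that comparison, assuming you can export first-execution and detection timestamps from your application control tool (the record layout and sample names here are illustrative, not from any specific product):

```python
from datetime import datetime

# Hypothetical audit-mode records: when each malicious program first ran
# versus when a scanner finally detected and removed it.
incidents = [
    {"sample": "invoice.exe",
     "first_executed": datetime(2019, 5, 1, 9, 15),
     "detected": datetime(2019, 5, 1, 9, 16)},
    {"sample": "update.scr",
     "first_executed": datetime(2019, 4, 28, 14, 0),
     "detected": datetime(2019, 5, 2, 8, 30)},
]

def dwell_time_minutes(incident):
    """Minutes the sample lived in the environment before removal."""
    delta = incident["detected"] - incident["first_executed"]
    return delta.total_seconds() / 60

for inc in incidents:
    print(f"{inc['sample']}: dwelled {dwell_time_minutes(inc):.0f} minutes")
```

Anything that dwelled beyond a minute or two is worth a root-cause look, per the argument above.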

When you find malware that dwelled for longer than a few minutes in your environment before it was detected and removed, try to figure out how that malware got into your environment. Was it social engineering, unpatched software or some other attack vector? Then calculate how big of a percentage social engineering was compared to the others.
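Once each incident has a root-cause label, the percentage calculation is a simple tally. A sketch, with made-up labels standing in for your own incident data:

```python
from collections import Counter

# Hypothetical root-cause labels, one per incident that dwelled past
# a few minutes before detection.
root_causes = [
    "social engineering", "social engineering", "unpatched software",
    "social engineering", "misconfiguration", "social engineering",
]

counts = Counter(root_causes)
total = len(root_causes)
for vector, n in counts.most_common():
    print(f"{vector}: {n}/{total} ({n / total:.0%})")
```

The social engineering share that this prints is the number you carry into the insurance math below.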

Most organizations will find that social engineering was responsible for most successful defense bypasses. That’s because most malware infections don’t lead to full-blown data breaches, but they could.

Then take the percentage of successful social engineering attacks in your environment (as compared to the other attack vectors) and use it to calculate what your cybersecurity insurance really equates to. Here’s an example:

Suppose you have a $50 million cybersecurity incident policy with a $200,000 ceiling for social engineering, and social engineering is responsible for 90% of your cybersecurity incidents. The risk-based payout would look something like this: $50,000,000 x 10% (the share not due to social engineering) plus $200,000 x 90% (for the likely social engineering payouts). This gives $5,000,000 plus $180,000, which equals $5,180,000.

So, whatever amount you are paying for that cybersecurity incident insurance has a risk-adjusted payout ceiling of $5.18 million, not $50 million. The premiums you are paying are still far less than what you would receive in a payout on that risk-adjusted basis, but the effective coverage is far less than what the policy promises at face value.
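The arithmetic generalizes to any policy: weight the full limit by the share of incidents not involving social engineering, and the sublimit by the share that do. A small sketch (the function name and parameters are my own, not insurance-industry terms):

```python
def risk_adjusted_ceiling(full_limit, se_sublimit, se_share):
    """Blend the full policy limit with the social engineering sublimit,
    weighted by how often each is expected to apply to a claim."""
    return full_limit * (1 - se_share) + se_sublimit * se_share

# Figures from the example above: $50M policy, $200K social engineering
# sublimit, social engineering behind 90% of incidents.
print(f"${risk_adjusted_ceiling(50_000_000, 200_000, 0.90):,.0f}")
```

Plug in your own incident percentages from the dwell-time exercise to see what your policy is really worth.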

Cybersecurity insurance organizations are providing a valuable, needed service. We are all adults and nothing the insurance industry is doing is unethical or illegal. They are putting everything they are committing to in legal writing. Perhaps the policies with the “social engineering” reduction clauses are counting on customers not being aware of how often social engineering is involved in cybersecurity incidents.

If I were a business leader buying cybersecurity incident insurance, there is no way I would allow this type of clause to exist in a policy I signed and paid for. It would be like buying a fire insurance policy where electrical fires were excluded. You could do it, but it doesn’t make a lot of cents [sic].  


Roger A. Grimes is a contributing editor. Roger holds more than 40 computer certifications and has authored ten books on computer security. He has been fighting malware and malicious hackers since 1987, beginning with disassembling early DOS viruses. He specializes in protecting host computers from hackers and malware, and consults to companies from the Fortune 100 to small businesses. A frequent industry speaker and educator, Roger currently works for KnowBe4 as the Data-Driven Defense Evangelist and is the author of Cryptography Apocalypse.
