By Roger A. Grimes
Columnist

10 risk factors no one talks about

Feature
Oct 17, 2019 | 10 mins
Risk Management | Security

These risk factors might not show up on an official risk assessment report, but every security professional should be thinking about them.

[Illustration: a shoe about to step on a banana peel, stopped by a small superhero. Credit: RetroRocket / Getty Images]

The traditional risk management factors you are all taught include the staid process of categorizing potential threats and risks, evaluating their likelihood of occurrence, and estimating the damage that would result from them if not mitigated. The costs of the potential mitigations and controls are measured against the potential damage. Mitigations are put in place if they cost less to implement than the damage they would prevent.
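
To make that arithmetic concrete, here is a minimal sketch of the classic annualized loss expectancy (ALE) calculation. All of the figures are hypothetical, invented purely for illustration:

```python
# Classic quantitative risk assessment: annualized loss expectancy.
# All figures are hypothetical, invented for illustration only.

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: the expected yearly loss from one threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

sle = 500_000    # estimated damage per ransomware incident, in dollars
aro = 0.25       # guessed chance of one incident in a given year
ale = annualized_loss_expectancy(sle, aro)   # 125,000

mitigation_cost = 80_000   # yearly cost of the proposed control

print(f"Expected annual loss: ${ale:,.0f}")
print(f"Mitigation cost:      ${mitigation_cost:,.0f}")
# The control is worth buying only if it costs less than the loss
# it prevents -- a comparison that hinges entirely on the guesses.
print("Mitigate" if mitigation_cost < ale else "Accept the risk")
```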

You have all fretted about the difficulty of calculating both the likelihood of an event and its potential damages. They have always been more like a best guess than an insurance actuarial table. How can anyone accurately estimate the chances that a sophisticated ransomware, DDoS or insider attack will hit their organization in a given year, or which assets it might take out? Can anyone prove that the likelihood is 20% versus 60% in a given year?
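
That guess is not a rounding error; it can flip the entire recommendation. A quick sketch with invented numbers:

```python
# How the 'mitigate or accept' verdict swings on the guessed
# likelihood (annual rate of occurrence). Hypothetical numbers.
sle = 500_000              # estimated damage per incident, in dollars
mitigation_cost = 150_000  # yearly cost of the proposed control

for aro in (0.2, 0.4, 0.6):
    ale = sle * aro        # expected annual loss at this likelihood
    verdict = "mitigate" if mitigation_cost < ale else "accept"
    print(f"ARO {aro:.0%}: expected loss ${ale:,.0f} -> {verdict}")
# At 20% the control looks like wasted money; at 60% it looks
# essential -- and no one can prove which estimate is right.
```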

We all struggle with those large estimation issues, but there are a ton of other factors that impact risk management. Here are ten that are rarely discussed openly.

1. Fighting over “might happen” risk

Every risk assessment is a fight between something that might happen and doing nothing, especially if it hasn’t happened before. Many people believe it’s cheaper to do nothing, and those who fight to do something might be seen as wasting money. “Why waste the money? That’s never going to happen!”

Few people get in trouble for following the status quo and doing what has always been done. It’s far harder to push to be proactive, especially when large sums of money are involved, than to just wait for the damage to happen and address it then.

The story I like to use is 9/11 and air travel safety. It’s not like air travel safety experts didn’t already know before 9/11/01 that a hijacker could take over a cockpit using a boxcutter or smuggle explosives onto a plane. These risks had been known for decades. Imagine if passengers had been made to throw out their water bottles and submit to full-body scans before 9/11 happened. The public would have been outraged, and the airlines would have lobbied to get rid of the security measures.

After 9/11, we happily take off our shoes, throw away our water bottles, and subject ourselves to full-body scans. Getting real money to fight possible risks is much harder to do than to get the money after the damage has happened. It takes real bravery every time a risk assessor warns about a problem that has never ever happened. They are the unsung heroes.

2. Political risk

Proactive risk-taking leads to the next unknown risk component: political risk. Every time proactive heroes argue for something that never happens, they lose a little bit of their political capital. The only time they win is when the thing they were proactive about happens. If they are successful and convince the company to put controls and mitigations in place so the bad thing never happens, well, it never happens.

It’s a self-defeating prophecy. When they win, no one ever knows because they successfully argued for the controls. So, each time the thing they worried about never happens, they are seen as “crying wolf.” They lose political capital.

Anyone who has fought one of these risk management battles can tell you they don’t want to take on too many of them. Each one taken burns their reputation a bit (or a lot). So, proactive warriors calculate which battles they want to fight. Over time, seasoned warriors pick fewer battles. They have to. It’s survival of the fittest. Many of them are just waiting for the day when a really bad thing they didn’t fight to prevent hurts the organization and they become the scapegoats.

3. “We say it’s done, but not really” risk

Many of the controls and mitigations we say we have done aren’t really done…at least not at 100%. Many people in the process understand it’s not really done. The most common examples are patching and backups. Most companies I know say they are 99% to 100% patched. In my over 30-year career of checking on the patch status of millions of devices, I’ve never found one that was truly fully patched. Yet, every company I’ve audited told me they were fully patched or nearly so.

The same is true of backups. The current ransomware epidemic has laid bare that most organizations don’t do good backups. Despite most organizations and their auditors checking off for years that critical backups are both done and regularly tested, it just takes one big ransomware hit to show how radically different the truth is.

Everyone in risk management knows this. How can the person in charge of backups ever test everything when they aren’t given the time and resources to do so? To test whether a backup and restore would work, you would have to do a test restore of many different systems, all at once, into a separate environment where everything would have to work (even though all the resources still point at the original environment). That takes a huge commitment of people, time, and other resources, and most organizations don’t give the responsible person any of that for the task.
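
Even the small, automatable slice of that testing rarely happens. Below is a minimal sketch, with hypothetical directory paths, that compares checksums between production data and a test restore; matching checksums still say nothing about whether the restored databases and applications would actually start and serve traffic, which is where the real cost lies:

```python
# A sketch of one automatable slice of backup verification: comparing
# SHA-256 digests of production files against a test restore.
# The directory paths are hypothetical.
import hashlib
from pathlib import Path

def tree_digests(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

source = tree_digests(Path("/srv/production/data"))      # hypothetical
restored = tree_digests(Path("/mnt/test-restore/data"))  # hypothetical

missing = source.keys() - restored.keys()
corrupt = {p for p in source.keys() & restored.keys()
           if source[p] != restored[p]}
print(f"{len(missing)} files missing, {len(corrupt)} files corrupted")
```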

4. Institutionalized risk: “It’s always been done that way”

It’s hard to argue against “that’s the way we’ve always done it,” especially when no attacks against the weakness have occurred for decades. For example, I frequently come across organizations that allow passwords to be six characters long and never changed. Sometimes it’s that way because the PC network passwords have to match the passwords for some archaic “big iron” system that the company depends on. Everyone might know that six-character, non-changing passwords are not a good idea, but they’ve never caused any problems.
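
To put rough numbers on what that legacy limit gives away, here is a sketch comparing the keyspaces of two policies, assuming (generously) truly random characters; both policies are hypothetical examples, not any specific organization’s rules:

```python
# Rough keyspace comparison of two password policies, assuming
# (generously) uniformly random characters. Real human-chosen
# passwords are far weaker than this upper bound.
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Bits of entropy for a truly random password of this length."""
    return length * math.log2(charset_size)

legacy = entropy_bits(6, 26)    # six lowercase letters, never changed
modern = entropy_bits(12, 94)   # twelve printable-ASCII characters

print(f"Legacy policy: {legacy:.0f} bits (~{2**legacy:.1e} guesses)")
print(f"Modern policy: {modern:.0f} bits (~{2**modern:.1e} guesses)")
```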

Good luck arguing that everything needs to be upgraded to support longer and more complex passwords, possibly at a cost of millions of dollars. The institutional “wisdom” is against you, and most of those people have been there far longer than you.

5. Operational interruption risk

Every control and mitigation you implement might cause an operational issue; it might even halt operations outright. You are far more likely to get fired for accidentally disrupting operations than rewarded for proactively preventing some theoretical risk. For every control and mitigation you push, you worry about the potential operational interruption it will cause.

The more radical the control, the more likely it is to mitigate every bit of the risk it is fighting, but the more doubtful you should be that it can do so without interrupting operations. If mitigating risks without causing operational interruption were easy, everyone would be doing it.

6. Employee dissatisfaction risk

No risk manager wants to make employees angry. If you want to do so, implement any control that restricts where they can go on the internet and what they can do on their computer. End users are responsible for 70% to 90% of all malicious data breaches (through phishing and social engineering). You cannot trust end users’ instincts to protect the organization.

Yet the mere mention of restrictions on what end users can do, such as allowing only pre-approved programs to run or restricting where they can go and what they can do on the internet, is met with hostility from most employees. The labor market is tight. Every company is struggling to find good employees, who don’t want to be told they can’t do whatever they want on “their” computer. Lock it down too much and they might go work somewhere else.

7. Customer dissatisfaction risk

No one wants to implement a policy or procedure that turns customers off. Upset customers become other companies’ happy customers. For example, credit card companies are far more concerned with accidentally denying a legitimate customer a legitimate transaction than with stopping fraud. They care about fraud, but only at a level they feel is sustainable long-term. The subcontractors and companies that make credit card transactions more accurate sell their services to the credit card companies on how rarely they deny legitimate transactions. Customers wrongly denied twice in a year will use someone else’s credit card.

It’s also why you don’t need to use a PIN with a chipped card in the US. Much of the rest of the world requires both the chip and a PIN, which is by far the more secure option. How did it get that way? Chip-and-PIN cards came to the US relatively recently, when merchants and customers were just getting used to swiping cards. Requiring people to insert the card so that the chip was read correctly, and then enter a PIN, was going to make a small percentage of transactions fail and upset some customers.

8. Cutting edge risk

People on the cutting edge often get cut. No one wants to be on the pointy tip of the spear. Early adopters are rarely rewarded for being early. They often become the lessons learned that make it easier for the herd to adopt improved tactics.

Two years ago, the US National Institute of Standards and Technology (NIST) said that its long-standing password policy of requiring long, complex, frequently changed passwords caused more hacking than it prevented. Its new Digital Identity Guidelines, NIST Special Publication 800-63-3, says passwords can be shorter and non-complex, and should not be subject to forced changes unless you know they have been compromised. It was a complete 180-degree turn from previous advice that had been accepted as dogma.
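
In code, that guidance reduces to a surprisingly small check. Here is a minimal sketch of an 800-63-3-style acceptance test; the breached-password file name is a hypothetical stand-in, the 8-character floor reflects my reading of the companion volume (SP 800-63B), and a real deployment would screen against a far larger compromised-credential corpus:

```python
# A minimal sketch of NIST SP 800-63-3-style password acceptance:
# no composition rules, no scheduled expiration -- just a minimum
# length and a check against known-compromised passwords.
def load_breached(path: str) -> set[str]:
    """Load one known-breached password per line (hypothetical file)."""
    with open(path, encoding="utf-8") as f:
        return {line.rstrip("\n") for line in f}

def acceptable(password: str, breached: set[str]) -> bool:
    # Minimum length for user-chosen secrets, plus rejection of
    # any value found in prior breach corpuses.
    return len(password) >= 8 and password not in breached

breached = load_breached("breached-passwords.txt")  # hypothetical path
print(acceptable("correct horse battery staple", breached))  # likely True
print(acceptable("P@ssw0rd", breached))  # False if on the breach list
```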

Since then, no compliance guideline or regulatory law has been updated to say that following the new advice is recommended or legal. I haven’t seen or heard of any companies moving to the new policies. That’s probably a good thing, because if you changed your policy and got hacked because of it, fingers would be pointed at you, even if NIST said it was the right thing to do. It’s much safer to wait for the herd to move to the new password policies and see whether they are proven right or wrong.

9. Time lag risk

You are almost always fighting some risk that has already happened to other people (or to your own organization). You wait to see what tricks the hackers have up their sleeves and then create mitigations and controls to fight those new risks. Having to wait to see what the hackers are doing creates a time lag between when the new malicious behavior is spotted and when you can assess the new technique, devise new controls, and push them out. In a wait-and-see game, you are always behind.

10. “Can’t do everything right” risk

Last year more than 16,555 new public vulnerabilities were announced. More than 100 million unique malware programs are known. Every type of hacker, from nation-states to financial thieves to script kiddies, is trying to break into your organization. It’s a lot to worry about. You have no way to defend against it all unless someone gives you unlimited money, time, and resources. The best you can do is guess (see #1 above) which risks are the most important and try to stop those.

These are not new components of risk assessment. They have always been there, and they are what you are all thinking about when assessing risk and considering controls. It all points to the fact that risk assessment and risk management are far harder to do than they seem, especially on paper or in formal theory from a book. When you consider everything the average computer security person has to worry about and contemplate, it’s amazing that we actually get it right most of the time.

Now go out there and continue to fight the good fight!

Roger A. Grimes is a contributing editor. Roger holds more than 40 computer certifications and has authored ten books on computer security. He has been fighting malware and malicious hackers since 1987, beginning with disassembling early DOS viruses. He specializes in protecting host computers from hackers and malware, and consults to companies from the Fortune 100 to small businesses. A frequent industry speaker and educator, Roger currently works for KnowBe4 as the Data-Driven Defense Evangelist and is the author of Cryptography Apocalypse.
