Are your security tools secure? It all depends

While security vendors typically put a lot of effort into making sure their own code is secure, the dependencies their tools need to run may be the weak link.

I’ve used a plethora of security tools over my career, and if you’re reading this, chances are you have, too. As technology became more prevalent in every organization on the planet, the need to protect critical data and infrastructure from outages, external threats, accidents and even employee activity required a broader set of programs to counter each vulnerability. The security companies that develop these kinds of tools are typically run by fellow security professionals who take these matters very seriously and work to ensure that the code their company writes is secure.

Now, anyone who’s worked in the IT field for any length of time can probably name, right off the top of their head, an instance or two when a security tool had a bug or introduced some other type of vulnerability. But those outliers aside, most security software available today is reliable, coded securely, and supports most hardening techniques natively.

Yet for all this diligence with their own code, and all the work to ensure its reliability and interoperability, many of these companies inadvertently introduce vulnerabilities anyway. Not through the code they write, but through the other programs and code their tools rely on to work properly. These dependencies range from runtime environments like Microsoft’s .NET Framework or the Java Runtime Environment (JRE) to operating system dependencies such as kernel-level DLLs and other system libraries.
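
A practical first step is simply knowing which runtimes are sitting on a host alongside a security tool. Below is a minimal sketch of such an inventory check in Python; the list of runtimes and their version commands is illustrative and not tied to any particular product.

```python
# A minimal sketch of a runtime inventory check. The RUNTIMES map below is
# hypothetical -- adjust it to whatever your security tools actually pull in.
import shutil
import subprocess

RUNTIMES = {
    "java": ["java", "-version"],             # Java Runtime Environment
    "dotnet": ["dotnet", "--list-runtimes"],  # modern .NET runtimes
    "python": ["python", "--version"],
}

def inventory():
    """Report which runtimes are present and what version they advertise."""
    for name, cmd in RUNTIMES.items():
        if shutil.which(cmd[0]) is None:
            print(f"{name}: not installed")
            continue
        # Some tools (java -version, notably) write their version to stderr,
        # so capture both streams and use whichever has output.
        result = subprocess.run(cmd, capture_output=True, text=True)
        output = (result.stdout or result.stderr).strip().splitlines()
        print(f"{name}: {output[0] if output else 'version unknown'}")

if __name__ == "__main__":
    inventory()
```

Running something like this before and after a security tool is deployed makes it obvious which runtimes the tool dragged onto the host, and that is exactly the software you will have to track for vulnerabilities later.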

Where this gets tricky is when these security tools need libraries or runtime environments that would not be installed, or needed, on the host system if the security tool itself weren’t installed. If a vulnerability is discovered in one of these dependencies, the security vendor likely has no control over patching it or otherwise fixing it, because the code belongs to another vendor. And while they can claim that their own software is secure, the fact that their tool required a vulnerable piece of code to be present still introduces a vulnerability into an organization’s systems that will have to be addressed.

This is where these dependencies can be incredibly hard to mitigate, as it may require the outright removal of the security tool in question in order to remove the offending runtime environment or libraries. Several customers I’ve worked with over the course of my career have dealt with this exact problem and found themselves stuck from a risk perspective: they want the benefits the security tool provides, but are unable to mitigate, patch or remove the vulnerability in the dependency software. What follows is a long road with both vendors to find a solution, which can be expensive in both time and money.

So, what do you do about it? The key is to get ahead of this problem as much as possible and put these kinds of questions to your security vendor during the evaluation process, before you purchase their software. Ask things like:

  • What additional software must be present on my servers or workstations in order for your tools to run properly?
  • Does your tool support the most current version of the dependent software? If not, when do you anticipate releasing a version that does? (A simple way to track the answer over time is sketched after this list.)
  • What sort of relationship does your development team have with the vendors who write the runtime environments or other software?
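
That second question is worth revisiting after every dependency release, not just at purchase time. Below is a rough sketch of how you might track it; the support matrix and version numbers are entirely hypothetical and would come from the vendor’s answers and your own asset inventory.

```python
# A rough, hypothetical sketch: compare the dependency versions actually
# deployed against the newest versions the security vendor says they support.
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    installed: tuple         # version on the host, e.g. (17, 0, 9)
    vendor_supported: tuple  # newest version the vendor has certified

# Example data pulled from the vendor questionnaire and your asset inventory.
DEPENDENCIES = [
    Dependency("Java Runtime Environment", (17, 0, 9), (11, 0, 20)),
    Dependency(".NET Framework", (4, 8, 1), (4, 8, 1)),
]

def flag_version_gaps(deps):
    """Flag dependencies where the vendor lags behind what you have deployed."""
    for dep in deps:
        if dep.vendor_supported < dep.installed:
            print(f"GAP: {dep.name} installed {dep.installed}, "
                  f"vendor only supports {dep.vendor_supported}")
        else:
            print(f"OK:  {dep.name}")

flag_version_gaps(DEPENDENCIES)
```

When the vendor lags behind what you have deployed (or want to deploy), you are forced to keep an old runtime around, and the vendor’s roadmap answer tells you roughly how long that exposure will last.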

On top of asking these questions during the evaluation process, you should also engage your legal and procurement departments to consider adding liability clauses to your agreement with the security company, holding them liable for any vulnerabilities, weaknesses, or other problems introduced into your environment, whether from their own software or from any software that must be installed for their tools to work.

This may be a sticking point for a lot of companies since, again, they have no direct control over these kinds of dependencies, but they are still forcing this software to be present when it could otherwise be removed from your environment entirely. And while I am not a lawyer, I have successfully negotiated these kinds of terms into vendor agreements, and they can serve as a sort of insurance policy in the long run should the security software’s dependencies turn out to be the avenue an attacker used to get into your network.

As with most things in security, there’s no single foolproof way to mitigate these issues, but being aware of the dependencies a new software tool may introduce into your environment before you procure it can put you way ahead of the game.

This article is published as part of the IDG Contributor Network.