Security vs. visibility: Why TLS 1.3 has data center admins worried

What's good for the internet may be bad for the enterprise.


The TLS protocol, which provides the encryption that makes secure web transactions possible, is in dire need of an upgrade. It last saw a big update when version 1.2 was released nearly ten years ago, and the Internet Engineering Task Force (IETF) has for some time been shepherding a major revision of the standard to fruition, which aims to offer improved security by, among other things, jettisoning support for legacy encryption systems.

There's only one snag: a number of data center administrators from large financial, health care and retail corporations have begun to regard the current draft of the 1.3 version of the protocol with increasing alarm. One of the key exchange mechanisms bounced from the draft standard, static RSA, is a crucial tool for admins who want to monitor and troubleshoot data traffic within a company's network. "I think there may be enterprises that don't realize that this is going to hit them," says Nalini Elkins, President of the Enterprise Data Center Operators (EDCO) consortium. "They're going to upgrade and things are going to go blind. They're going to have outages that they can't fix and security tools that go dark."

While attempts at a fix are underway, it's worth taking a look at how the community got to this point. It's a story of clashing cultures, differing priorities, and the sometimes convoluted paths by which technical standards make their way to production.

The comforts (and problems) of the status quo

At the core of the TLS protocol is a series of exchanges of cryptographic keys between communicating computer systems, which allow them to talk to each other securely. Static RSA key exchange is one of these, and it's been much loved by data center admins for the visibility it offers into their networks. As Elkins puts it, "In today’s environment, the traffic coming in through the internet is encrypted, and it’s been NAT'd by the content delivery network so we have no way to even find a failing session. We need to get a user name, the URL where he's trying to go, and the time that the failure happened. And we can see that in a packet if we can decrypt it. But if we can’t decrypt the packet, we’re completely blind for troubleshooting."

The advantage of static RSA key exchange is that it allows for that kind of decryption, and for it to take place out-of-band — that is, the packets can be decrypted and inspected by tools that aren't in the main flow of network traffic, which means the various inspection tools can do their work without adding latency that can grind the system to a halt. Plus, these types of tools go beyond troubleshooting to include customer experience monitoring and intrusion and malware detection as well — packets can be traced and decrypted anywhere in the data center.
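To see why static RSA lends itself to out-of-band decryption, consider a toy sketch (textbook RSA with tiny primes and no padding — purely illustrative, nothing like a real TLS implementation): the client encrypts a pre-master secret under the server's long-lived public key, so any tool that holds the matching private key can recover the secret from a captured copy of the packet, long after the fact and without sitting in the traffic path.

```python
# Toy illustration of static RSA key exchange. Textbook RSA with tiny
# primes and no padding -- NOT real TLS, purely to show the principle.
# In TLS-RSA, the client encrypts a "pre-master secret" with the server's
# public key; a monitoring tool holding the private key can decrypt a
# captured copy of that ciphertext later, entirely out-of-band.

# Server's long-lived (static) RSA key pair, using classic small-number values:
p, q = 61, 53
n = p * q                           # modulus: 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (needs Python 3.8+)

# Client side: pick a pre-master secret and encrypt it with the public key.
pre_master_secret = 65
captured_ciphertext = pow(pre_master_secret, e, n)  # what a sniffer records

# Out-of-band monitoring tool: holds the private key, decrypts the capture.
recovered = pow(captured_ciphertext, d, n)
assert recovered == pre_master_secret  # session keys can now be derived
```

The key point is that the same static private key unlocks every session's pre-master secret — convenient for a troubleshooting tool, and equally convenient for anyone else who obtains the key.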

So why get rid of static RSA key exchange? Well, the problem is that much of what makes it so useful also makes it insecure. The capability for out-of-band decryption, which admins have come to rely on for network monitoring, can be abused to snoop on packets on the open internet. There are also potential vulnerabilities in the RSA key mechanism itself, including the ROBOT attack and outright key theft. However, if the private keys are used only inside a data center and never exposed on the internet, then a malicious hacker would have to penetrate that data center and steal both the packets and the RSA private keys in order to exploit those weaknesses.

