A Field Guide to Spotting Bad Cryptography

It takes an expert to determine whether a cryptographic system is truly secure, but CSOs can learn to spot red flags

Determining whether a cryptographic protocol or system is actually secure takes an expert. And even then, hidden flaws may be lurking.

Cryptography is the collection of techniques used to protect information from unauthorized disclosure or modification. These techniques are the basis of the Secure Sockets Layer (SSL) protocol used to secure e-commerce transactions over the Web, as well as the digital signature schemes that make it possible for video game consoles to tell the difference between a game that's authorized and one that's not.

Although cryptography was originally the stuff of spooks and diplomats, it is becoming more important every year as other strategies for protecting information increasingly show their limits. For example, it was once possible to prevent electronic documents from getting into the wrong hands by keeping them on a computer that was not connected to a network. These days, it's nearly impossible to keep a computer off the network, and even if you could, there is always a chance a document might leak out on somebody's USB memory stick. Enter cryptography in the form of digital rights management systems, which keep documents in their encrypted form and only release the decryption key when a document is being accessed by an authorized individual.

The problem with cryptography is that it is downright difficult to tell the difference between a system that is actually secure and one that merely provides the appearance of security. Case in point: the bicycle locks with cylindrical keys that were used for more than 20 years before thieves realized the locks could be picked with a ballpoint pen. There are probably thousands of unknown security flaws lurking in popular PC software.

Fortunately, the converse is generally not true: It's relatively easy to look at a crypto system and tell that it is probably not secure. That's because there are a few red flags that usually indicate something inside is not kosher. These warning signs won't tell you for sure that a system is hopeless, but they will tell you further research is warranted.

Red Flag #1: Keys That Are Too Small

The security of most cryptographic systems is based in part on the secrecy of the key. If an attacker can try every possible key and know for sure when he has found the correct one, the attacker can compromise the system. This is known as a brute force attack.

Keys are binary strings of 1s and 0s with a length that's almost always fixed. As with digits in a phone number, more bits mean more potential combinations for authorized users to choose from, and therefore more possible keys that an attacker needs to go through to find the one that's correct.
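
To make the brute force idea concrete, here is a toy sketch (in Python, using the third-party cryptography package) that recovers an AES key whose unknown portion has been deliberately shrunk to 16 bits so the search finishes in seconds; the tiny search space is an artificial assumption, made purely for illustration.

```python
# A toy brute force attack, assuming the third-party "cryptography" package.
# The key is 128 bits, but 14 of its 16 bytes are fixed at zero here so that
# only 2**16 candidates need to be tried -- an artificial shortcut made
# purely so the demonstration finishes in seconds.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

unknown = os.urandom(2)                 # the 16 "secret" bits
secret_key = bytes(14) + unknown        # 128-bit AES key
nonce = os.urandom(12)
ciphertext = AESGCM(secret_key).encrypt(nonce, b"attack at dawn", None)

# The attacker tries every candidate and knows the right key has been found
# when decryption (and its built-in integrity check) succeeds.
for guess in range(2**16):
    candidate = bytes(14) + guess.to_bytes(2, "big")
    try:
        plaintext = AESGCM(candidate).decrypt(nonce, ciphertext, None)
        print("key recovered:", candidate.hex(), plaintext)
        break
    except InvalidTag:
        continue
```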

There are two kinds of encryption algorithms: symmetric algorithms, such as the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES), and public-key algorithms, such as RSA and Diffie-Hellman. Generally speaking, symmetric keys that are shorter than 128 bits are not considered secure and should not be used. Likewise, you should not use RSA keys that are shorter than 1,024 bits.
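
If it helps, those minimums are easy to turn into a quick sanity check; the sketch below simply encodes this article's rules of thumb and is not drawn from any formal standard.

```python
# This article's rules of thumb turned into a trivial check. The function and
# constants are illustrative, not drawn from any formal standard; in practice,
# 2,048 bits is the more common floor for new RSA keys today.
MIN_SYMMETRIC_BITS = 128
MIN_RSA_BITS = 1024

def key_length_ok(kind: str, bits: int) -> bool:
    """Return True if a key of the given length meets the minimum for its kind."""
    if kind == "symmetric":
        return bits >= MIN_SYMMETRIC_BITS
    if kind == "rsa":
        return bits >= MIN_RSA_BITS
    raise ValueError(f"unknown key kind: {kind!r}")

print(key_length_ok("symmetric", 40))    # False -- WEP's original key length
print(key_length_ok("symmetric", 128))   # True
print(key_length_ok("rsa", 512))         # False
```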

When the 802.11 Wired Equivalent Privacy (WEP) standard was released in the 1990s, the standard called for 40-bit encryption. Even before the first attacks against WEP were publicly disclosed, I was telling my clients not to trust WEP because the key was simply not long enough to ensure security. Since then, numerous other vulnerabilities have been discovered as well.

Red Flag #2: Keys That Are Too Long

The U.S. government's Advanced Encryption Standard (AES) supports keys that are 128, 192 and 256 bits long. If longer keys are more secure, then why stop at 256 bits? Wouldn't a 512- or 1,024-bit symmetric key be more secure still?

Surprisingly, the answer to this question is usually no. Given the limits of computers as we understand them, there is no reason to think that a 192-bit or 256-bit symmetric key will be any stronger than a 128-bit key for the foreseeable future. That's because even the fastest computers mankind is likely to build within the next two or three decades will be unable to try all possible 128-bit keys to crack an encrypted message with a brute force attack, let alone all 192-bit or 256-bit keys. Although the additional bits confer more theoretical security, that additional security is meaningless.
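
A quick back-of-the-envelope calculation shows why; the attacker's guess rate below is an assumption, chosen to be wildly generous, and the conclusion still holds.

```python
# Back-of-the-envelope arithmetic behind the claim above. The guess rate is an
# assumption, chosen to be wildly generous (a billion billion keys per second);
# even so, exhausting a 128-bit keyspace takes trillions of years.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 10**18              # assumed attacker speed

for bits in (40, 128, 192, 256):
    years = 2**bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: about {years:.1e} years to try every key")
```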

Nevertheless, there has been steady pressure on technologists to adopt longer and longer keys. Part of this pressure comes from history: In the 1990s, there were many cases in which successively longer keys were cracked by computer scientists. What people forget is that the industry at the time was using unreasonably short keys as a result of federal regulations that have since been lifted. Unfortunately, the experience of the 1990s wrongly taught some technologists that key lengths need to be increased every few years. Another part of the push for longer keys is unbridled marketing: Longer keys just sound more secure than shorter ones, even if the extra security isn't relevant for computers likely to be manufactured in the 21st century. I suspect it's harder to sell a 128-bit encryptor when your competition is selling a spiffy something with 256 bits.

In any case, you should be suspicious if a vendor tells you that it is selling something with 256-bit encryption because 128 bits is not secure. You should be especially suspicious if someone tells you that he is using 448-bit encryption or 10,000-bit encryption. This usually means the vendor's salesman doesn't understand what he is talking about.

Red Flag #3: Proprietary Algorithms

Related to the red flag of suspiciously long keys is the red flag of proprietary encryption algorithms. Cryptography researchers have spent decades developing encryption standards like AES, Triple DES and RSA that are considered good enough for the most sensitive information. Generally, there is no reason to consider using anything other than a published standard encryption algorithm.
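
Using a published standard is also usually the path of least resistance; the short sketch below encrypts a message with AES-GCM (assuming the third-party Python cryptography package), and the only secret anywhere in it is the key.

```python
# A minimal sketch, assuming the third-party Python "cryptography" package.
# AES-GCM is a published, heavily analyzed standard: every detail of the
# algorithm is public, and the only secret in this program is the key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # the one and only secret
nonce = os.urandom(12)                      # unique per message, not secret

ciphertext = AESGCM(key).encrypt(nonce, b"quarterly figures", None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"quarterly figures"
```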

Experience has shown that secret, proprietary algorithms are rarely as strong as encryption algorithms that have been published and publicly analyzed. A basic tenet of modern cryptography is that the entire security of an encrypted message should rest with the encryption key, not with the encryption algorithm. That's because it's nearly impossible in today's world to keep an algorithm secret: An attacker can always obtain a copy of your program, reverse-engineer it and learn the encryption algorithm that's in use.

Usually algorithms that are purportedly secret can make that claim only because nobody has been suitably motivated to figure out how they work. One of the best examples was the closely guarded DVD encryption algorithm used for preventing consumers from making unauthorized copies of DVDs. This algorithm was widely adopted by the consumer entertainment industry, put into tens of millions of DVD players, and cracked by a high school student.

So why do vendors sometimes develop secret algorithms and try to get customers to buy them? Sometimes it is because the vendor didn't have a handle on its software development process: Perhaps a programmer thought that it would be fun to write a new encryption algorithm rather than use one of the standards. Other times it is because the company is trying to cut costs. Most frequently, though, it's because the people who were charged with developing the cryptographic system fundamentally didn't understand cryptography in practice.

Red Flag #4: Keys That Can't Be Changed

Since the security of an encryption system depends on the key, there should be a way to change a key if it is compromised. Many commercial systems use a small number of fixed and unchangeable encryption keys to protect their data. Once again, the best known of these systems was the DVD encryption system: Although the industry imagined it would simply change the decryption keys for future DVDs if the system was compromised, after the break, the industry discovered that there was no way to upgrade all of those DVD players in the field. Whoops.
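
One common defense is to version keys so they can be rotated without breaking previously encrypted data; the sketch below is a minimal illustration with hypothetical names (again assuming the third-party Python cryptography package), not a description of how any particular DRM system works.

```python
# A minimal key-versioning sketch with hypothetical names, assuming the
# third-party Python "cryptography" package. Each ciphertext records which
# key version produced it, so a compromised key can be retired without
# making old data unreadable.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

keyring = {1: AESGCM.generate_key(bit_length=128)}   # version -> key
current_version = 1

def encrypt(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    ct = AESGCM(keyring[current_version]).encrypt(nonce, plaintext, None)
    # Prepend the 2-byte key version and the nonce so decryption can find the key.
    return current_version.to_bytes(2, "big") + nonce + ct

def decrypt(blob: bytes) -> bytes:
    version = int.from_bytes(blob[:2], "big")
    nonce, ct = blob[2:14], blob[14:]
    return AESGCM(keyring[version]).decrypt(nonce, ct, None)

def rotate_key() -> None:
    # After a compromise, mint a new key version and use it for new data.
    global current_version
    current_version += 1
    keyring[current_version] = AESGCM.generate_key(bit_length=128)

token = encrypt(b"design documents")
rotate_key()
assert decrypt(token) == b"design documents"   # old data still readable
```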

Evaluate the Options

With this simple list of red flags you can start evaluating the various charlatans and hucksters who come into your office trying to sell you their cryptography paraphernalia. But be careful: A little knowledge can be dangerous if it is misapplied.

For example, an interesting area of research in secure computing today involves devices that use a physical unclonable function (PUF). These devices implement a fingerprint for computer systems: an identity that can't be changed. Although this seems to violate Red Flag #4, the identity also can't be copied, so PUFs are thought to be reasonably secure.

On the other hand, if you meet a new vendor who has a security gizmo that will encrypt laptop hard drives using a secret high-performance encryption algorithm with an 822-bit encryption key that's stronger than anything allowed by the U.S. Government, now you'll know enough to steer clear.
