Preparing your business for the quantum security threat, part 2

Once quantum computing takes off, will it even be possible to keep all of our data sources safe?

With an increasing amount of research and development being funnelled into quantum computing, organizations of all types are working out how they might use and benefit from the technology. However, it’s hard to formulate a calculated plan when no one is sure when quantum computers will actually arrive.

In my first piece, I outlined how security threats and complications will come as quantum computing grows. With the progression of the Internet of Things (IoT), there is much to consider on that front as well. IoT devices create data that must be managed, and confidentiality and protection will remain priorities for both individuals and organizations. The question is, once quantum computing takes off, will we be able to keep all our data sources safe?

Firmware and the Internet of Things

The lifespan of IoT devices introduces additional complexity. Some devices, such as sensors embedded in infrastructure, will operate for decades without human interaction. Quantum computing will likely come of age during their lifespan, meaning their threat model must consider post-quantum capabilities.

The main challenge is securing the update of firmware and software in deployed devices. Typically, this is solved by burning a master key into the device, which validates the bootloader code. The bootloader then validates the device payload, which contains the application logic. A remote update of one of these layers would need to be correctly signed to be accepted by the layer beneath.
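The chain of trust can be sketched in a few lines. This is a toy model: HMAC stands in for the asymmetric signature scheme a real device would use (e.g. ECDSA), and all keys and image names are invented for illustration.

```python
import hmac
import hashlib

# Toy model of the boot chain of trust. HMAC stands in for a real
# asymmetric signature scheme so the sketch is self-contained.
MASTER_KEY = b"burned-into-device-at-manufacture"
PAYLOAD_KEY = b"k2"  # carried inside the bootloader image

def sign(key: bytes, blob: bytes) -> bytes:
    return hmac.new(key, blob, hashlib.sha256).digest()

def verify(key: bytes, blob: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(key, blob), signature)

bootloader = b"bootloader-code"
payload = b"application-logic-v1"

boot_sig = sign(MASTER_KEY, bootloader)   # fixed at manufacture
payload_sig = sign(PAYLOAD_KEY, payload)  # fixed when payload is built

# At boot, each layer validates the next before handing over control.
assert verify(MASTER_KEY, bootloader, boot_sig)
assert verify(PAYLOAD_KEY, payload, payload_sig)

# An update without a valid signature is rejected by the layer beneath.
assert not verify(PAYLOAD_KEY, b"malicious-update", payload_sig)
```

The point of the sketch is the layering: a remote attacker who cannot produce a valid signature for a layer cannot get the layer beneath to accept it.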

Therein lies the issue. If the signature mechanism is not quantum-resistant, the device may be susceptible to malicious updates from future attackers with quantum capabilities. Nation-states could launch attacks to disrupt an enemy’s electrical grid or target an individual’s pacemaker, for example.

To defend against this, IoT manufacturers must build crypto-agility into their bootloaders, so that signature algorithms can be updated over time. Additionally, the master keys burned into forthcoming devices should use a post-quantum signature algorithm, to prevent attacks on the bootloader itself.
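One way to picture crypto-agility is a verifier table keyed by algorithm name, so that a post-quantum scheme can be registered later without changing the update format. This is only a sketch: plain hash functions stand in for real signature verification, and every identifier here is invented for illustration.

```python
import hashlib

# Crypto-agile update checking: the update header names its algorithm,
# so new (e.g. post-quantum) schemes can be added to this table later.
# Hash functions stand in for real signature verifiers.
VERIFIERS = {
    "sha256": lambda blob: hashlib.sha256(blob).hexdigest(),
    "sha3-256": lambda blob: hashlib.sha3_256(blob).hexdigest(),
}

def check_update(header: dict, blob: bytes) -> bool:
    algo = header["sig_alg"]
    if algo not in VERIFIERS:
        return False  # unknown scheme: reject rather than guess
    return VERIFIERS[algo](blob) == header["digest"]

update = b"firmware-v2"
header = {"sig_alg": "sha3-256",
          "digest": hashlib.sha3_256(update).hexdigest()}
assert check_update(header, update)
assert not check_update({"sig_alg": "unknown-pq", "digest": ""}, update)
```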


Keeping data confidential

Confidentiality is generally achieved by encrypting data with symmetric algorithms, such as AES. We’ve already discussed how increasing the length of these keys will protect static encrypted data against quantum attacks; however, there is another dimension – data in motion.

When two parties exchange confidential information across a network, they need a symmetric key to encrypt the traffic. This key is typically agreed using a key exchange algorithm such as Diffie-Hellman, which ensures that only the two communicating parties can derive it. This concept is used today in a wide range of applications, including TLS for web communication and SSH for server-to-server sessions.
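The shape of the exchange can be sketched with textbook Diffie-Hellman. The parameters below (p = 23, g = 5) are toy values for illustration only; real deployments use 2048-bit-plus groups or elliptic curves, and derive the symmetric key from the shared secret via a key derivation function.

```python
import secrets

# Textbook finite-field Diffie-Hellman with toy parameters.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1  # Alice's private value
b = secrets.randbelow(p - 2) + 1  # Bob's private value

A = pow(g, a, p)  # Alice's public value, sent over the network
B = pow(g, b, p)  # Bob's public value, sent over the network

# Each side combines its own secret with the other's public value
# and arrives at the same shared key material.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# An eavesdropper sees only p, g, A and B. Classically that is not
# enough; with a large-scale quantum computer, Shor's algorithm can
# recover a from A, which is what makes recorded traffic vulnerable.
```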

The key exchange messages can be recorded by a patient attacker, who waits for the advent of large-scale quantum computing before deciphering the exchange, recovering the symmetric key and reading the transmitted data. For most businesses, this type of attack is unlikely to pose a major threat; little of the data shared today will still be valuable in 10 to 20 years, when an attacker might plausibly access it. However, in niche circumstances, such as communications with an embedded undercover agent, the risk may be real.

Such long-term confidentiality needs could be addressed by migrating tools and processes to quantum-safe algorithms, ensuring the symmetric keys remain secure in the decades to come. Alternatively, advanced approaches such as quantum key distribution (QKD) could be considered for sharing keys over short distances. QKD recently caused a stir in the press when Chinese scientists succeeded in sharing keys securely from space, a first step towards sharing keys globally using quantum techniques.

The future looks bright

Research into quantum-safe cryptography has been underway since the mid-2000s, with promising schemes emerging for digital signatures and key exchange. Several families of algorithms are thought to resist quantum attack. Of these, lattice-based algorithms appear to offer the most favorable key exchange schemes, while the jury is still out on digital signatures. For instance, hash-based signatures are thought to be secure, but in practice they behave differently from their classical counterparts and are complex to implement. Code-based signatures are also considered secure, but they require unusually large keys and produce very big signatures.
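To see why hash-based schemes behave differently, consider a minimal Lamport one-time signature: each private key can safely sign only one message, and keys and signatures are large (the keys below hold 512 hash-sized values, the signature 256). This sketch uses SHA-256 purely for illustration.

```python
import hashlib
import secrets

# Minimal Lamport one-time signature over a 256-bit message digest.
H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # One random preimage pair per digest bit: large keys by design.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
          for _ in range(256)]
    pk = [(H(x0), H(x1)) for x0, x1 in sk]
    return sk, pk

def bits(msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one preimage per bit -- which is why the key is one-time.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pk[i][b]
               for (i, b), s in zip(enumerate(bits(msg)), sig))

sk, pk = keygen()
sig = sign(sk, b"firmware update v2")
assert verify(pk, b"firmware update v2", sig)
assert not verify(pk, b"tampered update", sig)
```

Security rests only on the hash function, which is why such schemes are considered quantum-resistant, but the one-time restriction and the size of the keys and signatures are exactly the practical differences mentioned above.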

Google made headlines in 2016 by running a trial of CECPQ1, a hybrid algorithm combining classical elliptic-curve cryptography with a lattice-based key exchange scheme dubbed New Hope. For several months, experimental builds of Chrome used CECPQ1 during TLS negotiation with Google-owned services. The goals of the experiment were to encourage further quantum cryptanalysis and to test the feasibility of deploying new cipher suites on the internet. On the latter point, Google was pleased with the results.
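The core idea of a hybrid exchange like CECPQ1 can be sketched in a few lines: derive the session key from both shared secrets, so an attacker must break both exchanges to recover it. The secret values below are placeholders; a real handshake would produce them via elliptic-curve Diffie-Hellman and a lattice-based exchange respectively.

```python
import hashlib

# Placeholder shared secrets; in practice these come from running two
# key exchanges in parallel during the handshake.
classical_secret = b"\x01" * 32    # e.g. from elliptic-curve DH
postquantum_secret = b"\x02" * 32  # e.g. from a lattice-based exchange

# The session key depends on BOTH secrets: breaking only the classical
# half (say, with a quantum computer) or only the newer, less-studied
# post-quantum half is not enough to recover it.
session_key = hashlib.sha256(classical_secret + postquantum_secret).digest()
```

The hedge cuts both ways: if the post-quantum scheme turns out to be weaker than hoped, the connection is still no less secure than classical cryptography alone.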

Since then, NIST has picked up the baton by launching a project to analyze and standardize post-quantum algorithms. A similar process produced the AES and DSA algorithms, so the world watches with interest. Draft standards for the winners are expected around 2023 to 2025.

My team has experimented with adding post-quantum candidate algorithms to several open source security libraries. Our experience has shown that it is relatively painless to introduce these new primitives and that the performance implications are negligible. We’ve recently released the first of our work, on mbedTLS, in a GitHub fork for interested parties to experiment with.

The future looks bright in the fight against quantum adversaries, but for now we must wait to see which of the promising schemes emerges triumphant.

This article is published as part of the IDG Contributor Network.
