In the aftermath of yet another Meltdown, no secrets are safe

Opinion
Jan 09, 2018 | 5 mins
Application Security, Data and Information Security, Network Security

Meltdown and Spectre reveal that perfect information protection comes at an increasingly steep cost.

Image credit: Project Zero

In the field of data security, 2018 began with a jolt. The revelation of the Meltdown and Spectre security vulnerabilities has taught us that in 2018 (and beyond), nothing is sacred.

Speculative execution, the architectural concept that is exploited in the Spectre vulnerability, has been in use by mainframe processors since the mid-1970s. It is taught in Computer Architecture 101 in universities around the world. And yet, it turns out that the security implications were never fully understood until about seven months ago.

Out-of-order execution, the culprit in the Meltdown vulnerability, is also a ubiquitous concept, although Meltdown is easily avoided with a better implementation of the concept.

While the details of these vulnerabilities are well covered, the lessons to be gleaned from them are, in my opinion, not what they seem. The ability to identify vulnerabilities has reached a point where fundamental and necessary architectural innovations (like speculative execution) are proving to be vulnerable to information leaks.

The unavoidable truth here is that information theft can only be slowed – it cannot be stopped. There will always be the next way to steal sensitive data. It is inevitable.

The issue we should tackle is information disclosure. Why is stealing a password via a surreptitious read of kernel memory sufficient to unlock valuable, sensitive data? Fundamentally, there are two reasons:

  1. Software users are consistently asked to share very important secrets with systems beyond their control, and
  2. Access and authorization are conflated in almost all software systems.

Almost all access control today is built upon the concept of the password as the fundamental means of securely verifying user identity. However, the password, which should be a tightly held secret, must be shared with dozens of remote computing systems outside the user’s control on a daily basis.

If a password must be shared, it absolutely should not be a sufficient authentication factor to access sensitive data! By adopting an architectural precept that software can never store a discrete unit of sensitive data on a single system or in a single address space, a whole category of information disclosure vulnerabilities, both discovered and undiscovered, would be rendered far less interesting.
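
To make that precept concrete, here is a minimal sketch of one way to keep a secret from ever living whole in a single address space: XOR-split it into shares held on separate systems, so that reading the memory of any one system reveals nothing. This is an illustrative example only; the function names and the three-share arrangement are mine, not a prescription.

```python
import secrets

def split_secret(secret: bytes, num_shares: int = 2) -> list[bytes]:
    """XOR-split a secret into shares; all shares are needed to reconstruct.

    Every share except the last is uniformly random, so a single compromised
    system (or address space) learns nothing about the secret on its own.
    """
    shares = [secrets.token_bytes(len(secret)) for _ in range(num_shares - 1)]
    last = bytearray(secret)
    for share in shares:
        for i, b in enumerate(share):
            last[i] ^= b
    return shares + [bytes(last)]

def recombine(shares: list[bytes]) -> bytes:
    """Reconstruct the secret by XOR-ing all of the shares back together."""
    combined = bytearray(len(shares[0]))
    for share in shares:
        for i, b in enumerate(share):
            combined[i] ^= b
    return bytes(combined)

# Store each share on a different system, so stealing the contents of any one
# machine's memory is not enough to recover the secret.
shares = split_secret(b"a very sensitive API key", num_shares=3)
assert recombine(shares) == b"a very sensitive API key"
```

A production design would more likely use a threshold scheme such as Shamir's secret sharing, which tolerates the loss of a share; the point here is only that no single memory read recovers the secret.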

Authenticator apps, one-time authentication codes, and biometric authentication factors all accomplish exactly this – the password is one piece of the authentication puzzle, but it is not the whole puzzle!

The second piece of the authentication puzzle resides on a remote system that is in each individual user’s control. While each factor of authentication in isolation has weaknesses, in combination they largely prevent large scale hacks of shared systems.
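
For instance, a one-time code from an authenticator app is derived from a seed that never touches the systems the password is typed into. Below is a minimal time-based one-time password (TOTP) sketch in the style of RFC 6238, using only the Python standard library; the example seed, 30-second step, and 6-digit length are common defaults, and all names here are illustrative rather than drawn from any particular product.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_seed_b32: str, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238 style).

    The seed lives only on the user's authenticator device and the verifying
    server -- never on the systems where the password is entered.
    """
    key = base64.b32decode(shared_seed_b32, casefold=True)
    counter = int((for_time or time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_totp(shared_seed_b32: str, submitted_code: str) -> bool:
    """Accept the code for the current 30-second window or the previous one."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(shared_seed_b32, now - drift), submitted_code)
        for drift in (0, 30)
    )

# Example seed (base32); in practice it is generated once, at enrollment time.
seed = "JBSWY3DPEHPK3PXP"
print(totp(seed))  # e.g. "492039" -- changes every 30 seconds
```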

For example, a system requiring a password that varies over time (as it is reset periodically) and biometric data (that can never change, even if it is stolen), has the strength that (a) multiple systems must be compromised to crack a single user’s account, and (b) the number of systems that must be compromised to crack a large number of accounts scales with the number of accounts targeted.

Were such concepts truly ubiquitous, stealing a password would lose some of its sizzle. Unfortunately, while websites may increasingly implement secure multi-factor authentication mechanisms, administrative console access to the systems hosting those websites is rarely protected in this way. The computing industry needs to embrace the idea that passwords will be stolen, and to discard, once and for all, the idea that a password is a sufficient way to authenticate to any system or device. A similar concept applies to credit card numbers, social security numbers, and other secrets that are commonly shared in electronic transactions.

The second issue with information disclosure is that access and authorization are conflated in most systems. As a very simple example, if I guess (or steal) the password to your personal computer, I am immediately granted access to all of your files, some of which may contain credit card numbers, bank account numbers, social security numbers or other sensitive information.

The same concept applies in most corporate networks – authenticating into the network enables access to sensitive documents and data via single sign-on to connected data storage systems (e.g., file shares, content management systems, etc.). However, passwords and networks in general are inherently vulnerable. Access must be separated from authorization – logging into a system cannot unlock unfettered access to all of the sensitive data connected to that system. Requiring a second factor of authentication at least once per session to unlock single sign-on to sensitive systems is a reasonable, but not onerous, requirement.
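
As a sketch of what that separation can look like in software, the example below models a session in which a password login grants access, but sensitive reads are refused until a second factor has been verified within the session. The class and method names are hypothetical, purely for illustration, and the 15-minute step-up window is an assumption rather than a recommendation.

```python
import time

class Session:
    """Separates access (being logged in) from authorization (touching
    sensitive data). Login alone never unlocks the sensitive store."""

    STEP_UP_TTL = 15 * 60  # seconds a second-factor verification stays valid

    def __init__(self, user: str):
        self.user = user
        self._second_factor_verified_at = None  # no step-up yet

    def complete_second_factor(self, code_is_valid: bool) -> None:
        """Record a successful second-factor check (e.g., a TOTP code)."""
        if code_is_valid:
            self._second_factor_verified_at = time.time()

    def read_sensitive_document(self, doc_id: str) -> str:
        """Authorization gate: require a recent second factor, not just login."""
        verified = self._second_factor_verified_at
        if verified is None or time.time() - verified > self.STEP_UP_TTL:
            raise PermissionError("step-up authentication required")
        return f"contents of {doc_id}"  # placeholder for the real fetch

session = Session("alice")            # password login -> access, not authorization
try:
    session.read_sensitive_document("payroll.xlsx")
except PermissionError as err:
    print(err)                         # step-up authentication required
session.complete_second_factor(code_is_valid=True)
print(session.read_sensitive_document("payroll.xlsx"))
```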

Meltdown and Spectre reveal that perfect information protection comes at an increasingly steep cost – how much innovation has been unlocked by the massive performance benefits of CPU pipelining, out-of-order execution, and speculative execution? And would we stifle the next such innovation if its security implications were not fully understood?

There must be innovation in how we guard against information disclosure that raises the bar for information theft. As a simple starting point, attacking the primacy of the password will make a positive dent in the right direction. And in my next post, I will take this one step further to discuss how data storage and data transmission must similarly evolve.

Seth Hallem
Contributor

Seth Hallem is the CEO, Co-founder, and CTO of Mobile Helix. Seth heads up research and development of the LINK encrypted app, which provides secure end-to-end mobile workflows for reviewing, modifying, and sharing documents.

Seth’s academic and professional experience is in software architecture, development, and security. While at Stanford, Seth became a co-founder and the CEO of Coverity, Inc., the leader in advanced software development testing. Seth served as Coverity’s CEO from 2002 to 2010, during which time the company grew to over 150 employees and 1,000 customers worldwide. Coverity was acquired by Synopsys for $375 million in February 2014.

Seth has a bachelor’s degree in Computer Science from Stanford University, where he also started a PhD program in advanced software analysis techniques. In 2008, Seth was recognized as an MIT “TR35” recipient for contributions in the field of software quality and testing.

The opinions expressed in this blog are those of Seth Hallem and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.