To gain insight into the months ahead as they relate to IT attacks, malware, cloud security, and the impact of virtualization on security, we recently chatted with Simon Crosby, former CTO of Citrix Systems' data center and cloud business. Crosby recently founded a cloud security startup, Bromium, with Gaurav Banga, former CTO and senior vice president at Phoenix Technologies, and Ian Pratt, chairman of Xen.org and co-founder of XenSource.
What do you think 2012 may bring in terms of malware?
Crosby: I think you will see, obviously, a growth. By the way, the growth path in malware is currently exponential per year. That will continue. That's obvious. I think you'll see, in the U.S. large enterprise and maybe even in the federal infrastructure, another major compromise next year. It will be incredibly bad and incredibly embarrassing. That is to say, very succinctly, we are now in a state of ongoing national cyber espionage. It's not cyber war, but it's cyber espionage on a grand scale. That's absolutely going to carry on. However, I do think the year ahead heralds a fantastic opportunity. It will be the first time when virtualization hardware and its uses within computer systems, generally, dramatically change the odds in favor of security.
How is that?
Crosby: We're in a really bad state in the traditional IT world. Here's a good example. I was sitting with a very large military organization, and they told me that they are required to have two of everything. Two firewalls. Two web application firewalls. Two endpoint security measures. The question is, why two? They have to have diversity of vendors. Then they can have some degree of certainty that they will have more protection. Is two good enough? They don't know.
Wait a minute, just to understand, they have two of each in-line? Two WAFs, two --
Crosby: Right. That's merely a sign of how desperate the times are. The existing approach, blacklisting, is broken. Whitelisting is very useful for the stuff you know. While you can tell that the programs that you use, your applications and your operating system, are in a certain state, you can't tell what's going to happen when they process bad data. That's what happens when you get attacked. Your browser is not malicious. It's just that when your browser happens to go to a particular website and pick up a particular attack, then it's going to attack. Whitelisting is great. It just can't go far enough because it has no way of reasoning about unanticipated uses of code.
If you look at the various vendors who have been trying to get there, I think if you look at various segments of the industry, we're all trying to get to the same place. And that is a more trustworthy, more reliable infrastructure.
Where is that, and how do you think they get there?
Crosby: Look at it this way. The desktop virtualization vendors are trying to go down this path whereby the virtual desktop is more secure. It is, in many respects, because it's centralized. It's not because it doesn't deal with the attack coming in through the browser, say. The traditional endpoint security guys or network security guys are trying to produce ever-better detection methods. Now we've gotten to the point where they're deploying fuzzy logic, which by the way doesn't inspire me. Fuzzy logic is not a good way of inspiring trust in a customer. Then you have the DLP folks, who are trying to sneak ever more invasive controls onto the desktop. The problem is a good attack can get by them.
They're all trying to head for this same particular point, which is to know that what you have that is running is trustworthy and has not been compromised and to seek control over that.
What are the tools that we have that can help?
Crosby: I'm 100 percent convinced that the only way to build more trustworthy infrastructure is to build trust into the system from the ground up. Virtualization technology, so hardware virtualization, and whatever other hardware capabilities we have are the most fundamental building blocks for achieving granularity of isolation of particular run-time context. From that, we can then build a more trustworthy infrastructure.
How far away do you think we are?
Crosby: This year for sure, you will see products arrive that exploit virtualization, particularly on the endpoint but also in the cloud, in order to deliver a far more robust notion of trust and trustworthiness into running software. That will be a big departure point for the industry. Moreover, I think you're going to see more and more people starting to rely on trusted boot and trusted bootstrap, both on the server and the clients, so that you can attest to the state of the running software stack, at least when it starts running, then possibly using the hypervisor to do checking, to ensure that the run-time stack remains unmodified.
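The trusted-boot idea Crosby describes rests on a measurement chain: each boot component is hashed into a register before it runs, so a verifier who knows the expected values can attest to the state of the software stack when it starts. A minimal sketch of that hash-chaining scheme, modeled loosely on a TPM-style "extend" operation (the component names and byte strings here are purely illustrative, not any real firmware):

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """TPM-style 'extend': the new register value is the hash of the
    old value concatenated with the measurement of the next component."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

# A hypothetical boot chain: firmware -> bootloader -> kernel.
boot_chain = [b"firmware image", b"bootloader image", b"kernel image"]

register = b"\x00" * 32  # measurement register starts zeroed at power-on
for component in boot_chain:
    register = extend(register, component)

# A verifier holding known-good component images can recompute the
# same chain; a single modified component changes the final value.
expected = b"\x00" * 32
for component in boot_chain:
    expected = extend(expected, component)

print(register == expected)  # True only when every measured component matches
```

Because each step folds in the previous register value, an attacker cannot reorder or swap components and still reach the expected final measurement, which is what makes attestation of the boot sequence possible.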
The first technology that you'll see there will be something like Intel's and McAfee's DeepSafe. The specific goal there is to use virtualization technology, a hypervisor, to continually check in an out-of-band way that a running software system has not been compromised.
The hypervisor provides an alternative execution environment, one that is arguably small-footprint, highly privileged, and difficult to attack, that you can use as a locus for your security infrastructure, to check or to make positive statements about trust.
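The out-of-band checking Crosby describes amounts to measuring known-good regions of a running system from a more privileged vantage point and periodically re-measuring them. This is a minimal conceptual sketch, not the DeepSafe implementation; the region names and byte strings stand in for guest memory a real hypervisor would read directly:

```python
import hashlib

def measure(regions):
    """Hash each named 'memory region' (byte strings standing in for
    guest code pages) and return a map of measurements."""
    return {name: hashlib.sha256(data).digest() for name, data in regions.items()}

# Hypothetical guest kernel regions captured at a known-good moment.
guest = {
    "syscall_table": b"sys_read sys_write sys_open",
    "kernel_text":   b"...kernel code pages...",
}
baseline = measure(guest)

def audit(regions, baseline):
    """Out-of-band check: re-measure and report regions that changed."""
    current = measure(regions)
    return [name for name in baseline if current[name] != baseline[name]]

# A rootkit-style modification to the syscall table is detected:
guest["syscall_table"] = b"sys_read hooked_write sys_open"
print(audit(guest, baseline))  # ['syscall_table']
```

The key design point is that the auditing code runs outside the guest being checked, so malware that fully controls the guest operating system still cannot tamper with the baseline or the checker itself.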
That will be a big change. A big, big change. Hypervisors could potentially go to market as part of a stack delivered by an OEM, in other words as part of the basic stack delivered on the box. You have a whole different toolset coming to market around that.
What are your thoughts on public cloud security in the year ahead?
Crosby: I'm convinced that the infrastructure that folks such as Amazon are building is far better than any infrastructure that the enterprise can build itself. When you have humans running around amongst your data, it is a bad thing. Put it this way, I have a ton of stuff up on Amazon S3. There are another three and a half billion objects up there. Go find my stuff. Go find the hard disk on which it is. Moreover, they have the resources to deal with distributed denial-of-service attacks, which most enterprises do not have. They have much better global points of presence from which to deliver applications, and better intelligence about where new attacks are arriving.
So you do see public cloud offering a higher degree of security, generally.
Crosby: They have an ability to secure the infrastructure in a formal, profound way at an architectural level that the average enterprise using the average enterprise software system does not have. In general, security goes awry not where the humans are out of the loop but where the humans are in the loop, because humans make mistakes.
Now, that's not to say that a cloud provider will never occasionally be compromised. However, I see compromises of large enterprises every day. I don't see compromises of large, public cloud providers. At the same time, it's not the case that that is where you would run your average enterprise application. The problem that enterprises face is really a regulatory and fiduciary one.
George V. Hulme writes about security and technology from his home in Minneapolis. You can also find him tweeting about those topics on Twitter @georgevhulme.