Last September 11, the country got religion when it came to information security—at least until the smoke cleared. Nevertheless, from their new pulpit in the White House, Richard Clarke and Howard Schmidt are still trying to sell vendors, executives, politicians and ordinary citizens on a vision of a more secure future. And converts don't come easily.
"About half of our job is marketing," admits Clarke, President Bush's cybersecurity adviser and chairman of the president's Critical Infrastructure Protection Board, created last October. Clarke, 51, made his name as President Clinton's counterterrorism adviser for most of the 1990s; vice chair Howard Schmidt, 52, is the former CSO of Microsoft. Together, the two men are information security's most prominent preachers.
These days, when they make newspaper headlines at all, it's for reporting doomsday scenarios about cyberattacks. At worst, their comments seem like needlessly alarmist attempts to get people to care about weaknesses in the nation's financial, telecommunications and transportation systems and other pieces of the critical infrastructure. At best, for CSOs, they're preaching to the choir.
In fact, in a lot of ways, the duo's challenges aren't so different from those of a CSO. Their roles are new, their power is limited, and their future is somewhat uncertain as Homeland Defense undergoes a restructuring. But whereas CSOs are influencing policy, spending and awareness in an organization or perhaps an industry, Clarke and Schmidt do so for the nation.
CSO went to their offices two blocks west of the White House not to hear their spiel about why corporate America should care about critical infrastructure protection—you already know about that. Instead, we drilled them about how they might use their power to influence everything from a controversial Freedom of Information Act (FOIA) exemption to vendor accountability to procurement by the federal government. What they had to say may surprise you.
CSO: You've said that the FOIA exemption is the single most important policy change to improve information security. [Editor's note: This controversial exemption would ensure that information given to the federal government about computer attacks would not be made public.] Why is it so important?
Richard Clarke: If you look at the Nimda virus last fall—a major attack that caused billions of dollars' worth of losses to the private sector—not one company called us up to tell us they had been attacked, because they wanted to be able to keep it secret. They don't want their customers and their stockholders to lose confidence. We understand that. But the result is that we have an inadequate perception of what is going on in the American information infrastructure.
Sen. Robert Bennett [R-Utah] probably puts it best. He says, Imagine you are a commander in charge of a battlefield, and you can only know about 15 percent of what is going on in that battlefield. How could you defend yourself? Well, if you look at our critical infrastructure, about 85 percent of it is in the private sector, and unless we can have some knowledge as to what's going on there—like attacks, viruses, worms, denial-of-service attacks—then we'll never be able to help defend it. Only by getting a FOIA exemption, narrowly written, will we ever be able to persuade companies that they can trust the government with information about vulnerabilities or hacks.
Is the exemption really necessary?
Clarke: Do you mean, are there already adequate provisions in the law that would exempt such information from a Freedom of Information Act request? Our lawyers say that the law as currently written would allow us to protect that information. But it doesn't matter what our lawyers say. Only by having corporate lawyers say it will companies be persuaded to give us that information. The companies' lawyers believe they need additional protection; therefore, we need to get additional protection.
If the law does pass, will an onslaught of people begin reporting information to you?
Howard Schmidt: It's hard to tell. We think in some cases we'll have companies come forth right away. In other cases there may be some hesitation; the general counsels of the various companies will have to look even deeper to find reasons why they may not be able to share information. There's still the perception that a company's ability to secure itself is a reputational issue, and that's justifiable. I'm sure there will be a little bit of giving information and seeing how it plays out. I don't think it's suddenly going to open the floodgates.
Are you advocating any kind of tax benefits for spending on security?
Clarke: No, I think there's enough benefit inherent for security spending that we don't need to give people a tax break. The benefit comes from being secure. It's more expensive in the long run to be insecure.
Is that a hard thing to sell CFOs on?
Schmidt: Not at all. The cost to recover from a virus attack, a denial-of-service attack or an intrusion escalates considerably [from that of preventive measures]. When the Melissa virus hit at a company that I had some insight into, it took about $14 million worth of labor and reconstitution effort to bring that whole system back online after 10 days. [Later, with better processes in place] when Anna Kournikova hit the same company, they were able to contain it within 30 minutes. That 30 minutes translated into about $12,000 worth of effort—quite a difference from $14 million. That's why the CFOs are saying, Hmm, it might cost me on the front end to do some risk management, but in the long term, I'm going to save money and reduce total cost of ownership.
As sad as it is to say, it seems as though the viruses and worms have actually helped demonstrate that ROI.
Clarke: I think there's a silver lining to some of them because you know when you get hit. Frequently, when people penetrate networks, we don't know it because they're successful at it. They don't leave traces. It's helpful when we have major viruses and worms and denial-of-service attacks because they're noisy and leave fingerprints, and we know it's out there. People are then motivated to fix it. But that's not the case when you have stealthy penetrations that leave back doors, Trojan horses, logic bombs.
What's the administration's position on holding vendors accountable for products that aren't secure? And on liability for those products?
Clarke: Those are two related but separate issues. One is holding vendors accountable, and one is doing it in court. We are very much in favor of holding vendors accountable. When a product fails, the vendor has a responsibility to quickly identify a way of fixing it and getting that patch out. And the patch not only should fix the problem, it shouldn't interact badly with other widely utilized applications. It does us no good to get a patch that solves the vulnerability but then makes it impossible to use applications from other companies.
It's not terribly valuable to litigate these problems. We'd like to find solutions that are quicker than long, multiyear litigation.
Schmidt: There are two other components. One of those is the market drivers that would induce people to be more careful and more responsive. People want to buy the things for which they have the best support. When you buy a car, if it doesn't work well, you're going to think twice before you buy from that maker the next time.
The second piece is identifying what might be wrong with something. After Nimda, an informal survey asked those affected, Why were you affected when the patches had been out for so long? The number-one answer was that people didn't know they needed to have the patches installed, which goes back to vendor accountability.
What else is involved with convincing the vendors to create more secure products?
Clarke: The vendors tell us, We could create more secure products, but no one wants them. Then we talk to the procurement people—those in banking, finance, energy, government—and ask, Do you want more secure products? And they say, Yes! but the vendors won't make them. That's the dialogue of the deaf that Howard and I try to bridge. We take the critical infrastructure procurement people and the vendors by the hand and say, Let's agree that we're going to have more secure products. There's actually a real role for us to bring people together to have dialogues that you would think would naturally occur. We also have a role that I call the honeybee role—we fly from flower to flower, spreading the message and sharing information, so that we're able to learn what products are out there. We don't recommend certain kinds of brands, but we do recommend certain kinds of services.
John Gilligan, the CIO of the Air Force, recently threatened to stop using Microsoft products until they became more secure. We've heard similar rumblings from others. How feasible is it to force government agencies to buy only certain products?
Clarke: The federal government tried 20 years ago to only procure IT products that were security-certified. It didn't work because very few of the products could get certified in a timely manner. Exceptions were granted because people could demonstrate that there was no product available. So it became something of a farce.
We're looking at whether we could do it in a smarter way. We don't want to jump headlong into a full-up system of only procuring things that meet certain standards, but we do think there's a role for smart procurement. We think that if there is a product that has been certified under the NIAP [National Information Assurance Partnership] program of the Commerce Department, it ought to be given an advantage.
Under the NIAP, you can bring your product, software or hardware, to a federally approved laboratory for testing, and if it passes, it's NIAP-certified. It used to be that the federal government did the testing itself, but so few people in the government could do it that the process took a long time. So now the federal government certifies private-sector laboratories to do the testing, which means there are many more places to go, and a few products have been certified. You can find them on the NIAP webpage. [That program] is about 5 years old. We are looking at whether we can get more products certified, select some key products, and have the federal government procure only certified products in key areas.
Schmidt: We've seen the evolution of attacks against our IT systems. Each generation of products gets better and better at resisting those things, but it still takes time to get these things created, identified, coded, shipped and out to the public. If we were to say, Turn off the spigot of technology coming into the government, we'd be shooting ourselves in the foot, because the next generation is going to be better than the one we're currently running, and oftentimes you're running two generations behind to begin with. So we have to balance smart procurement against phasing in a higher standard, making sure a product meets our needs today rather than sitting in a static mode for five years while we wait for the approval process and for people to change their products to meet the threats of the day.
And then there's the old adage that you don't know what you don't know. Both of us get asked all the time, What do you see as the next generation of attacks? Well, you don't know what you don't know. It could be something we're not aware of that takes place down the road. And if that does occur, then all of a sudden those products that have been certified are no longer valid. So we have to balance all those things, and it goes back to that core thing I mentioned earlier: using the bright people from government, academia and industry together to figure out how to make this work today as well as in the future.
If you look at the state of critical infrastructure on Sept. 10 versus now, what have the concrete accomplishments been?
Clarke: I think we can point to measurable improvements in the security of the federal government's networks. The budget the president sent to Congress in February asks for a 64 percent increase in funding to defend federal departments and agencies. That puts almost 6 percent of the federal IT budget toward IT security. We're trying to do two things with that. Obviously we're trying to fix very serious problems that the federal departments have. But we're also trying to set a model for the private sector, for members of corporate boards of directors, for CEOs. We want them to see that the federal government is spending 6 percent of its IT budget on IT security and ask, What are we doing at our company? Unfortunately, most companies are not going to be able to say that they're spending anywhere near 6 percent on security.
You quote a report that most companies spend more on coffee than on security. Is 6 percent a benchmark? A catch-up?