Forty-two years ago, John F. Kennedy's commitment to landing a man on the moon and returning him safely to the Earth was the epitome of a "Grand Challenge": the attempt to tackle a problem in science or engineering that is easy to describe but monumentally difficult to solve. More recently, the field of supercomputing has used the Grand Challenge concept as a tool for guiding research and funding priorities for such activities as modeling the global climate or accurately predicting weather many days in advance. The notion of a Grand Challenge had left some, including me, wondering if computer security has an appropriate equivalent.

Well, it does. In November, I had the honor of being included among 50 of the leading computer security researchers in the world in doing just that: helping to pinpoint the "Grand Research Challenges" we are facing today in information security and assurance. Conference organizers from the Computing Research Association (CRA) and the Association for Computing Machinery solicited short essays from around the world, then invited the authors of the 50 most promising proposals to a four-day intensive workshop aimed at finding the commonalities in those proposals and articulating them.

After days of round-the-clock meetings and late-night wordsmithing, this predictably cantankerous crowd managed to come up with four challenges deemed worthy of "sustained commitments." We identified the hard problems that we don't know how to solve today but that might be solvable within a decade (assuming enough research dollars are spent). Perhaps most important, they are problems that need to be solved if we want to continue to enjoy the fruits of the computer revolution.

First on the list of Grand Challenges is the elimination of "epidemic-style attacks" within 10 years. Certainly it would be nice to return to an Internet that is largely free of viruses, worms and spam.
But it is interesting to note that the conference attendees don't think the solution to viruses and worms is for people to install antivirus software and keep their systems up-to-date, two of the primary solutions recommended last year by the National Strategy to Secure Cyberspace. Instead, we agreed that what's needed is a fundamentally new approach to solving the problem, perhaps by moving more of the responsibility to Internet service providers.

Large-Scale Systems

The second Grand Challenge: Develop tools and principles for creating large-scale systems for applications that are really important, so important, in fact, that today these systems are largely still on paper (or at least on standalone computers not connected to the Internet). Two examples from the CRA workshop are medical records systems and electronic voting. In the case of medical records, we agreed that doctors and patients should be able to benefit from Internet technology without having patient records routinely stolen by Russians and ransomed back to the hospital administration (right?). And voting systems present all of the same security challenges, with the added twist of auditing. We asked ourselves: How do you build a system that ensures the privacy of the ballot box while still preventing somebody from electronically stealing an election?

Certainly, the second challenge seems more doable than the first. Various pieces of the puzzle have been discussed at length; perhaps all we need to do is assemble these tools into a coherent whole. Some researchers argue, for example, that every electronic voting machine should have a small internal printer and a roll of paper, just to prevent the computer system from accidentally zeroing out votes for one candidate and assigning them to another. But a competing proposal would have a second computer recording the votes with a digital camera.
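The logic behind the paper-trail proposal can be sketched in a few lines of Python. This is purely illustrative, not any real voting system's design; the names (`VotingMachine`, `cast_vote`, `audit`) are invented for the sketch. The point is that an independently written record lets an after-the-fact audit catch exactly the vote-flipping failure described above.

```python
# Illustrative sketch of a voter-verified paper trail. All names here are
# hypothetical; this is not modeled on any real voting system.
from collections import Counter

class VotingMachine:
    def __init__(self):
        self.electronic_tally = Counter()  # what the software counts
        self.paper_trail = []              # what the internal printer records

    def cast_vote(self, candidate):
        # The paper record is written independently of the electronic tally,
        # so a bug or attack that flips electronic votes leaves evidence.
        self.paper_trail.append(candidate)
        self.electronic_tally[candidate] += 1

def audit(machine):
    # Recount the paper trail and compare it to the electronic tally.
    return Counter(machine.paper_trail) == machine.electronic_tally

m = VotingMachine()
for vote in ["alice", "bob", "alice"]:
    m.cast_vote(vote)
print(audit(m))  # True: a clean machine passes the audit

# Simulate the failure mode described above: votes for one candidate
# silently reassigned to another in the electronic tally.
m.electronic_tally["alice"] -= 1
m.electronic_tally["bob"] += 1
print(audit(m))  # False: the paper trail exposes the discrepancy
```

Note that the audit only works because the two records are produced by independent mechanisms; if the same compromised software wrote both, the comparison would prove nothing, which is why the competing camera proposal also insists on a second, separate device.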
Indeed, this might not be a Grand Challenge at all if it weren't so terribly important, and if we hadn't, as a society, done such a bad job with our voting system attempts to date.

Measuring Risk

The third Grand Challenge doesn't seem all that difficult, that is, until you try to do it. It calls for developing quantitative measurements of risk in information systems. But then, consider this riddle: What's the percentage chance that a programming flaw will be discovered in Windows within the next 30 days that will allow an attacker to get administrative privileges on your system? And do the Linux and OpenBSD operating systems have a higher or lower chance of a similar flaw being discovered?

Many CEOs would like answers to such questions. But with computer systems today, there is no reliable way to measure risk. If we could reliably measure the risk associated with a particular piece of software, we could then estimate how much it would cost to decrease the risk, or, alternatively, how much we could save by accepting it. Banks have been making these kinds of risk-benefit decisions for decades in the realm of physical security.

Infosecurity professionals, on the other hand, have all but given up trying to rate the risk of different systems. Instead, practitioners have developed sets of best practices that they hope will decrease the chances of a computer being compromised. Alas, there are many problems with "best practices." The most obvious is that they don't really tell you how secure you happen to be at the moment; they simply tell you that you are as secure as everybody else who is following the same practices. Likewise, best practices give no metric for making purchasing decisions. That's why reviews comparing antivirus systems or firewalls tend to stress other factors, such as how much the systems cost, how fast they run and how easy they are to manage.
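The risk-benefit arithmetic that banks routinely perform, and that infosecurity lacks the inputs for, is captured by the classic annualized-loss-expectancy formula: expected yearly loss equals the expected number of incidents per year times the loss per incident. A minimal sketch, with all dollar figures and probabilities invented for illustration:

```python
# Annualized loss expectancy (ALE): a standard risk-quantification formula.
# ALE = ARO (expected incidents per year) * SLE (loss per incident).
# Every figure below is invented for illustration.

def ale(annual_rate_of_occurrence, single_loss_expectancy):
    return annual_rate_of_occurrence * single_loss_expectancy

# Suppose a server compromise costs $200,000 and is expected twice a decade.
risk_without_control = ale(0.2, 200_000)   # $40,000 expected loss per year

# A mitigation costing $15,000/year cuts the rate to once in 20 years.
risk_with_control = ale(0.05, 200_000)     # $10,000 expected loss per year
annual_control_cost = 15_000

savings = risk_without_control - (risk_with_control + annual_control_cost)
print(savings)  # 15000.0: on these assumptions, the control pays for itself
```

The arithmetic is trivial; the Grand Challenge is that for software we cannot credibly estimate the inputs, the probability of compromise before and after a given control, which is precisely the measurement problem the paragraph above describes.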
Today, we just don't have good tools for measuring and quantifying the actual differences between various security applications and appliances.

Control Freaks

Our final challenge is to make security easier to use, specifically, to give end users control over their own computers. That is especially important as we move into a world in which each person will have many different computers, all with different capabilities, architectures and security models.

This "secure usability" chestnut is a hard one to crack. After all, for years security experts have been telling everybody else that security and usability are diametrically opposed: If you make a system more secure, you make it harder to use, and vice versa. Making security something that users can understand might mean that we need to fundamentally change the way we think about and work with information systems.

Consider the role of education. It's easy to blame many of the recent Internet worm epidemics on the failure of users to download and install software updates. At the height of the Blaster worm, Microsoft was running full-page advertisements in many newspapers giving people instructions on how to enable XP's built-in Internet Connection Firewall. But this massive educational campaign wouldn't have been needed if Microsoft had instead configured XP to automatically download and install its patches. Are we better off trying to educate users who do not wish to be educated, or should we be automating as many processes as possible, knowing that those automated systems will occasionally make a mistake, and that they, themselves, can be subverted? That's part of the riddle of the fourth Grand Challenge.

Creating these challenges was a useful exercise for the researchers, academics and government employees who attended the workshop. But the real value of this work was putting a signpost into the ground pointing to the direction in which we should be marching.
It's easy to get caught up in the tactical elements of computer security, with all of its encryption algorithms, public-key infrastructures, disk sanitization and other nuts-and-bolts issues. Ultimately, though, we need to start thinking more strategically about computer security, or else we are going to lose this war. Indeed, if we don't get a handle on the spam and worm problems soon, Internet e-mail could become a lost communication medium; metaphorically speaking, it could become the CB radio of the 21st century. We might see individuals and businesses disconnecting their computers from the Internet, deciding that the added benefits of being able to transfer files and download software are simply not worth the extra cost of eternal vigilance and the risk that something might go wrong with their computer systems. That isn't a far-fetched scenario. According to the Pew Internet & American Life Project, millions of people have already given up on e-mail because they don't want the spam associated with it.

Nevertheless, I hope these challenges will be used as a starting point for research projects and for businessfolk who are thinking of starting new companies. There's clearly a lot of work to be done. Let's get started!