(Managing) risky business

Jan 23, 2018 | 4 mins
App Testing | Risk Management | Technology Industry

How to make sound, conflict-free risk management decisions – and usually deliver secure code.

Focus on risk management is a common element of cybersecurity today. To take two examples, my LinkedIn network includes a lot of people with the title of “risk executive,” and government initiatives and policies in the US and EU aim to encourage or mandate risk-based decision-making about security.

It has to be that way – we can’t achieve perfect security, and if we tried we’d have to invest infinite resources. Instead, we try to invest in enough security so that the expected consequences of attacks are acceptable. We expect that the most serious attacks will fail, and the attacks that succeed won’t do much harm.

The challenge of risk management is deciding “how much.” Risk is defined as the product of threat (how hard an adversary will try to attack our system, and how capable the adversary is), vulnerability (how likely it is that there’s a way for the adversary to get in), and consequences (what harm the attack can do if it manages to find and exploit a vulnerability). Unfortunately, we don’t have good ways of measuring any of the factors we need to multiply.
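The multiplication can be sketched in a few lines of Python. The function and the scores below are invented for illustration – in practice, as noted above, none of the three factors can be measured with any precision, which is exactly why the formula is hard to apply.

```python
def risk_score(threat: float, vulnerability: float, consequences: float) -> float:
    """Relative risk as the product of the three factors (0.0 to 1.0 each)."""
    return threat * vulnerability * consequences

# Hypothetical scores: a network-facing server holding sensitive data
# is attractive to attackers and costly to compromise...
server_risk = risk_score(threat=0.9, vulnerability=0.5, consequences=0.8)

# ...while an internal tool with limited access draws less attention
# and does less harm if compromised.
tool_risk = risk_score(threat=0.2, vulnerability=0.5, consequences=0.1)

print(f"server: {server_risk:.3f}, internal tool: {tool_risk:.3f}")
```

The same vulnerability likelihood yields very different risk depending on threat and consequences – which is why identical bugs can warrant different responses in different components.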

Instead of measuring threat, vulnerability, and consequences, we rely on experience and judgment. Government agencies, industry groups, and auditors provide advice or requirements that they believe – or hope – are appropriate to the assets that need to be protected, systems that need to be operated, and experience with threat actors. Sometimes the advice works well and systems operate securely. When the advice is flawed, smart organizations learn from their mistakes and update the guidance they issue. (More on that last point in a future blog.)

The “bug bar”

Software development organizations are performing risk management when they decide what security requirements to impose and what security bugs to treat as “must fix.” When the software security team specifies mandatory training, tools, and processes, they are really applying their experience with threats, vulnerabilities, and perhaps consequences to tell the developers how to achieve an acceptable level of risk at a cost that’s acceptable in terms of time and effort. Meeting the requirements enables the developers to create appropriately secure software without having to be security experts.

Sometimes the development organizations find that it will be costly in resources or schedule to fix a security bug. The bug might have been discovered late in the development cycle or it might be in a part of a system where even a minor change would necessitate a time-consuming test pass. What to do then?

One way to prepare for that situation is for the security team to create a “bug bar” that assigns a severity rating to each likely scenario that can result if a vulnerability isn’t fixed. An Elevation of Privilege vulnerability in a network-facing server component would have a critical severity (think Code Red) while a denial of service vulnerability that would require restarting a single client application might have a low severity. A sample bug bar that the Microsoft SDL team created a few years ago is available.
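At its core, a bug bar is a lookup from a scenario – vulnerability class plus exposure – to a severity rating. The sketch below is a hypothetical illustration in the spirit of the examples above; the scenario names, severities, and default are invented, not taken from Microsoft’s actual SDL bug bar.

```python
# Hypothetical bug bar: maps (vulnerability class, exposure) to a severity.
BUG_BAR = {
    ("elevation_of_privilege", "network_facing_server"): "Critical",
    ("information_disclosure", "network_facing_server"): "Important",
    ("denial_of_service", "single_client_restart"): "Low",
}

def severity(vuln_class: str, exposure: str) -> str:
    """Look up a scenario's severity; unlisted scenarios escalate to triage."""
    return BUG_BAR.get((vuln_class, exposure), "Needs triage")

def must_fix(vuln_class: str, exposure: str) -> bool:
    """Treat Critical and Important bugs as must-fix before release."""
    return severity(vuln_class, exposure) in ("Critical", "Important")

print(must_fix("elevation_of_privilege", "network_facing_server"))  # True
print(must_fix("denial_of_service", "single_client_restart"))       # False
```

The value of writing the bar down in advance is that the severity of a late-breaking bug is decided by the table, not by schedule pressure at the moment the bug is found.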

The bug bar alone provides useful guidance to a development team that is trying to decide how to deal with a late-breaking or high-impact bug, but it also has additional value. A secure development process should include a final security review that confirms that the development team has in fact met the security requirements before release. If the team has done that, the software goes out the door and the team goes off to the release party.

If there are unmet requirements, the bug bar can help guide the management review that decides whether to fix the bug in the next release or delay release and fix it now.

The review should involve a discussion between a development team manager and a software security team manager at a peer level with the development manager (e.g., vice president to vice president).

The product team manager has the ultimate authority and responsibility to accept risk, while the security team manager has the responsibility – and experience and judgment – to ensure that the development manager has a clear understanding of the risk being accepted.

In my experience the combination of sound, secure development requirements, a clear bug bar and (when necessary) a management review between security and development peers leads to sound and conflict-free risk management decisions – and usually to secure code.


Steven B. Lipner is the executive director of SAFECode, a non-profit organization dedicated to increasing trust in information and communications technology products and services through the advancement of effective software assurance methods. As executive director, Lipner serves as an ex officio member of the SAFECode board. In addition to providing strategic and technical leadership, his responsibilities include representing SAFECode to IT user and development organizations, to policymakers, and to the media.

Lipner is a pioneer in cybersecurity with over forty years’ experience as a general manager, engineering manager, and researcher. He retired in 2015 from Microsoft where he was the creator and long-time leader of Microsoft’s Security Development Lifecycle (SDL) team. While at Microsoft, Lipner also created initiatives to encourage industry adoption of secure development practices and the SDL, and served as a member and chair of the SAFECode board.

Lipner joined Microsoft in 1999 and was initially responsible for the Microsoft Security Response Center. In the aftermath of the major computer “worm” incidents of 2001, Lipner and his team formulated the strategy of “security pushes” that enabled Microsoft to make rapid improvements in the security of its software and to change the corporate culture to emphasize product security. The SDL is the product of these improvements.

At Mitretek Systems, Lipner served as the executive agent for the U.S. Government’s Infosec Research Council (IRC). At Trusted Information Systems (TIS), he led the Gauntlet Firewall business unit whose success was the basis for TIS’ 1996 Initial Public Offering. During his eleven years at Digital Equipment Corporation, Lipner led and made technical contributions to the development of numerous security products and to the operational security of Digital’s networks.

Throughout his career, Lipner has been a contributor to government and industry efforts to improve cybersecurity. Lipner was one of the founding members of the U.S. Government Information Security and Privacy Advisory Board and served a total of over ten years in two terms on the board. He has been a member of nine National Research Council committees and is named as coinventor on twelve U.S. patents. He was elected in 2010 to the Information Systems Security Association Hall of Fame, in 2015 to the National Cybersecurity Hall of Fame and in 2017 to the National Academy of Engineering.

The opinions expressed in this blog are those of Steve Lipner and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.