By Steve Lipner, Contributor

Software products aren’t cookies

Opinion
Aug 20, 2018 | 5 mins
Security, Software Development, Vulnerabilities

Understanding the security of third-party components.


When I started working on computer security, organizations that worried about security were concerned about the security of software that they created themselves and shipped to their customers. Today, a lot has changed – many (most?) organizations that deliver software products or services rely heavily on components that other organizations or individuals have created. Many of these “third-party components” are open source software, while some are commercially licensed libraries or subsystems. There are a variety of ways of incorporating third-party components – from copying a source code snippet found on the web, to calling a library, to embedding a complete functional module or product. (“Third party” refers to the provider of the component that’s incorporated – the first party is the supplier of the product and the second party is the customer.)

The security of third-party software components is a serious issue. A security vulnerability in a third-party component can expose a software product or service to attack just as a vulnerability in code your developers have written can. This means that your developers or suppliers who decide to incorporate third-party components need to pay attention to the security of those components, just as they do to the security of developer-written code. Recognizing this fact, SAFECode published a free guide to the secure use of third-party components early last year.

Over the last few months, I’ve been involved in quite a few discussions with customers and policymakers who are worried about software “supply chain” and in particular about the security of third-party components. Some of the discussions have focused on giving customers confidence that their suppliers apply practices such as those documented in the SAFECode guide to manage the third-party code their products include. This is a very reasonable concern for customers to express: a supplier should take responsibility for the product or service delivered, including having a sound and effective approach to managing the security of third-party components.

But some of the discussions have taken a different turn. Recently, I’ve heard a lot of questions about “third-party component transparency” – the notion that if a developer incorporates third-party components, the developer should provide end customers with a complete listing of those components, down to the individual version number of each one. The idea is that if a vulnerability is reported in a third-party component, the customer will be aware of it and can do something in response.

The pressure for component transparency seems to be based on an analogy to food labeling. If one of my kids is allergic to peanuts, I can look at the list of ingredients before I buy a box of cookies and if the list includes peanuts, I’ll buy something else. Easy and effective.

But software products and services aren’t cookies. A product that incorporates a vulnerable component isn’t necessarily affected by a particular vulnerability – it may not expose the vulnerability to external input or call the vulnerable interface (the example I’ve heard is embedding OpenSSL but only to call on its cryptographic random number generator). And users may not be able to do anything effective to protect themselves in any case. They shouldn’t replace the old version of the component with a new one without regression testing to make sure the new one doesn’t cause other problems for the product, and the product developer should already be doing that. They may be able to make a configuration change to mitigate the impact of the vulnerability, but they probably need information about how the product is affected from the developer before they can do that effectively.
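To make the OpenSSL example concrete, here’s a minimal sketch (illustrative only, not any particular vendor’s code) of a product that embeds OpenSSL but calls nothing except its cryptographic random number generator, RAND_bytes(). A vulnerability report against, say, OpenSSL’s TLS handshake code would match this product’s component list, yet the vulnerable interfaces are never reachable:

```c
/* A program that links OpenSSL solely to generate random tokens.
 * Build with: cc example.c -lcrypto
 * RAND_bytes() is the only OpenSSL interface this program touches,
 * so a vulnerability elsewhere in the library (e.g., in TLS code)
 * would appear in the component list but never be exercised.
 */
#include <stdio.h>
#include <openssl/rand.h>

int main(void) {
    unsigned char token[16];

    if (RAND_bytes(token, sizeof token) != 1) {
        fprintf(stderr, "random generation failed\n");
        return 1;
    }

    for (size_t i = 0; i < sizeof token; i++)
        printf("%02x", token[i]);
    printf("\n");
    return 0;
}
```

A component list alone can’t distinguish this program from one that uses OpenSSL for TLS; only the developer’s analysis of how the component is actually used can.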

The common thread is that the product developer is in a position to review the third-party component vulnerability and the product’s use of the component, and then tell customers “we don’t use it that way; nothing to worry about,” or “we’re releasing a patch with an update to the vulnerable component,” or “we’ll be releasing a new version, but in the meantime, here’s a configuration change that will protect you.” If a product developer incorporates a third-party component, he or she should be doing the required analysis and providing customers with that sort of information.

The one thing a customer can do with a list of third-party components is ask the product developer’s support line for information when a vulnerability in a component is discovered. But all those requests for information just create extra noise in the developer’s system and probably don’t get answers to customers any faster. And they may actually distract the developer from the information, testing, and patch development work that helps protect customers – not only from vulnerabilities in third-party components but also from the other problems that a comprehensive secure development process addresses.

So my bottom line is that developers absolutely have to manage their secure use of third-party components. But it’s important to understand the differences between software products and cookies, and to allow developers to provide customers with information they can actually use.

Steve Lipner
Contributor

Steven B. Lipner is the executive director of SAFECode, a non-profit organization dedicated to increasing trust in information and communications technology products and services through the advancement of effective software assurance methods. As executive director, Lipner serves as an ex officio member of the SAFECode board. In addition to providing strategic and technical leadership, his responsibilities include representing SAFECode to IT user and development organizations, to policymakers, and to the media.

Lipner is a pioneer in cybersecurity with over forty years’ experience as a general manager, engineering manager, and researcher. He retired in 2015 from Microsoft where he was the creator and long-time leader of Microsoft’s Security Development Lifecycle (SDL) team. While at Microsoft, Lipner also created initiatives to encourage industry adoption of secure development practices and the SDL, and served as a member and chair of the SAFECode board.

Lipner joined Microsoft in 1999 and was initially responsible for the Microsoft Security Response Center. In the aftermath of the major computer “worm” incidents of 2001, Lipner and his team formulated the strategy of “security pushes” that enabled Microsoft to make rapid improvements in the security of its software and to change the corporate culture to emphasize product security. The SDL is the product of these improvements.

At Mitretek Systems, Lipner served as the executive agent for the U.S. Government’s Infosec Research Council (IRC). At Trusted Information Systems (TIS), he led the Gauntlet Firewall business unit whose success was the basis for TIS’ 1996 Initial Public Offering. During his eleven years at Digital Equipment Corporation, Lipner led and made technical contributions to the development of numerous security products and to the operational security of Digital’s networks.

Throughout his career, Lipner has been a contributor to government and industry efforts to improve cybersecurity. Lipner was one of the founding members of the U.S. Government Information Security and Privacy Advisory Board and served a total of over ten years in two terms on the board. He has been a member of nine National Research Council committees and is named as coinventor on twelve U.S. patents. He was elected in 2010 to the Information Systems Security Association Hall of Fame, in 2015 to the National Cybersecurity Hall of Fame and in 2017 to the National Academy of Engineering.

The opinions expressed in this blog are those of Steve Lipner and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.