By Steve Lipner, Contributor

Conway’s Law: does your organization’s structure make software security even harder?

Opinion
May 07, 2018 | 5 mins
Patch Management Software, Security, Software Development

Why secure development programs succeed in organizations.


These days I find myself in a lot of meetings where folks talk about things like risk management and compliance as well as software security. Those meetings have gotten me thinking about how and why secure development programs succeed in organizations.

When we created the SDL at Microsoft, my team was part of the Windows security feature development organization. Figuring out secure development was one of our roles, and initially the smallest part of the team’s work. But secure development was part of the product engineering organization, so the approach we took – pretty much from Day One – emphasized training, motivating and enabling the engineers who wrote the code to design and develop secure software. We started with training and motivation. Over time, we added more and more enablement in the form of tools and very specific secure development guidance.

What we didn’t do was put a lot of emphasis on after-the-fact compliance or testing. The SDL was mandatory, but our approach, even when we did penetration testing, was to use it early to look for specific design problems. (This was actually adversarial design and code review, although we called it penetration testing.) We also used some penetration testing later in the development cycle to confirm that the developers had followed the process, applied their training, run the tools and fixed any problems the tools reported.

We had security people assigned to work with the development groups, but we made their role primarily providing advice on threat modeling and helping with gnarly problems – not checking on developers. Because the process was mandatory, we wanted to confirm that it had been followed, but we tried to do that with automation integrated with the tools and build systems, so that a single database query would tell us any places where the developers hadn’t followed the process or met the requirements.
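To make that a little more concrete, here’s a minimal sketch of the kind of check I mean, assuming a hypothetical compliance database that the build tools populate with per-component results. The table names, columns and status values are illustrative, not the schema we actually used:

```
# Hypothetical sketch only: a single query against a made-up compliance database
# that the SDL tools and build systems populate with per-component results.
# The schema (components, sdl_results) and the 'met' status are illustrative.
import sqlite3


def find_noncompliant_components(db_path: str) -> list[tuple[str, str]]:
    """Return (component, requirement) pairs where a recorded SDL requirement is unmet."""
    query = """
        SELECT c.name, r.requirement
        FROM components AS c
        JOIN sdl_results AS r ON r.component_id = c.id
        WHERE r.status != 'met'
        ORDER BY c.name;
    """
    with sqlite3.connect(db_path) as conn:
        return conn.execute(query).fetchall()


if __name__ == "__main__":
    for component, requirement in find_noncompliant_components("sdl_compliance.db"):
        print(f"{component}: requirement not met -> {requirement}")
```

The point isn’t the query itself; it’s that the data comes from the tools the developers already run, so nobody has to chase them down after the fact.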

As a result, the developers understood pretty quickly that product security was their job rather than ours. And instead of having 20 or 30 security engineers trying to “inspect (or test) security in” to the code, we had 30,000 or 40,000 software engineers trying to create secure code. It made a big difference.

Conway’s Law

Back to risk management and compliance. Early in my professional career, I came across Conway’s Law. Conway’s Law says “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” It’s normally interpreted to say that the structure of systems is the same as the structure of the organizations that create them. For software development (from Wikipedia): “…the software interface structure of a system will reflect the social boundaries of the organization(s) that produced it, across which communication is more difficult.”

The interaction between development teams isn’t the same as the interaction between a development team and a security team. But thinking about Conway’s Law, I’ve been wondering if software security assurance teams that aren’t part of a development organization might be doomed by the social boundaries of their organization to trying to achieve software assurance with after-the-fact inspection and testing. If you’re part of a compliance or audit or inspection team that’s organizationally separate from development, the natural approach may be to let the developers build the software however they build it, and then check it afterwards to see if it’s secure. That approach conforms to the model of security as an outside compliance function. But from the perspective of secure development, it’s a flawed approach.

Why?

It’s really a tough approach to make work, because it means the developers (and the security team) only find out about security problems after the software is pretty much ready to ship. This approach, at best, makes it difficult and expensive to correct errors and increases pressure to “ship now and accept the risk.” In this model, you say you’ll correct the security bugs in the next release – and hope no vulnerability researcher discovers them, and no bad guy exploits them, in the meantime. Not good for product or customer security. Not good for corporate reputation either. And this situation can be bad for developer morale too. I remember back before we created the SDL when the Seattle Times published big front-page headlines after the discovery of vulnerabilities in a new operating system version. My security team was unhappy, but the development staff had a lot of pride in the company, and you can believe they noticed the headlines too!

I’m not saying that the only way for a software security program to work is for the software security team to be part of the development organization. But I am saying that a successful software security team has to understand the way the development organization works, work cooperatively with the development organization, and focus on enabling them to build secure software as part of their task of building software. This is why I keep coming back to Conway’s Law. A lot of software development is about communications. How the different organizations within a company developing software communicate is a key factor in the successful creation of new secure products.

That focus on enablement implies a commitment to training, tools and guidance for the developers as well as an approach to compliance that relies on artifacts of the development process rather than after-the-fact effort. Especially today, with development teams using agile or DevOps approaches and feeling pressure to ship in hours or days rather than months or years, that’s really the only way software security can work effectively for an organization.
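As a rough illustration of what relying on process artifacts might look like in a modern pipeline, here is a small sketch of a build gate. The artifact names, JSON format and severity rule are assumptions made up for the example, not any particular team’s tooling:

```
# Illustrative sketch only: a build gate that passes or fails based on artifacts the
# development process already produces. The file names, JSON format and severity
# rule are assumptions for the example, not any specific team's SDL tooling.
import json
import sys
from pathlib import Path

REQUIRED_ARTIFACTS = {
    "threat_model.json": "threat model recorded during design",
    "static_analysis.json": "static analysis run as part of the build",
}


def gate(build_dir: str) -> int:
    build = Path(build_dir)
    failures = [desc for name, desc in REQUIRED_ARTIFACTS.items()
                if not (build / name).exists()]

    # If the static analysis report is present, also fail on unresolved
    # high-severity findings (assumed to be a JSON list of finding objects).
    report = build / "static_analysis.json"
    if report.exists():
        findings = json.loads(report.read_text())
        open_high = [f for f in findings
                     if f.get("severity") == "high" and not f.get("fixed")]
        if open_high:
            failures.append(f"{len(open_high)} unresolved high-severity findings")

    for failure in failures:
        print(f"GATE FAILED: {failure}", file=sys.stderr)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "."))
```

A gate like this runs in seconds as part of the build, which is the only kind of compliance check that keeps up with teams shipping in hours or days.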


Steven B. Lipner is the executive director of SAFECode, a non-profit organization dedicated to increasing trust in information and communications technology products and services through the advancement of effective software assurance methods. As executive director, Lipner serves as an ex officio member of the SAFECode board. In addition to providing strategic and technical leadership, his responsibilities include representing SAFECode to IT user and development organizations, to policymakers, and to the media.

Lipner is a pioneer in cybersecurity with over forty years’ experience as a general manager, engineering manager, and researcher. He retired in 2015 from Microsoft where he was the creator and long-time leader of Microsoft’s Security Development Lifecycle (SDL) team. While at Microsoft, Lipner also created initiatives to encourage industry adoption of secure development practices and the SDL, and served as a member and chair of the SAFECode board.

Lipner joined Microsoft in 1999 and was initially responsible for the Microsoft Security Response Center. In the aftermath of the major computer “worm” incidents of 2001, Lipner and his team formulated the strategy of “security pushes” that enabled Microsoft to make rapid improvements in the security of its software and to change the corporate culture to emphasize product security. The SDL is the product of these improvements.

At Mitretek Systems, Lipner served as the executive agent for the U.S. Government’s Infosec Research Council (IRC). At Trusted Information Systems (TIS), he led the Gauntlet Firewall business unit whose success was the basis for TIS’ 1996 Initial Public Offering. During his eleven years at Digital Equipment Corporation, Lipner led and made technical contributions to the development of numerous security products and to the operational security of Digital’s networks.

Throughout his career, Lipner has been a contributor to government and industry efforts to improve cybersecurity. Lipner was one of the founding members of the U.S. Government Information Security and Privacy Advisory Board and served a total of over ten years in two terms on the board. He has been a member of nine National Research Council committees and is named as coinventor on twelve U.S. patents. He was elected in 2010 to the Information Systems Security Association Hall of Fame, in 2015 to the National Cybersecurity Hall of Fame and in 2017 to the National Academy of Engineering.

The opinions expressed in this blog are those of Steve Lipner and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.