



The ethics of creating secure software

Sep 07, 2018 | 5 mins
Enterprise Applications | Security | Software Development

Software permeates every aspect of our lives, making it impossible to avoid. It has transcended a purely technical process and entered the realm of social morality, so its consequences play out on a massive scale across the whole of society. The security of that software is not merely a technical question but a moral one, and companies need to treat it that way.


Software development has shifted from simply a technical process to an exercise of social morality. In the same way crash testing became a mandated part of automotive manufacturing once cars became ubiquitous, security must become a part of the software development life cycle from the beginning.

As with vehicle safety, software security is often an added cost for organizations that have not yet implemented basic security hygiene. Some company leaders may be tempted to ignore the need for security, thus passing the cost and risk of insecure software on to consumers and the internet ecosystem at large. Just as unsafe cars affect more than their owners, insecure software can harm third parties, powering DDoS attacks and giving attackers the ability to use compromised computers to anonymize their activities.

Software developers and security teams wield not just the influence but the responsibility to challenge this temptation. The good news for businesses is that maintaining a high standard of security for their products and services is actually less expensive and more effective when security is integrated earlier in the development life cycle.

There are a few steps these teams can take to use their influence and change the perception of security at their organizations, across all levels.

Repositioning security from a burden to a differentiator

The first step is to treat security as a competitive advantage, instead of a burden that slows down production. This requires a major shift in how many organizations operate and prioritize.

Consumers are becoming increasingly aware of security and wary of companies that break their trust. As a result, security has become make-or-break for many organizations. Many users left Facebook, for example, in the wake of its public interrogation about the mishandling of user data. No longer can organizations rely on user apathy to get away with poor data security practices.

At a minimum, organizations need to make sure that the software they develop adheres to industry standards for security on release and that it is easy to issue patches as new vulnerabilities arise. For organizations new to this practice, the OWASP Top 10 is a good baseline of the most common and dangerous vulnerabilities in software.
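To make the OWASP baseline concrete, consider injection, which has long sat at or near the top of the OWASP Top 10. The sketch below (the table, data, and function names are illustrative, not from the original article) contrasts string-spliced SQL with a parameterized query using Python's standard sqlite3 module:

```python
import sqlite3

# In-memory database for illustration; the table and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    # A name like "' OR '1'='1" turns the WHERE clause into a tautology
    # and returns every row.
    return conn.execute(
        f"SELECT email FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data,
    # so the same payload matches no rows.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)).fetchall()

malicious = "' OR '1'='1"
print(find_user_unsafe(malicious))  # leaks all rows
print(find_user_safe(malicious))    # returns nothing
```

The parameterized form costs nothing extra to write, which is the broader point: baseline hygiene like this is cheap when built in from the start.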

This spike in consumer attention to security shouldn't impact deployment cycles if security is properly implemented. Developers properly trained on software security basics can take ownership of the security of the code they write and make sure it is free of common vulnerabilities before it reaches code review.

By making security a requirement for code quality along with efficiency and effectiveness, developers can change attitudes about security and organizations can gain consumer loyalty.

Encouraging fair vulnerability disclosure

Another step in this philosophical shift is changing how businesses approach vulnerability disclosures. Bug bounty programs present great opportunities for no-fault disclosure, but many organizations are still skeptical of hackers' motivations and understandably wary of asking to be hacked.

The scale of the ongoing battle between cybercriminals and defenders is too big for internal security teams to tackle on their own. And the reality is that most open source components remain unpatched once they are built into software. Organizations need outside help, but to get it, they need to encourage white hat hackers to research without fear of repercussion. Bug bounties often do not go far enough to protect their participants: narrow scopes and restrictive legal disclaimers may prevent hackers from looking as thoroughly as they could, or from reporting everything they find. This may contribute to the volume of known but undisclosed vulnerabilities, which could be in use in a large number of applications.

The industry needs to embrace “coordinated disclosure” as a standard practice, whereby the researcher and the vendor work together on corrective measures and collaborate on disclosing vulnerabilities. If the good guys fear what will happen if they speak up about a vulnerability, they will not share what they find, or worse, never look at all.

Relaxing disclosure restrictions will lead not only to more secure software but to greater information sharing, which builds a more cohesive community of developers, internal security teams and independent security researchers. This kind of community is critical as we work toward the common goal of securing the software that powers our world.

A good place to start is by creating a disclosure portal or at least an email address on the company website so researchers can easily report any vulnerabilities they discover. Once this is in place, organizations need a system to validate, prioritize and remediate any vulnerabilities that come through. In addition, a repository of non-public vulnerabilities outside of the National Vulnerability Database (NVD) may help us keep a more consistent record of vulnerabilities. Developers could view and check this alongside the NVD to ascertain the risk of their open source components.
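A minimal version of such a reporting channel is a security.txt file, the convention later standardized as RFC 9116, served from the site's /.well-known/ path. The sketch below uses placeholder addresses and URLs; only the field names come from the convention:

```text
# Served at https://example.com/.well-known/security.txt
# All addresses and URLs here are illustrative placeholders.
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en
```

A file like this tells researchers exactly where to send a report and under what policy, which removes the guesswork that often discourages disclosure in the first place.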

Software powers financial algorithms, commercial air travel, election systems, our handheld devices and even our cars. Something so critical to our daily lives must be highly regarded and prioritized. Ignoring security problems in software is not only unsafe, but unethical. Business leaders have a moral duty to create secure products that keep data safe, and to encourage the security and developer communities to report significant flaws that could undermine their software.


Chris Wysopal is CTO at Veracode, which he co-founded in 2006. He oversees technology strategy and information security. Prior to Veracode, Chris was vice president of research and development at security consultancy @Stake, which was acquired by Symantec.

In the 1990s, Chris was one of the original vulnerability researchers at The L0pht, a hacker think tank, where he was one of the first to publicize the risks of insecure software. He has testified before the U.S. Congress on the subjects of government security and how vulnerabilities are discovered in software.

Chris holds a bachelor of science degree in computer and systems engineering from Rensselaer Polytechnic Institute. He is the author of The Art of Software Security Testing.

The opinions expressed in this blog are those of Chris Wysopal and do not necessarily represent those of IDG Communications Inc. or its parent, subsidiary or affiliated companies.