by Mark S. Merkow and Lakshmikanth Raghavan, authors of Secure and Resilient Software Development

Software security for developers

Feature
Sep 27, 2010 | 22 mins
Application Security, Developer

Secure software development means consideration in every phase. Here are 9 key software security principles plus practical advice from a developer's point of view.

Software is ubiquitous, even in places you wouldn’t imagine. Software is so seamlessly interwoven into the fabric of modern living that it fades into the background without notice. We interact with software not only on home or office computers, but in our routine everyday activities—as we drive to the office in our cars, as we buy things at the supermarket, as we withdraw cash from an ATM, and even when we listen to music or make a phone call.

Just as software is everywhere, flaws in most of that software are everywhere too. Flaws in software can threaten the security and safety of the very systems on which it runs. The best way to prevent such vulnerabilities is to proactively incorporate security and other non-functional requirements into all phases of the Software Development Lifecycle (SDLC).

We have also written a companion article on software security from the perspective of managing the application development process, looking at valuable concepts from OWASP’s CLASP methodology and tools for building developer awareness.

Security Activities in the SDLC—An Overview

Security begins from within

The only reliable way to ensure that software is built secure and resilient is to integrate a security and resilience mindset and process throughout the entire software development life cycle (SDLC). From the earliest days of software development, studies have shown that the cost of remediating vulnerabilities or design flaws is far lower when they’re caught and fixed during the early requirements and design phases than after the software is launched into production. Therefore, the earlier you integrate security processes into the development life cycle, the cheaper software development becomes in the long haul.

Many of these security processes are often just “common sense” improvements, and any organization can adopt them into its existing environment. There is no one right way to implement these processes—each organization will have to fine-tune and customize them for a specific development and operating environment. These process improvements add more accountability and structure into the system too.

Also see Security Testing of Custom Software Applications, excerpted from the authors’ 2010 book Secure and Resilient Software Development, on CSOonline.com

Figure 1 provides a high-level overview of the fundamental security and resilience processes that should be integrated into the various SDLC phases, from requirements gathering to deployment. Each process yields its own findings, and recommendations are prepared for appropriate changes to design, architecture, source code, use of third-party components, deployment configurations, and other considerations to help you better understand and reduce risk to an acceptable level. Here you will find guidance on practices that you should consider implementing in each phase of the SDLC:

[Figure 1—Security in SDLC]

SDLC Phase Zero – Developer Training

Even though training does not fit directly into any particular SDLC phase, it plays a very important role in improving the overall security and resilience of developed software. Training should be a prerequisite for anyone who has a role anywhere in the software development environment. All developers and other technical members of the software design/development/test teams should undergo security training that explains the responsibilities of their role, establishes the expectations for their part for security and resilience, and provides best practices and guidance for developing high-quality software.

Training is most effective when it is custom-made to focus on the area of expertise/interest of the target audience (developers, QA, project managers, etc.) and includes organization-specific processes, terminology, and practices.

SDLC Phase One: Requirements gathering and analysis

[Security in the Requirements Phase]

The key security and resilience activities during the requirements gathering and analysis phase are intended to map out and document the nonfunctional requirements (NFRs) for the system under development. It is vital to have these ready before the translation of business requirements into technical requirements begins; designers need to understand the constraints they are expected to face and be prepared to answer the call for security and resilience, as well as other NFRs. To be effective, business systems analysts and systems designers should be sure they are very familiar with the environment in which they are operating, by reviewing and maintaining their knowledge about:

  • Organizational security policies and standards
  • Organizational privacy policy (which may have varying requirements in different places)
  • Regulatory requirements (Sarbanes-Oxley, HIPAA, etc.)
  • Other relevant industry standards (PCI DSS, ANSI-X9 for banks, etc.)

NFRs are then mapped against the critical security and resilience goals of:

  • Confidentiality and privacy
  • Integrity
  • Availability
  • Non-repudiation
  • Auditing

Finally, these security requirements are prioritized and documented for subsequent phases.

SDLC Phase Two: Systems design

[Security in the Design Phase]

Threat modeling and design reviews are the two major security and resilience processes that you will encounter during the design phase. There are two classes of vulnerabilities:

  • Design-related vulnerabilities
  • Implementation-related vulnerabilities

While the latter are relatively easy to find, the former are very expensive and time-consuming to locate and fix if they are not detected early enough in the SDLC. Security subject-matter experts should be deeply involved with the project during this phase to ensure that no flaws creep into the design and architecture of the software or the system. Figure 3 illustrates the inputs and deliverables mapped for this phase.

Detailed threat modeling is an excellent way to determine the technical security posture of an application to be developed or under development. It consists of four key steps:

  • Functional decomposition
  • Categorizing threats
  • Ranking threats
  • Mitigation planning

The next activity in this phase is the security design review. A security subject-matter expert, who is not a member of the core development team, usually carries out the design review with the key objective of ensuring that the design is “secure from the start.” These reviews are typically iterative in nature. They start with the high-level design review and then dive deeply into each component or module of the software.

SDLC Phase Three: Development

[Security in the Development Phase]

Activities in the development phase often generate implementation-related vulnerabilities. Static analysis and peer review are two key processes to mitigate or minimize these vulnerabilities.

Static Analysis

Static analysis involves the use of automated tools to discover issues within the source code itself:

  • Bug finding (quality perspective)
  • Style checks
  • Type checks
  • Security vulnerability review

Automated security review tools tend to have a high percentage of false positives, but they are very efficient at catching the low-hanging vulnerabilities that plague most application software (lack of input validation, SQL injection, etc.). Static analysis cannot, however, detect all types of vulnerabilities or security policy violations—that is where manual peer review becomes important.

Peer Review

A peer review process is far more time-consuming than automated analysis, but it is an excellent control mechanism to ensure the quality and security of the code base. Developers review each other’s code and provide feedback to the owners (original coders) of the different modules so they can make appropriate changes to fix the flaws discovered during the review. Developers can accomplish this with or without the use of specialized tools.

Unit Testing

Unit testing is another key process that many organizations fail to perform regularly but is important from a security and resilience perspective. Unit testing helps to prevent bugs and flaws from reaching the testing phase. Developers can validate certain boundary conditions and prevent vulnerabilities such as buffer overflows, integer over- or underflows, etc., within a module or sub-module of an application. See Figure 4 for a diagram of the security activities in the development phase.

SDLC Phase Four: Testing phase

[Security in the Test Phase]

The test phase is critical for discovering vulnerabilities that were not located and fixed earlier. The first step in the test process is to build security test cases; these are often documented as the nonfunctional requirements they relate to are collected and analyzed in earlier SDLC phases. A key input to this process is the systems requirements documentation. The security test team uses all the assumptions and business processes captured there to create security test cases, which testers then use during dynamic analysis of the application. The software is loaded and operated in the test environment and tested against each of the test cases. A specialized penetration testing team is often deployed during this process. These manual security reviews are very effective in discovering business logic flaws in the application.

Dynamic analysis also consists of using automated tools to test for security vulnerabilities. Just like static analysis tools, these tools are also very efficient in ensuring “code complete” scanning coverage and catching high-risk vulnerabilities such as cross-site scripting, SQL injection, etc.

These tests are iterative in nature and result in a list of vulnerabilities that are then ranked for risk and prioritized. The development team then fixes these errors and sends the remediated code back for regression testing.

SDLC Phase Five: Deployment phase

[Security in the Deployment Phase]

The deployment phase is the final phase of the SDLC, when the software is installed and configured in the production environment and made ready for use by its intended audience.

A key part of managing changes is to use a Change Advisory Board (CAB). A CAB offers the multiple perspectives necessary to ensure good decision making. A CAB is an integral part of a defined change management process designed to balance the need for change with the need to minimize inherent risks. For example, the CAB is responsible for oversight of all changes in the production environment. As such, it fields requests from management, customers, users, and IT [1].

During the deployment phase, security subject-matter experts who may or may not be part of the change advisory board perform a final security review to ensure that the security risks identified during all the previous phases have been fixed or have a mitigation plan in place. During this phase, the development team coordinates with the release management and production support teams to create an application security monitoring and response plan. The production support team, in conjunction with the network/security operations center, uses this plan during the operation of the application to manage security incidents and engage the appropriate teams for response and remediation.


Proven best practices for secure software development

Once both the functional and nonfunctional requirements are approved and understood, application security and resilience principles and best practices become essential tools for designing new high-quality software, since there are no universal recipes for high-quality software development. Principles help designers and developers to “do the right things” even when they have incomplete or contradictory information.

Also see ‘Code security: SAFEcode report highlights best practices’ on CSOonline.com

Principle 1: Apply Defense in Depth

The principle of defense in depth emphasizes that security is increased markedly when it is implemented as a series of overlapping layers of controls and countermeasures that provide three elements needed to secure assets: prevention, detection, and response.

Defense in depth, both as a military concept and as implemented in software and hardware, dictates that security mechanisms be layered so that the weaknesses of one mechanism are countered by the strengths of two or more other mechanisms.

Think of a vault as an example. A bank or jewelry store would never entrust its assets to an unguarded safe alone. Most often, access to the safe requires passing through layers of protection that may include human guards, locked doors with special access controls (biometrics such as fingerprints or retinal scans, electronic keys, etc.), or two people working in concert to gain access (dual control). Furthermore, the room where the safe is located may be monitored by closed-circuit television, motion sensors, and alarm systems that can quickly detect unusual activity and respond with the appropriate actions (lock the doors, notify the police, or fill the room with tear gas).

In the software world, defense in depth dictates that you should layer security devices in series that protect, detect, and respond to likely attacks on the systems. The security of each of these security mechanisms must be thoroughly tested before deployment to help gain the needed confidence that the integrated system is suitable for normal operations. After all, a chain is only as good as its weakest link.

For example, it’s a terrible idea to rely solely on a firewall to provide security for an internal-use-only application, since firewalls can be circumvented by a determined and skilled attacker. Other security mechanisms should be added to complement the protection that a firewall affords (intrusion-detection devices, security awareness training for personnel, etc.) to address different attack vectors, including the human factor.

The principle of defense in depth does not relate to a particular control or subset of controls. It is a design principle to guide the selection of controls for an application to ensure its resilience against different forms of attack, and to reduce the probability of a single point of failure in the security of the system.

Principle 2: Use a positive security model (Whitelisting)

The positive security model, often called whitelisting, defines what is allowable and rejects everything that fails to meet the criteria. This positive model should be contrasted with a “negative” (or “blacklist”) security model, which defines what is disallowed, while implicitly allowing everything else.

One of the more common mistakes in application software development is the urge to “enumerate badness” by using a blacklist. Antivirus (AV) programs work this way: signatures of known bad code (malware) are collected and maintained by AV vendors and redistributed whenever there’s an update (which is rather often). This can cause massive disruption of operations and personnel while signature files are updated and rescans of the system are run to detect anything that matches a new signature.

Whitelisting, on the other hand, focuses effort on “enumerating goodness,” which is a far easier and more achievable task. Programmers can employ a finite list of the values a variable may contain and reject anything that fails to appear on the list. For example, a common vulnerability in Web applications is a failure to check for executable code or HTML tags when input is entered into a form field. If only alphabetic and numeric characters are expected in a field on the form, the programmer can write code that cycles through the input character by character to determine whether only letters and numbers are present. If there’s any input other than numbers and letters, the program should reject the input and force a reentry of the data.
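
The character-by-character check described above can be sketched in Java. The class and method names here are illustrative, and a real form handler would tailor the allowed character set to each field:

```java
// Hypothetical sketch of a character-by-character whitelist check:
// accept only letters and digits, reject everything else.
public class WhitelistValidator {

    /** Returns true only if every character is an ASCII letter or digit. */
    public static boolean isAlphanumeric(String input) {
        if (input == null || input.isEmpty()) {
            return false; // empty input fails the positive model
        }
        for (char c : input.toCharArray()) {
            // Restrict to the ASCII ranges the form expects, rather than
            // Character.isLetterOrDigit, which also accepts Unicode letters.
            boolean letter = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z');
            boolean digit = (c >= '0' && c <= '9');
            if (!letter && !digit) {
                return false; // anything off the whitelist forces reentry
            }
        }
        return true;
    }
}
```

Note that the whitelist defines what is allowed; everything else, including characters the programmer never anticipated, is rejected automatically.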

Principle 3: Fail securely

Handling errors securely is a key aspect of secure and resilient applications. Two major types of errors require special attention:

  • Exceptions that occur in the processing of a security control itself
  • Exceptions in code that are not “security-relevant”

It is important that these exceptions do not enable behavior that a software countermeasure would normally not allow. As a developer, you should consider that there are generally three possible outcomes from a security mechanism:

  • Allow the operation
  • Disallow the operation
  • Exception

In general, you should design your security mechanism so that a failure will follow the same execution path as disallowing the operation. For example, security methods such as “isAuthorized” or “isAuthenticated” should all return false if there is an exception during processing. If security controls can throw exceptions, they must be very clear about exactly what that condition means.
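
A minimal Java sketch of this fail-closed pattern, assuming a hypothetical permission store that may throw at runtime:

```java
// Illustrative only: an authorization check that fails securely. Any
// exception during processing follows the same path as "disallow".
public class AccessDecision {

    /** Hypothetical permission lookup that may throw at runtime. */
    interface PermissionStore {
        boolean hasPermission(String user, String resource) throws Exception;
    }

    private final PermissionStore store;

    public AccessDecision(PermissionStore store) {
        this.store = store;
    }

    public boolean isAuthorized(String user, String resource) {
        try {
            return store.hasPermission(user, resource);
        } catch (Exception e) {
            // Fail securely: an exception is treated exactly like a
            // denial, never like an approval. A real system would also
            // log the exception for investigation.
            return false;
        }
    }
}
```

The key design choice is that the exception path and the "disallow" path produce the identical result, so a crashed security control can never grant access.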

Principle 4: Run with least privilege

The principle of least privilege recommends that user accounts have the least amount of privilege required to perform their basic business processes. This encompasses user rights and resource permissions such as:

  • CPU limits
  • Memory
  • Network permissions
  • File system permissions

The principle of least privilege is widely recognized as an important design consideration in enhancing the protection of data and functionality from faults (i.e., fault tolerance) and malicious behavior (i.e., computer security).

The principle of least privilege is also known as the principle of least authority (POLA).

Principle 5: Avoid security by obscurity

Security by obscurity, as its name implies, describes an attempt to maintain the security of a system or application based on the difficulty in finding or understanding the security mechanisms within it. Security by obscurity relies on the secrecy of the implementation of a system or controls to keep it secure. It is considered a weak security control, and it nearly always fails when it is the only control.

A system that relies on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them. The technique stands in contrast with security by design.

An example of security by obscurity is a cryptographic system in which the developers wish to keep the algorithm that implements the cryptographic functions a secret rather than keeping the keys a secret and publishing the algorithm so that security researchers can determine if it is bullet-proof enough for common security uses.

Principle 6: Detect intrusions

Detecting intrusions in application software requires three elements:

  • Capability to log security-relevant events
  • Procedures to ensure that logs are monitored regularly
  • Procedures to respond properly to an intrusion once it has been detected
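
The first element, logging security-relevant events, might look like the following sketch using java.util.logging; the event names and log-line format are assumptions, not a prescribed standard:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// A minimal sketch of logging security-relevant events in a consistent,
// machine-parseable form so that monitoring procedures can pick them up.
public class SecurityEventLog {

    private static final Logger LOG = Logger.getLogger("security.audit");

    /** Formats one security event as a single parseable log line. */
    public static String format(String event, String user, String detail) {
        return String.format("SECURITY event=%s user=%s detail=%s",
                event, user, detail);
    }

    public static void record(String event, String user, String detail) {
        // WARNING level keeps security events visible even when the
        // application otherwise logs at a coarse level.
        LOG.log(Level.WARNING, format(event, user, detail));
    }
}
```

A consistent format is what makes the second element, regular monitoring, practical: log-watching tools can match on the fixed `SECURITY event=` prefix.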

Principle 7: Don’t trust infrastructure

You’ll never know exactly what hardware or operating environment your applications will run on. Relying on a security process or function that may or may not be present is a sure way to have security problems. Make sure that your application’s security requirements are explicitly provided through application code or through explicit invocation of reusable security functions made available to application developers across the enterprise.

Principle 8: Don’t trust services

Services can refer to any external system. Many organizations use the processing capabilities of third-party partners who likely have different security policies and postures, and it’s unlikely that you can influence or control any external third parties, whether they are home users or major suppliers or partners. Therefore, implicit trust of externally run systems is not warranted. All external systems should be treated in a similar fashion.

For example, suppose a loyalty program provider supplies data used by Internet banking: the number of reward points and a small list of potential redemption items. Within your program that obtains this data, you should check the results to ensure that they are safe to display to end users (containing no malicious code or actions), and that the reward points are a positive number and not improbably large (data reasonableness).
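
A reasonableness check like the one described could be sketched as follows; the cap on plausible points is an invented threshold for illustration:

```java
// Hypothetical reasonableness check on data returned by an external
// loyalty-points service: points must be non-negative and below a
// sanity cap before being shown to the user. The cap is an assumption.
public class LoyaltyDataCheck {

    private static final long MAX_PLAUSIBLE_POINTS = 10_000_000L;

    /** Rejects negative or implausibly large point balances. */
    public static boolean reasonablePoints(long points) {
        return points >= 0 && points <= MAX_PLAUSIBLE_POINTS;
    }
}
```

If the check fails, the application should fall back to a safe default (for example, hiding the balance) rather than displaying the external system's raw output.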

Principle 9: Establish secure defaults

Every application should be delivered secure by default out of the box! You should leave it up to users to decide whether to reduce their security, if your application allows it. Secure by default means that the default configuration settings are the most secure settings possible—not necessarily the most user-friendly. For example, password aging and complexity should be enabled by default. Users may be allowed to turn these two features off to simplify their use of the application, accepting the increased risk based on their own risk analysis and policies, but the application should never force them into an insecure state by default.


Programming Best Practices

Beyond the principles above, programmers have a special duty to ensure that the design specifications are implemented using Defensive Programming, which, like defensive driving, is intended to insulate an application from negligent or willfully damaging activity while it’s in use.

Input validation and handling

[Figure: Input Validation Approaches]

Improper input handling is one of the most common weaknesses identified across applications today. Poorly handled input is a leading cause of critical vulnerabilities that exist in systems and applications.

“Validation can include checks for type safety (integer, floating point, text, etc.) and syntax correctness. String input should be checked for length (minimum and maximum number of characters) and “character set” validation, while numeric input types such as integers and decimals can be validated against acceptable upper and lower bound of values. When combining input from multiple sources, validation should be performed on the concatenated result and not against the individual data elements alone. This practice helps avoid situations in which input validation may succeed when performed on individual data items but fail when done on a concatenated string from all the sources.” [2]

There are several techniques for validating input data, illustrated in the Input Validation Approaches figure. Each offers a different level of security, with the better ones following the practice of using a positive security model.
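
The validation checks quoted above (type safety, length, numeric bounds, and validating concatenated results) can be sketched as hypothetical helpers:

```java
// Illustrative validation helpers; names and limits are examples only.
public class FieldValidator {

    /** String length check: minimum and maximum number of characters. */
    public static boolean validLength(String s, int min, int max) {
        return s != null && s.length() >= min && s.length() <= max;
    }

    /** Numeric bound check after a type-safety parse. */
    public static boolean validInt(String s, int lower, int upper) {
        try {
            int value = Integer.parseInt(s); // type safety
            return value >= lower && value <= upper; // acceptable bounds
        } catch (NumberFormatException e) {
            return false; // fail closed on malformed input
        }
    }

    /**
     * When combining input from multiple sources, validate the
     * concatenated result, not just the individual pieces.
     */
    public static boolean validCombined(String first, String last, int max) {
        String combined = first + " " + last;
        return validLength(first, 1, max)
                && validLength(last, 1, max)
                && validLength(combined, 1, max);
    }
}
```

The last method illustrates the point from the quotation: two fields can each pass their own length check yet produce a combined value that exceeds the limit.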

Avoiding Cross-Site Scripting attacks

In cross-site scripting (XSS), the attacker attempts to inject client-side script code on the browser of another user of the application. The injected code submitted will pass through the application and be delivered to the victim user.

The following techniques are used in conjunction with one another to protect an application from XSS attacks:

  • Output filtering

    —Encode fields to escape HTML in output.

    —Most languages provide functions for HTML encoding.

    —Example of HTML entities:

    The “>” character is encoded to &gt; or &#62;

    —Force a “charset” encoding in the HTTP response:

    Content-Type: text/html; charset=[encoding]

    <meta http-equiv="Content-Type" (…) charset=[encoding] />

  • Cookie security—Enable the following cookie flags:

    HttpOnly

    Secure
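
Most languages and frameworks already provide HTML-encoding functions (the OWASP Java Encoder project is one example). Purely to illustrate the idea, a minimal encoder might look like this:

```java
// Minimal illustrative HTML encoder; production code should prefer a
// maintained library rather than a hand-rolled version like this one.
public class HtmlEncoder {

    /** Escapes the characters most often abused in XSS payloads. */
    public static String encode(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

Applied to every field before it is written into the page, this prevents injected markup from being interpreted by the victim's browser.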

Preventing injection attacks

There are several types of injection attacks: SQL injection, LDAP injection, mail command injection, null byte injection, SSI injection, XPath injection, XML injection, XQuery injection, etc. Here we will examine the techniques to prevent the most pernicious of all content injection attacks—SQL injection.

  • Validate all input parameters accepted by the application.
  • Use a secure way to create SQL queries—”PreparedStatement” or “CallableStatement.”
  • Parameterized queries are not vulnerable to SQL injection attacks even in the absence of input validation.

    —They automatically limit the scope of user input to data, and the input can never be interpreted as part of the SQL query itself.

    —They can perform data type checking on parameter values that are passed to the query object.
  • If you are not using parameterized queries, consider filtering all potentially dangerous characters:

    —Single Quotes

    —Pattern matching characters in LIKE clauses (%,?,[,_)
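
The PreparedStatement approach above can be sketched against a hypothetical accounts table; the unsafeQuery method is included only to show, by contrast, how string concatenation lets input become part of the SQL text:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of a parameterized query. The table and column names are
// hypothetical; the point is that user input is bound as a parameter
// and can never be interpreted as part of the SQL statement itself.
public class AccountDao {

    public static boolean accountExists(Connection conn, String username)
            throws SQLException {
        String sql = "SELECT 1 FROM accounts WHERE username = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, username); // bound as data, not SQL text
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }

    /** For contrast only: naive concatenation lets input become SQL. */
    static String unsafeQuery(String username) {
        return "SELECT 1 FROM accounts WHERE username = '" + username + "'";
    }
}
```

With the parameterized version, a payload such as `x' OR '1'='1` is simply matched as a literal username; with the concatenated version, the same payload rewrites the query's logic.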

Authentication and Session Management

There are three well-accepted methods for identifying an individual to a computer system. You can use:

  • Something you know—your password
  • Something you have—a security token device or digital certificate
  • Something you are—your fingerprint or retina scan

Applications that handle very sensitive data should consider using more than one authentication method (“multifactor authentication”)—for example, requiring a security token and a password or PIN (commonly used in corporate VPN connections from remote sites).

Establishing the user’s identity is key for enforcing privileges and access controls. At various points, the application will require the user to provide some proof of identity:

  • Log-in
  • Password reset
  • Before performing sensitive transactions

An attacker can target each of these in different ways in an attempt to impersonate a legitimate application user. The attacker wants to gain access to the data that a user can access while using the application.

Defensive techniques to counter attacks on log-in functions include:

  • Develop generic “failed log-in” messages that do not indicate whether the username or the password was incorrect.
  • Enforce account lock-out after a predetermined number of failed log-in attempts.
  • Account lock-out should trigger a notification sent to appropriate personnel and should require manual reset (via the Help Desk).
  • Implement server-side enforcement of password syntax and strength (length, character complexity requirements, etc.).
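
Two of these defenses, the generic failure message and attempt-based lock-out, can be sketched as follows. The threshold and in-memory storage are simplifying assumptions; production code would persist counters, notify appropriate personnel, and require a manual reset:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a failure message that never reveals which
// credential was wrong, plus lock-out after a fixed number of attempts.
public class LoginGuard {

    public static final String GENERIC_FAILURE =
            "Invalid username or password."; // reveals neither field
    private static final int MAX_ATTEMPTS = 5; // assumed policy value

    private final Map<String, Integer> failures = new HashMap<>();

    public boolean isLockedOut(String username) {
        return failures.getOrDefault(username, 0) >= MAX_ATTEMPTS;
    }

    /** Call on every failed log-in; returns the message to display. */
    public String recordFailure(String username) {
        failures.merge(username, 1, Integer::sum);
        // Same message whether the username or the password was wrong,
        // so an attacker cannot enumerate valid account names.
        return GENERIC_FAILURE;
    }
}
```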

Defenses to counter password reset attacks include:

  • Consider requiring manual password reset. Automated password reset mechanisms can greatly reduce administrative overhead, but they are susceptible to being used for an attack.
  • Require users to answer an open-ended security question to initiate a password reset.
  • Consider using multiple security questions instead of just one.
  • Generate a strong and unique new password once the reset has been performed, or allow the user to choose one based on the complexity requirements.
  • Once a password has been reset, force users to change it at their next log-in before they can access any other function.

Access Control

Access control authorization is the process whereby a system determines whether a specific user has access rights to a particular resource. To decide whether a specific user has or does not have access to a resource, the application needs to know the identity of the user. Many applications use an “all or nothing” approach, meaning that once they are authenticated, all users have equal privilege rights. There are several strategies to implement access privileges and permissions. A common method is to define roles, assign permissions to the roles, and place users in those roles.

  • Implement role-based access control to assign permissions to application users.
  • Perform consistent authorization checking routines on all application pages. (If possible, this should be defined in one location and called or included on each page.)
  • Where applicable, apply DENY privileges last, and issue ALLOW privileges on a case-by-case basis.
  • Never rely on security through obscurity—assume that attackers will be able to guess secret details.
  • Log all failed access authorization requests to a secure location and make sure that these logs are reviewed regularly.
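
A minimal sketch of role-based checking through a single routine, with invented role and permission names:

```java
import java.util.Map;
import java.util.Set;

// Minimal role-based access control: permissions attach to roles, users
// are placed in roles, and every check goes through one routine.
public class RbacChecker {

    private final Map<String, Set<String>> rolePermissions;
    private final Map<String, String> userRoles;

    public RbacChecker(Map<String, Set<String>> rolePermissions,
                       Map<String, String> userRoles) {
        this.rolePermissions = rolePermissions;
        this.userRoles = userRoles;
    }

    /** Single authorization routine called from every page. */
    public boolean isAllowed(String user, String permission) {
        String role = userRoles.get(user);
        if (role == null) {
            return false; // unknown users are denied by default
        }
        Set<String> perms = rolePermissions.get(role);
        return perms != null && perms.contains(permission);
    }
}
```

Centralizing the check in one routine makes the "consistent authorization checking on all pages" practice feasible: pages call isAllowed rather than each re-implementing the logic.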

Error Handling

The goal of error handling and messages is to provide feedback when something unusual happens. Error messages appear as two types:

  • User error messages

    —Provide feedback to users

    —Help the user interact with the application properly

    —Cover business logic errors and interaction errors

  • Developer error messages

    —Provide feedback to developers and administrators

    —Help the developers detect, debug, and correct bugs

    —Include technical details, logs, and status messages
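
Keeping the two message types separate might look like the following sketch: the user receives only generic text, while the technical detail goes to the developer log. The class name and message wording are illustrative:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of separating user error messages from developer error
// messages: generic feedback to the user, full detail to the log.
public class ErrorReporter {

    private static final Logger LOG = Logger.getLogger("app.errors");
    public static final String USER_MESSAGE =
            "Something went wrong. Please try again later.";

    /** Returns the safe user-facing text; logs the technical detail. */
    public static String report(Exception e) {
        // Stack traces, SQL state, file paths, etc. belong in the log,
        // never in the response sent to the user.
        LOG.log(Level.SEVERE, "Unhandled error", e);
        return USER_MESSAGE;
    }
}
```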

Conclusion

Software security is one of those legacy problems that will not be solved overnight. It requires your active diligence, vigorous participation, ongoing awareness and evangelism, continuing education, and determination to make any dent in the problems.

By addressing all the phases of the Software Development Life Cycle with the principles of secure and resilient software, you’re well on your way to improving the overall state of software security, and you become a model for your peers to emulate, further improving the situation for your organization, your community, and your business sector. Together we can work to solve these problems, learn from one another, and help each other put an end to the problems that have plagued information technology from the very beginning.

References

  1. Spafford, G., “The Importance of Change Advisory Boards,” Datamation, March 10, 2004, http://itmanagement.earthweb.com/cio/article.php/3323101, retrieved Sep. 26, 2009.
  2. CWE-20: Improper Input Validation, MITRE Common Weakness Enumeration, http://cwe.mitre.org/data/definitions/20.html, retrieved Dec. 5, 2009.