In this chapter, the authors detail the code review process, key features of automated source code scanning tools, and more.
In Chapters 6 and 7 we examined specific techniques and approaches to developing resilient software for a variety of platforms and specialized applications with a focus on preventing the most common errors and problems that lead to security incidents and data losses.
In Chapter 8 we'll begin exploring how to test the resilience of custom application code and find ways to further improve it. Topics covered in Chapter 8 include:
- The true costs of waiting to find and eradicate software flaws
- Manual and automated source code review techniques
- Implementing code analysis tools
- Penetration testing
- Black box testing
- Quality assurance testing
8.1 Fixing Early Versus Fixing After Release
A study by Gartner, IBM, and The National Institute of Standards and Technology (NIST) revealed that "the cost of removing an application security vulnerability during the design/development phase ranges from 30-60 times less than if removed during production."1 The key objective of integrating security processes with the software development life cycle (SDLC) is to ensure that we detect and fix security vulnerabilities early.
Many organizations simply do not know the costs of finding and fixing software defects, because they do not track or measure that work. If they did, they might be shocked to learn the real costs of developing software. There are direct and indirect costs to finding and fixing security bugs. If a vulnerability is found and exploited in a production application, the brand damage that results cannot be easily measured or repaired.
There are direct costs that we can certainly measure. One of the easiest to measure is the average cost to code a fix:
Average cost to code a fix = (number of developer man-days × cost per man-day) ÷ number of defects fixed
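The formula can be illustrated with a quick calculation. All of the figures below are hypothetical, chosen only to show how the formula behaves:

```python
# Hypothetical illustration of the average cost-to-fix formula above.
# All figures are assumptions for the example, not taken from any study.
developer_man_days = 40   # total developer effort spent coding fixes
cost_per_man_day = 800    # fully loaded daily rate, in dollars
defects_fixed = 16        # number of defects closed in that effort

average_cost_per_fix = (developer_man_days * cost_per_man_day) / defects_fixed
print(average_cost_per_fix)  # 2000.0 dollars per defect
```

Note that this direct cost excludes the system test, implementation, and postproduction costs listed next, which can dwarf the coding cost itself.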
Apart from this cost, there are additional costs we need to consider:
- System test costs
- Implementation costs
- System costs
- Postproduction costs
- Other costs, such as project management, documentation, downtime costs, etc.
These costs can skyrocket when a mission-critical or high-profile application is involved and changes to it must not interfere with, or even be visible to, customers using the application over the Internet—e.g., an e-banking site.
Therefore, it is far more sensible for enterprises to find and fix application software defects before they are released into the production environment. While threat modeling and design and architecture reviews can help to assure that there are no high-level defects at the design level, security testing ensures that there are no defects when implementing that secure design.
There are several techniques for conducting thorough security testing of an application. They range from simple developer-driven unit tests to highly focused penetration testing by a specialized team of security experts.
8.2 Testing Phases
Typical software development testing occurs in multiple iterative phases, with the completion of one signaling the beginning of the next. Each of the following phases has room for security and resilience testing activities, as described in the sections below:
- Unit testing
- Integration testing
- Quality assurance testing
- User acceptance testing
8.3 Unit Testing
Developers drive and conduct unit tests on the code that they write and own. Unit testing is a best practice from an overall code quality perspective and has some security advantages. Unit testing helps prevent defects from finding their way into the larger testing phases. Since developers understand their own code better than anyone else does, even simple unit testing helps ensure that the tests are effective.
Developers need to make sure that they document what they test, since it is very easy to miss a test that is performed by hand. Some of the key issues a developer can find in unit testing are:
- Boundary conditions
- Integer over/underflows
- Path length (URL, file)
- Buffer overflows
- Arithmetic in custom memory management routines (when writing code in the C language and coding your own memory management, all arithmetic pertaining to those routines should be tested as well)
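The boundary-condition and overflow checks above can be expressed as ordinary unit tests. The following sketch uses a hypothetical `allocate_buffer` function (the name and the `MAX_BUFFER` limit are assumptions for illustration) to show how a developer might probe the edges of a size parameter:

```python
import unittest

# Hypothetical function under test: returns a zero-filled buffer of the
# requested size, rejecting sizes outside a sane range. The name and the
# limit are invented for this sketch, not taken from the text.
MAX_BUFFER = 65536

def allocate_buffer(size: int) -> bytearray:
    if not 0 < size <= MAX_BUFFER:
        raise ValueError("size out of range")
    return bytearray(size)

class BoundaryTests(unittest.TestCase):
    def test_lower_boundary(self):
        self.assertEqual(len(allocate_buffer(1)), 1)

    def test_upper_boundary(self):
        self.assertEqual(len(allocate_buffer(MAX_BUFFER)), MAX_BUFFER)

    def test_zero_rejected(self):
        with self.assertRaises(ValueError):
            allocate_buffer(0)

    def test_negative_rejected(self):
        # In C, a negative size cast to unsigned could wrap to a huge
        # value; here we simply assert that the guard fires.
        with self.assertRaises(ValueError):
            allocate_buffer(-1)

if __name__ == "__main__":
    unittest.main()
```

Documenting tests this way (rather than running them by hand) also satisfies the record-keeping point made above.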
Developers can also conduct direct security testing using fuzzing techniques. Fuzzing, in simplest terms, is sending random data to the application program interfaces (APIs) that the program relies on and determining whether, when, and how it might break the software. Fuzzing is usually done in several iterations (100,000+) and can be made smarter by doing targeted variations in key parts of data structures (length fields, etc.). Fuzzing is a shockingly effective test that most developers could use. It is one of the cheapest, fastest, and most effective ways to identify security bugs, even in organizations that have mature SDLC security and resilience processes.
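A minimal fuzzer can be only a few lines. The sketch below targets a toy parser (the parser and its record format are invented for illustration); a real fuzzing run would aim at the program's actual APIs, run far more iterations, and mutate key fields such as lengths, as the text suggests:

```python
import random

# Toy parser used as a fuzz target: a 1-byte length field followed by a
# payload. Both the format and the function are assumptions for this sketch.
def parse_record(data: bytes) -> tuple:
    if len(data) < 1:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return (length, payload)

def fuzz(iterations: int = 1000, seed: int = 0) -> int:
    """Feed random byte blobs to the target; count unexpected failures."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_record(blob)
        except ValueError:
            pass            # expected, handled error path
        except Exception:
            crashes += 1    # unexpected failure worth investigating
    return crashes

if __name__ == "__main__":
    print(fuzz())
```

Any count above zero signals an input-handling defect that the developer should reproduce and fix before the code reaches larger testing phases.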
8.4 Manual Source Code Review
Manual source code reviews can commence when there is sufficient code from the development process to review. The scope of a source code review is usually limited to finding code-level problems that could potentially result in security vulnerabilities. Code reviews are not used to reveal:
- Problems related to business requirements that cannot be implemented securely
- Issues with the selection of a particular technology for the application
- Design issues that might result in vulnerabilities
Source code reviews typically do not worry about the exploitability of vulnerabilities. Findings from the review are treated just like any other defects found by other methods, and they are handled in the same ways. Code reviews are also useful for non-security findings that can affect the overall code quality. Code reviews typically result in the identification of not only security problems but also dead code, redundant code, unnecessary complexity, or any other violation of the best practices that we detailed in Chapter 4. Each of the findings carries its own priority, which is typically defined in the organization's "bug priority matrix." Bug reports often contain a specific remediation recommendation by the reviewer so that the developer can fix it appropriately.
Manual code reviews are expensive because they require significant manual effort and often involve security specialists to assist in the review. However, manual reviews have proven their value repeatedly when it comes to accuracy and quality. They also help identify logic vulnerabilities that typically cannot be identified by automated static code analyzers.
Source code reviews are often called "white box" analysis. This is because the reviewer has complete internal knowledge of the design, threat models, and other system documentation for the application. "Black box" analysis, on the other hand, is performed from an outsider's view of the application with no access to specifications or knowledge of the application's inner workings. "Gray box" analysis is somewhere in between white box and black box analysis, as you will see later in this chapter.
8.5 The Code Review Process
The code review process begins with the project management team and the development team making sure that there is enough time and budget allocated in the SDLC to perform these reviews. Tools that are helpful in performing these reviews should be made available to all developers and reviewers.
The code review process consists of four high-level steps as illustrated in Figure 8.1.
The first step in the code review process is to understand what the application does (its business purpose), its internal design, and the threat models prepared for the application. This understanding greatly helps in identifying the critical components of the code and assigning priorities to them. The reality is that there is not enough time to review every single line of code in the entire application every time. Therefore, it is vital to understand the most critical components and ensure that they are reviewed completely.
Figure 8.1 Code Review Process
The second step is to begin reviewing the identified critical components based on their priority. This review can be done by a different team of developers who were not originally involved in the application's development or by a team of security experts. Another approach is to use the same team of developers who built the application to perform peer reviews of each other's code. Regardless of how code reviews are accomplished, it is vital that the review cover the most critical components and that both developers and security experts have a chance to see them. All the identified defects should be documented using the enterprise's defect management tool and assigned the appropriate priority. The reviewers must document these defects along with their recommended fix approaches to make sure they do not creep into final production code.
The third step of a code review is to coordinate with the application code owners and help them to implement the fixes for the problems revealed in the review. These may involve the integration of an existing, reusable security component available to developers (e.g., the ESAPI framework as described in Chapter 6), or it may require simple to complex code changes and subsequent reviews.
The final step is to study the lessons learned during the review cycle and identify areas for improvements. This makes sure the next code review cycle is more effective and efficient.
Some of the critical components that require a deep-dive review and analysis are:
- User authentication and authorization
- Data protection routines
- Code that receives and handles data from untrusted sources
- Data validation routines
- Code involved in handling error conditions
- Usage of operating system resources and networks
- Low-level infrastructure code (which does its own memory management)
- Embedded software components
- Usage of problematic/deprecated APIs
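To make the list concrete, here is a hypothetical before/after pair of the kind a reviewer might record against code that handles data from an untrusted source. The function names, the regular expression, and the fix are all invented for illustration:

```python
import re
import subprocess

# Flagged in review: builds a shell command line from untrusted input,
# allowing command injection (e.g., host = "evil.com; rm -rf /").
def ping_host_unsafe(host: str) -> int:
    return subprocess.call("ping -c 1 " + host, shell=True)

# Recommended fix in the bug report: validate the input against a strict
# pattern and avoid the shell entirely by passing an argument list.
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

def ping_host(host: str) -> int:
    if not HOSTNAME_RE.match(host):
        raise ValueError("invalid hostname")
    return subprocess.call(["ping", "-c", "1", host])
```

A finding like this would carry a priority from the organization's bug priority matrix along with the recommended remediation, as described above.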
Since manual analysis is time-consuming and expensive, enterprises should also implement automated source code analysis tools to complement, but not replace, manual reviews.
8.6 Automated Source Code Analysis
Medium-to-large enterprises cannot afford to complete a manual code review on every single application every single time. Instead, many rely on automated source code analyzers to help.
Typical software development priorities are schedule, cost, features, and then quality—in most cases, in that order. The pressure from a time-to-market perspective can negatively affect software quality and resilience and sometimes causes the postponement of adding features to the software.
As Philip Crosby said, "Quality is free," and this is most true of the software development process. However, managers in organizations that do software development often believe otherwise: They appear to think that a focus on software quality increases costs and delays projects. Studies of software quality (not necessarily software security) have consistently proven this belief wrong. Organizations with a mature SDLC process usually face little extra overhead because of software quality and resilience requirements, and the corresponding cost savings from process improvements far exceed the cost of added developer activities.
Static source code analyzers support the secure development of programs in an organization by finding and listing the potential security bugs in the code base. They provide a wide variety of views, reports, and trends on the security posture of the code base and can be used as an effective mechanism to collect metrics that indicate the progress and maturity of the software security activities. Source code analyzers can complete in hours an analysis that would take several thousand man-hours if done manually. Automated tools also provide risk rankings for each vulnerability, which helps the organization to prioritize its remediation strategies.
Most important, automated code analyzers help an organization uncover defects earlier in the SDLC, enabling the kinds of cost and reputation savings we discussed earlier in this chapter.
8.6.1 Automated Reviews Compared with Manual Reviews
Although automated source code analyzers perform at low incremental cost, catch the typical low-hanging fruit, scale to many thousands of lines of code, and handle repetitive tasks quickly, they also have a few drawbacks.
Automated tools tend to report a high number of false positives. It can take an organization several months to fine-tune a tool to reduce these false positives, and some level of noise will always remain in the findings. Source code analyzers are also poor at detecting business logic flaws. Other types of issues that automated analysis cannot detect include complex information leakage, design flaws, subjective vulnerabilities such as cross-site request forgery, sophisticated race conditions, and multistep-process attacks.
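To see why business logic flaws evade scanners, consider this hypothetical example (the account model and function names are invented for illustration). Every line is type-safe and free of tainted data, so no injection or overflow rule fires, yet the authorization check tests the wrong account:

```python
class Account:
    def __init__(self, owner: str, balance: float):
        self.owner = owner
        self.balance = balance

def transfer(session_user: str, source: Account, dest: Account, amount: float):
    # Logic flaw: verifies that the user owns the *destination* account,
    # so any user can drain accounts they do not own. A static analyzer
    # sees a well-formed permission check and raises no finding.
    if dest.owner != session_user:
        raise PermissionError("not your account")
    source.balance -= amount
    dest.balance += amount

def transfer_fixed(session_user: str, source: Account, dest: Account, amount: float):
    # Correct check: the session user must own the *source* account,
    # and the amount must be positive and covered by the balance.
    if source.owner != session_user:
        raise PermissionError("not your account")
    if amount <= 0 or amount > source.balance:
        raise ValueError("invalid amount")
    source.balance -= amount
    dest.balance += amount
```

Only a reviewer who understands the business rule ("you may move money out of your own account") can spot the inverted check, which is why manual review remains a necessary complement to automated scanning.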
In a research paper written by James Kupsch and Barton Miller of the University of Wisconsin, the authors presented the results of their efforts to evaluate and quantify the effectiveness of automated source code vulnerability assessment tools by comparing such tools to the results of an in-depth manual evaluation of the same system.3 The key findings were the following.
8.6.2 Commercial and Free Source Code Analyzers
Here is a sampling of some of the available source code analyzers, both commercial (with dedicated support) and free or open-source software.
8.6.2.1 Commercial—Multilanguage
Commercially available multilanguage source code analyzers include the following.
- Armorize CodeSecure—Appliance with Web interface and built-in language parsers for analyzing ASP.NET, VB.NET, C#, Java/J2EE, JSP, EJB, PHP, Classic ASP, and VBScript (http://www.armorize.com/?link_id=codesecure)
- Coverity Software Integrity—Identifies security vulnerabilities and code defects in C, C++, C#, and Java code (http://www.coverity.com/products)
- Compuware Xpediter—For mainframe-based applications; offers analysis of COBOL, PL/I, JCL, CICS, DB2, IMS, and other popular mainframe languages (http://www.compuware.com/solutions/xpediter.asp)
- Klocwork Insight and Klocwork Developer for Java—Provides security vulnerability and defect detection as well as architectural and build-over-build trend analysis for C, C++, C#, and Java (http://www.klocwork.com/products)
- Ounce Labs—Automated source code analysis that enables organizations to identify and eliminate software security vulnerabilities in languages including Java, JSP, C/C++, C#, ASP.NET, and VB.NET (http://www.ouncelabs.com/products)
8.6.2.2 Open Source—Multilanguage
Here are a few of the open-source products for source code analysis.
- O2—A collection of open-source modules that help Web application security professionals maximize their efforts and quickly obtain high visibility into an application's security profile, with the objective of "automating application security knowledge and workflows"
- RATS (Rough Auditing Tool for Security)—Can scan C, C++, Perl, PHP, and Python source code (http://www.fortify.com/securityresources/rats.jsp)
8.6.2.3 .NET Support
- FxCop—Free static analysis for Microsoft .NET programs that compile to CIL; stand-alone and integrated in some Microsoft Visual Studio editions (http://msdn.microsoft.com/en-us/library/bb429476%28VS.80%29.aspx)
- StyleCop—Analyzes C# source code to enforce a set of style and consistency rules; can be run from inside Microsoft Visual Studio or integrated into an MSBuild project (http://code.msdn.microsoft.com/sourceanalysis)
8.6.2.4 Java Support
- Checkstyle—Besides some static code analysis, can be used to show violations of a configured coding standard (http://checkstyle.sourceforge.net)
- FindBugs—An open-source static byte code analyzer for Java (based on Jakarta BCEL) from the University of Maryland (http://findbugs.sourceforge.net)
- PMD—A static rule set-based Java source code analyzer that identifies potential problems (http://pmd.sourceforge.net)
Among the tools listed, we will examine in detail Fortify 360 as a commercial tool and O2 as an open-source tool.
8.6.3 Fortify 360
Fortify 360 provides the critical analytic, remediation, and management capabilities necessary for a successful, enterprise-class software security assurance (SSA) program.
- Identification: Comprehensive root-cause identification of more than 400 categories of security vulnerabilities in 17 development languages
- Remediation: Brings security, development, and management together to remediate existing software vulnerabilities
- Governance: Monitors organization-wide SSA program performance and prevents the introduction of new vulnerabilities from internal development, outsourcers, and vendors through automating secure development life-cycle processes
- Application defense: Contains existing vulnerabilities so they can't be exploited
- Compliance: Demonstrates compliance with government and industry mandates as well as internal policies4
The architecture and context of how Fortify 360 is deployed and operated is shown in Figure 8.2.
Fortify 360's static source code analyzer (SCA) provides root-cause identification of vulnerabilities in source code. SCA is guided by a comprehensive set of secure coding rules and supports a wide variety of languages, platforms, build environments, and integrated development environments (IDEs), such as Eclipse, Visual Studio, and others.
Figure 8.2 Fortify 360 Architecture
Figure 8.3 is a screenshot of the results of a Fortify 360 source code analysis done on WebGoat, a deliberately insecure J2EE Web application that is maintained by OWASP and is designed to teach Web application security lessons.
Figure 8.3 Fortify Audit Workbench
8.6.4 O2—OunceOpen
O2 originated from work conducted by the OunceLabs Advanced Research Team (ART). O2 aims to push the power of multiple static analysis engines to the limit. These tools have been developed by security professionals for security professionals and are intended to help automate a security consultant's brain.
Following is a list of O2 modules:
- O2 Tool—XRules—O2's eXtended rules environment, which allows the execution and editing of complex security analysis workflows
- O2 Tool—SpringMVC—Support for Spring's Framework MVC
- O2 Tool—RulesManager—Powerful viewer and editor for Ounce's Rules
- O2_Tool_FindingsViewer—Powerful filter and editor for Ozasmt files
- O2_Tool_CirViewer—View and create (for .NET) CIR (Common Intermediate Representation) objects
- O2_Tool_SearchEngine—RegEx text search-based GUI
- O2_Tool_CSharpScripts—Edit and debug C# scripts
- O2_Tool_DotNetCallbacksMaker—Automatically create Ounce Rules for .NET callbacks
- O2_Tool_FindingsQuery—Filter Ozasmt files using lambda-like queries
- O2_Tool_JavaExecution—Write O2 scripts in Java
- O2_Tool_JoinTraces—Join traces (e.g., .NET and Web and Web Services layers)
- O2_Tool_Python—Write O2 scripts in Python
- O2_Tool_O2Scripts—O2 scripts editor (includes O2 Object Model)
- O2_WebInspect—Proof of concept integrating Ounce's and WebInspect's assessment data
Figure 8.4 lists all the modules and their maturity to date.
Figure 8.4 O2 Modules
Figure 8.5 is a screenshot of the results from the O2 source code analysis conducted on WebGoat.
While we do not endorse or recommend any particular automated tool, we do recommend that all organizations perform an objective evaluation of
Figure 8.5 O2 WebGoat Assessment
available commercial software and free software to determine the best fit for their development language(s) and SDLC methodology. Organizations can also use a combination of tools to provide a high level of assurance in the security scanning process.