by Adrian Bowles

Metrics for Application Development

Jan 31, 2003 | 9 mins

CSO and CISO | Data and Information Security

RFG believes that IT executives can benefit from a comprehensive metrics initiative that facilitates improvements in quality and, to a lesser extent, productivity. Although the software industry has become enamored with the Software Capability Maturity Model (SW-CMM) from the Software Engineering Institute (SEI), a formal SW-CMM assessment is neither necessary nor sufficient to assure optimal performance. As the SEI sunsets the SW-CMM in favor of its new Capability Maturity Model Integration project (CMMI), IT executives should review and compare available offerings, and plan accordingly.

Business Imperatives:

  • A formal software metrics program typically offers better ROI than investments in other, perhaps trendier, tools. The biggest returns typically come early: investing in a plan that helps identify bad requirements will pay off more than one aimed at improving programmer productivity. IT executives should be concerned with IT-business alignment and improving effectiveness, and should consider implementing a metrics program that addresses the full life cycle of each development project.
  • Large organizations can expect to see a 20-to-1 variance in the productivity of their developers. Performance of individuals should therefore be measured carefully, to accurately provide rewards and guide remediation where appropriate. IT executives concerned about maximizing return on their human capital and retaining top performers should investigate and institute appropriate metrics.
  • Changes in development methods and tools over the past decade make some older metrics irrelevant. Today, the impact of reusable components and knowledge management systems may be far more significant than language levels and architecture choices. IT executives with measurement systems in place should review their choice of metrics to assure that they are getting the best possible return on these investments.

Since its first publication in 1991, the SW-CMM has been regarded as the “gold standard” for software development process assessments. Recognized around the world, the model’s five levels of maturity have taken on outsized significance. In the beginning, when most firms conducted assessments, a Level 2 or 3 score was seen as respectable. With more industry experience and investment, it is not uncommon to see a relatively young firm advertising attainment of Level 5 (or “optimizing”), the model’s highest level, as one measure of the firm’s capabilities. This is particularly the case for government contractors (the original target for SW-CMM) and offshore outsourcing firms that use the measure to allay concerns about remote development.

The SW-CMM was joined in time by other SEI projects for process assessment. Examples include the Systems Engineering Capability Model (EIA/IS-731), which replaced the Systems Engineering Maturity Model (SE-CMM), and the Software Acquisition CMM (SA-CMM). Other additional tools include the Integrated Product Development Capability Maturity Model (IPD-CMM) and the People Capability Maturity Model (P-CMM).

The new CMMI Product Suite is the result of integrating the SW-CMM, EIA/IS-731, and IPD-CMM. The SEI ended all further development and support of the SW-CMM when CMMI version 1.1 was published in 2001. The Institute of Electrical and Electronics Engineers (IEEE) and Electronic Industries Alliance (EIA), among others, have developed alternative standards for parts of the life cycle, but the best features of each are likely to end up in CMMI as it evolves.

The CMMI is a good starting point for any investigation of current-generation metrics and practices. Most organizations will benefit from critically evaluating the components of CMMI. However, a SW-CMM assessment can easily cost in excess of $70,000 per line of business (LOB), and the cost for a CMMI initiative is likely to exceed $100,000 per LOB.

Organizations working under government requirements or performing contracted work with specified CMMI levels can derive direct benefit from CMMI. Otherwise, it may be more prudent to select an appropriate subset of the CMMI process, and to augment that subset with practices tailored to the enterprise’s own requirements. As a starting point, firms should also consider approaches such as the Software Productivity Research (SPR) method developed by Capers Jones. In general, an effective set of processes and supporting metrics for internal development efforts must address the following issues.

  • Validation: ensuring that the right systems are built
  • Verification: ensuring that the systems are built right
  • Productivity: optimizing resource utilization

In addition, project management and general quality assurance practices, such as those espoused by renowned authorities the W. Edwards Deming Institute and Juran Institute, Inc., will be necessary complements to the software-oriented practices.


Validation answers the most important question in software development assessment: “Are you building the right systems?” If an IT team doesn’t get this right, nothing else matters. Most modern requirements tracking tools can maintain the associations between requirements and development deliverables. Tools alone, however, offer limited utility. Formal processes such as CMMI tend to focus on requirements management, but the issue for most organizations isn’t how to represent the requirements, it is how to effectively elicit them.

This process should be repeatable, so that different analysts speaking to the same user will elicit approximately the same set of requirements. Processes for capturing requirements have been defined in several texts, and formalized in the Joint Application Design (JAD) movement. Regardless of the approach taken, it is critical to include business users in the training, to ensure that they can effectively communicate their needs to the developers. This small investment in training may actually provide the biggest payoff in terms of a reduction in re-work and abandoned efforts.

The Unified Modeling Language (UML), now under the auspices of the Object Management Group (OMG), offers Use Cases as a graphical mechanism for specifying the implications of requirements to user constituent groups. The UML is now a de facto standard, having replaced a plethora of notation methods that emerged during the height of the computer-aided software engineering (CASE) “wars” of the late 1980s and early 1990s. At this point, adopting the UML as a system representation is a conservative move, as it will likely be supported by a variety of tools for years to come.

IT-business alignment challenges in software development are generally traceable to validation problems. Common error sources are different vocabularies, including different definitions and/or usage for common terms, and lack of a shared understanding of business goals. During the past few years, however, tools have emerged that can ensure that priority is given to requests that offer the best financial return and support management goals. Leading vendors in this space include ProSight, Inc. and United Management Technologies. Both offer solutions that enable association of business objectives with software requirements. The impact of a change in business goals can then be quickly matched with the systems under development.


Verification is a process that ensures that developers build systems correctly. Once system-builders know what users want and need, a visible tracking mechanism is critical. Such a tool helps to ensure that every requirement tracks to a defined segment of the design, and ultimately the code. The system must also track backwards, so that every unit of design or code can be mapped to one or more requirements. Because most systems are ultimately maintained by personnel who had no involvement in their creation, it is critical that these bi-directional links be maintained. The first time a maintenance programmer comes across an undocumented decision or suspect code is likely the last time that maintainer voluntarily documents his or her own enhancements or corrections. The value of the documentation quickly goes to zero, and the value of the code is often not far behind.
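The bidirectional mapping described above can be sketched as a simple traceability matrix. This is an illustrative sketch, not a description of any particular tool; the requirement and artifact identifiers (REQ-1, module paths) are hypothetical examples.

```python
# Minimal sketch of a bidirectional requirements traceability matrix.
# Identifiers below (REQ-1, design/code paths) are hypothetical.

class TraceabilityMatrix:
    def __init__(self):
        self.req_to_artifacts = {}   # requirement id -> set of artifact ids
        self.artifact_to_reqs = {}   # artifact id -> set of requirement ids

    def link(self, req_id, artifact_id):
        # Maintain both directions of the mapping at once.
        self.req_to_artifacts.setdefault(req_id, set()).add(artifact_id)
        self.artifact_to_reqs.setdefault(artifact_id, set()).add(req_id)

    def untraced_requirements(self):
        # Requirements with no design or code coverage yet.
        return [r for r, arts in self.req_to_artifacts.items() if not arts]

    def orphan_artifacts(self):
        # Design/code units mapped to no requirement ("suspect code").
        return [a for a, reqs in self.artifact_to_reqs.items() if not reqs]

tm = TraceabilityMatrix()
tm.link("REQ-1", "design/login-module")
tm.link("REQ-1", "src/auth.py")
tm.req_to_artifacts.setdefault("REQ-2", set())          # captured, not yet designed
tm.artifact_to_reqs.setdefault("src/legacy.py", set())  # orphan code unit

print(tm.untraced_requirements())  # ['REQ-2']
print(tm.orphan_artifacts())       # ['src/legacy.py']
```

Reports like `orphan_artifacts` are what give a maintenance programmer confidence that every piece of code can be traced back to a stated need.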

For validation and verification purposes, inspections and walkthroughs are effective techniques for early error detection. In fact, it may be argued that detection before design and coding is actually error prevention, which is far less expensive than correction. Some measurable attributes of the early design documents, such as path and interface complexity, presage more expensive maintenance efforts. A comprehensive development plan should include a step to monitor designs and reject those with undue complexity. Reworking the design will almost always be less expensive than living with the complex code.
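The monitoring step above can be as simple as a complexity gate applied during design review. The sketch below assumes a single numeric complexity score and a locally chosen threshold; both the metric and the threshold value are illustrative assumptions, since the article leaves the specific measure to each organization.

```python
# Illustrative design-complexity gate: accept designs under a locally
# chosen threshold, send the rest back for rework. The metric and the
# threshold value are assumptions for illustration.

MAX_COMPLEXITY = 10  # hypothetical local threshold

def review(designs):
    """Partition designs into accepted ones and ones needing rework."""
    accepted, rework = [], []
    for d in designs:
        target = accepted if d["complexity"] <= MAX_COMPLEXITY else rework
        target.append(d["name"])
    return accepted, rework

accepted, rework = review([
    {"name": "billing", "complexity": 7},
    {"name": "routing", "complexity": 14},
])
print(accepted, rework)  # ['billing'] ['routing']
```

The design choice here is to make rejection mechanical: a design over threshold is reworked before coding begins, when the change is still cheap.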


Productivity and metrics have evolved far beyond their humble beginnings measuring lines of code (LOC) and defect rates. As the cost of developers surpassed the cost of the hardware and software necessary to support them, it became more important to know “How much can/does a developer do?” The management challenge now is to raise the performance of the group without unduly taxing the “stars,” and to identify and reward stellar performance.

Improving return on human capital requires more than measures of the quantity and quality of code produced; it may also require knowledge of the nature of the products and how teams use them downstream. Metrics must also be adjusted to compensate for the use of component libraries, as the impact of contributions to these libraries may not be immediately visible. For example, a developer who creates a component that gets widespread use in multiple projects should be rewarded, as should developers who reuse proven, secure components rather than hand-code equivalents.

One approach is to develop a scorecard that takes into account the attributes of the local development environment when deciding what to monitor. For example, if the most difficult tasks are routinely assigned to senior people who deliver a consistently high level of work, the scorecard must weight this performance accordingly. A junior person who delivers more lines of simple code using tools that generate much of the output must be able to compare his own performance along some objective measure using the scorecard.
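A scorecard of this kind can be sketched as a weighted score that normalizes output by task difficulty and credits downstream component reuse, so senior and junior work compare on one scale. The weights, difficulty categories, and sample figures below are illustrative assumptions, not a published standard.

```python
# Hypothetical weighted scorecard: normalize deliverables by task difficulty
# and credit component reuse by others. All weights and data are illustrative.

DIFFICULTY_WEIGHT = {"simple": 1.0, "moderate": 1.5, "complex": 2.5}
REUSE_CREDIT = 0.5  # credit per downstream reuse of a contributed component

def score(deliverables, components_reused_by_others=0):
    """Weighted output plus credit for reuse of the developer's components."""
    base = sum(DIFFICULTY_WEIGHT[d["difficulty"]] * d["units"]
               for d in deliverables)
    return base + REUSE_CREDIT * components_reused_by_others

# A senior developer: fewer, harder deliverables plus reused components.
senior = score([{"difficulty": "complex", "units": 3}],
               components_reused_by_others=4)
# A junior developer: more units of simple, tool-generated code.
junior = score([{"difficulty": "simple", "units": 8}])

print(senior, junior)  # 9.5 8.0
```

Under these assumed weights, the senior developer's three complex deliverables outscore the junior developer's eight simple ones, which is exactly the weighting behavior the scorecard is meant to encode.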

RFG believes that IT executives determined to run “best-in-class” organizations must establish benchmarks for validation, verification, and productivity, along with practices to monitor and improve performance against established criteria. A comprehensive and modern metrics plan is essential to attaining these goals. Where feasible, IT executives should implement a metrics program that starts small in a specific LOB, test the approach, review results, and then market successes to other LOBs across the enterprise.

RFG analyst Adrian Bowles wrote this Research Note. Interested readers should contact RFG Client Services to arrange further discussion or an interview with Dr. Bowles.