
Metrics to boost throughput and software quality for agile teams

Robert L. Scheier, Principal, Bob Scheier Associates

When development teams transform to agile, they're usually hoping that the rapid series of quick code reviews associated with the methodology will speed delivery and ensure they meet user needs. With the growing importance of applications in business, some developers are also thinking about how to boost software quality.

To meet these dual needs, reconsider what software quality metrics you gather, how you interpret those metrics, and how you share them with both the agile team and your business customers, said Todd DeCapua, chief technology evangelist for application development management (ADM) at HP. In his previous roles, DeCapua's teams delivered 25 percent annual increases in code quality and 100 percent increases in throughput.

What metrics to gather

Historically, said DeCapua, teams measured only the number of completed test cases and the number of defects found. They generally didn't focus on key quality metrics before the code went into production, even though pre-production quality is what determines user satisfaction and business benefits.

What DeCapua's team needed, and developed, was a composite quality metric maintained throughout the development and test process that represented overall release quality, such as "Release 2015-4-b is a B (92/100) and Amber." That release scores only a B on the quality scale and would appear in amber rather than green on a software quality dashboard. Such a clear, automated indicator of release quality can help predict the impact that the quality of each piece of code will have on the production application.
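
To make the idea concrete, here is a minimal sketch of how such a composite score might be computed and translated into a letter grade and dashboard color. The metric names, weights, and grade and color thresholds are illustrative assumptions, not the team's actual formula.

    # Minimal sketch of a composite release-quality score. The metric
    # names, weights, and thresholds are illustrative assumptions.

    METRIC_WEIGHTS = {
        "code_coverage": 0.25,        # percent of code exercised by tests
        "tests_passed": 0.35,         # percent of automated tests passing
        "severe_defect_score": 0.25,  # 100 minus a penalty for open severe defects
        "uat_pass_rate": 0.15,        # percent of UAT scenarios passing
    }

    GRADES = [(97, "A"), (90, "B"), (80, "C"), (70, "D"), (0, "F")]
    COLORS = [(95, "Green"), (85, "Amber"), (0, "Red")]

    def composite_score(metrics: dict) -> float:
        """Weighted average of 0-100 metric scores."""
        return sum(weight * metrics[name] for name, weight in METRIC_WEIGHTS.items())

    def label(score: float, bands: list) -> str:
        """Return the first band label whose floor the score meets."""
        return next(text for floor, text in bands if score >= floor)

    release = {"code_coverage": 88.0, "tests_passed": 96.0,
               "severe_defect_score": 90.0, "uat_pass_rate": 94.0}
    score = composite_score(release)
    print(f"Release 2015-4-b is a {label(score, GRADES)} "
          f"({score:.0f}/100) and {label(score, COLORS)}")

Run as written, this prints the article's example: "Release 2015-4-b is a B (92/100) and Amber."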

DeCapua also pushed business stakeholders to agree on consistent definitions of quality, to make clear which quality metrics were important and why. The definitions his team came back with included:

  • Code integrity
  • Customer and operation impact of defects
  • Date of delivery
  • Quality of communication
  • System ability to meet service levels

This helped developers, testers, and project managers focus their efforts on the most critical defects.
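
Encoded in tooling, those definitions can drive consistent prioritization. The sketch below ranks defects by customer and operational impact ahead of raw severity; the fields, scales, and weights are assumptions for illustration.

    # Illustrative sketch: encode the agreed quality dimensions so defects
    # can be ranked consistently (fields, scales, and weights are assumptions).

    from dataclasses import dataclass

    @dataclass
    class Defect:
        defect_id: str
        severity: int            # 1 (cosmetic) .. 4 (critical), per team definitions
        customer_impact: int     # 1 (none) .. 4 (blocks a key workflow)
        operational_impact: int  # 1 (none) .. 4 (threatens service levels)

    def priority(d: Defect) -> int:
        # Weight customer and operational impact above raw severity,
        # reflecting the team's agreed definition of quality.
        return d.customer_impact * 3 + d.operational_impact * 2 + d.severity

    backlog = [
        Defect("D-101", severity=4, customer_impact=1, operational_impact=1),
        Defect("D-102", severity=2, customer_impact=4, operational_impact=3),
    ]
    for d in sorted(backlog, key=priority, reverse=True):
        print(d.defect_id, priority(d))
    # D-102 (20) outranks D-101 (9): impact on customers and service
    # levels matters more than severity alone.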

The team also refined its definition of a test case, which previously had been used in the context of traditional, mostly manual testing. Over time, the team added more instrumented and automated tests across a range of areas.

These tests also captured information such as which mobile devices were being used to access applications, user location, and network type, along with the work flows in which users were engaged.
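
As a rough illustration, each instrumented test run could emit a context-rich result record along these lines; all field names and values are hypothetical.

    # Sketch of an instrumented test-result record that captures the usage
    # context the article mentions (all fields and values are illustrative).

    import json
    from datetime import datetime, timezone

    result = {
        "test_case": "checkout_flow_smoke",
        "passed": True,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "context": {
            "device": "iPhone 6",      # which mobile device accessed the app
            "location": "Boston, MA",  # where the user was
            "network": "LTE",          # network type during the run
            "workflow": "cart -> payment -> confirmation",
        },
    }
    print(json.dumps(result, indent=2))  # ship to the metrics store as JSON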

All these metrics, including percentage of code coverage and percentage of code tests passed, gave all stakeholders an ongoing gauge of overall release quality. DeCapua also created software quality gates through which he could measure and promote builds throughout the life cycle, maximizing the value of automation and improving both efficiency and results.
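
A quality gate of this kind can be as simple as a threshold check a build must clear before promotion. The sketch below assumes two gating metrics and illustrative thresholds; a real gate would mirror the team's own criteria.

    # Minimal sketch of an automated quality gate (thresholds are
    # illustrative, not the team's actual promotion criteria).

    GATE_THRESHOLDS = {
        "code_coverage": 80.0,  # minimum percent of code covered
        "tests_passed": 95.0,   # minimum percent of tests passing
    }

    def gate(build_metrics: dict) -> bool:
        """Promote the build only if every metric clears its threshold."""
        failures = [name for name, floor in GATE_THRESHOLDS.items()
                    if build_metrics.get(name, 0.0) < floor]
        if failures:
            print(f"Build held back; below threshold: {', '.join(failures)}")
            return False
        print("Build promoted to the next stage")
        return True

    gate({"code_coverage": 88.0, "tests_passed": 93.5})  # held back: tests_passed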

Canceled defects, which represented nearly 20 percent of all reported defects, were another useful metric that helped reduce wasted time. Many of the canceled defects were duplicates, which spurred the QA team to apply additional controls to how defects were tested and reported so that duplicates were eliminated earlier in the process.
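
A rough sketch of both ideas follows: computing the canceled-defect rate and flagging likely duplicates with a naive normalized-title comparison. The matching heuristic and sample data are assumptions.

    # Sketch of the canceled-defect metric plus a naive duplicate check
    # on defect titles (heuristic and sample data are illustrative).

    defects = [
        {"id": "D-1", "title": "Login button unresponsive", "status": "canceled"},
        {"id": "D-2", "title": "login button unresponsive", "status": "open"},
        {"id": "D-3", "title": "Cart total wrong", "status": "open"},
    ]

    canceled = sum(1 for d in defects if d["status"] == "canceled")
    print(f"Canceled: {canceled / len(defects):.0%} of reported defects")

    seen = {}
    for d in defects:
        key = d["title"].strip().lower()  # normalize before comparing
        if key in seen:
            print(f"{d['id']} looks like a duplicate of {seen[key]}")
        else:
            seen[key] = d["id"]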

When to gather the metrics

Metrics need to be gathered throughout the life cycle to enable real-time feedback and visibility for all stakeholders. In DeCapua's experience, the scrum masters found that the four-week sprint cycle created a three-week maximum boundary for fix time. Additionally, the team's determination that 63 percent of coding defects were severe supported the decision to implement new definitions for degrees of severity.
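
One way to operationalize such definitions is to make the severity categories explicit and track what share of defects falls into the severe buckets. The category names, descriptions, and counts below are illustrative; the sample counts are chosen to reproduce the article's 63 percent figure.

    # Sketch of explicit severity definitions and the share of defects
    # classed as severe (categories and counts are illustrative).

    SEVERITY_DEFINITIONS = {
        "critical": "blocks a release or a key customer workflow",
        "major": "degrades a workflow; a workaround exists",
        "minor": "cosmetic or low-impact",
    }

    counts = {"critical": 31, "major": 32, "minor": 37}  # sample sprint data
    severe = counts["critical"] + counts["major"]
    total = sum(counts.values())
    print(f"Severe defects: {severe / total:.0%} of {total}")  # 63% of 100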

This allowed the development and test teams to adjust the length and organization of sprints so they could continuously increase quality over time. This quality-over-time metric was extremely valuable in supporting the team's continuous improvement efforts.
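
Tracked per sprint, the composite score becomes a simple time series. The sketch below prints each sprint's score alongside its change from the previous sprint; the sprint labels and scores are illustrative.

    # Sketch of a quality-over-time series: one composite score per sprint,
    # with a simple delta to show the trend (scores are illustrative).

    sprint_scores = {"2015-1": 84.0, "2015-2": 87.5, "2015-3": 90.0, "2015-4": 92.2}

    previous = None
    for sprint, score in sprint_scores.items():
        delta = "" if previous is None else f" ({score - previous:+.1f})"
        print(f"{sprint}: {score:.1f}{delta}")
        previous = score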

How to share the metrics

Previously, user acceptance testing (UAT) occurred late in the four-week sprint cycle, giving the team limited time to make significant changes before the release. With the increased visibility provided by enhanced quality metrics and continuous feedback, the team was able to integrate UAT into the process, increasing collaboration and accountability while shortening the cycle and raising quality.

Additionally, post-production defects were listed in five different systems and weren't linked back to the original build or release, making it difficult to associate a given post-production defect with a specific release. This delayed finding and fixing defects, delayed the release of the affected features, and held up concurrent code branches that needed the fixes merged in.

DeCapua's team made the latest software quality metrics available in real time through a wiki to all business and developer stakeholders. These results, including the UAT results and a nightly automated email of key metrics, became part of the discussion at the daily stand-up progress reviews for each agile scrum team and all project and business leads. Tying post-production defects to specific projects or releases on a daily basis helped everyone identify gaps in the development and testing processes and make daily or even hourly decisions on how best to close them.
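
The linking itself can be as simple as normalizing defect exports from each tracker into one shape with a release field and grouping on it. The sketch below uses hypothetical defect IDs and release tags; real trackers would be queried through their own APIs.

    # Sketch of tying post-production defects back to the release that
    # shipped them, after pulling from multiple trackers (data is illustrative).

    from collections import defaultdict

    # Defects exported from several tracking systems, normalized to one shape.
    post_prod_defects = [
        {"id": "JIRA-42", "release": "2015-4-b"},
        {"id": "SNOW-108", "release": "2015-4-b"},
        {"id": "JIRA-57", "release": "2015-3-a"},
    ]

    by_release = defaultdict(list)
    for d in post_prod_defects:
        by_release[d["release"]].append(d["id"])

    for release, ids in sorted(by_release.items()):
        print(f"{release}: {len(ids)} post-production defects -> {ids}")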

To give the developers time to make these improvements, DeCapua allocated 10-15 percent of the story points assigned to each team toward continuous improvement opportunities.
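
In practice this is a small capacity calculation per team. The sketch below reserves 10 percent of each team's velocity; the team names and velocities are hypothetical, and the article's range is 10 to 15 percent.

    # Sketch of reserving a share of each team's sprint capacity for
    # continuous improvement (velocities are illustrative).

    IMPROVEMENT_SHARE = 0.10  # anywhere from 0.10 to 0.15 per the article

    for team, velocity in {"Team A": 40, "Team B": 55}.items():
        reserved = round(velocity * IMPROVEMENT_SHARE)
        print(f"{team}: {reserved} of {velocity} points for improvement work")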

You get what you measure

With agile projects, it's easy to get caught up in the challenge of meeting deadlines and refining features and functions on the fly with users. Stepping back to refine what development metrics you're tracking, including how you define them and how you share them, can deliver huge rewards where it counts: the quality of your production code.
