
Beyond fast: Understanding the performance engineering metrics that really matter

Todd DeCapua, Executive Director, JP Morgan
 

In a sketch from NBC's "Saturday Night Live," a track coach gathers his team in the locker room for a pep talk during a big meet. The team is trailing, but the meet isn't over yet. So the coach starts by talking to the athlete who is about to run 1,500 meters.

"When you hear the starting gun, run fast," he explains. "Then, keep going really fast. And then at the end, when you get near the finish line, go really, really fast."

Then he has some big advice for the 100-meter race, an event he admits is difficult.

"You know my philosophy?" he asks. "Run really fast."

Many naive observers take the same approach to performance engineering: it's simply a matter of making sure the systems run fast. If possible, make them run really fast. When in doubt, make them run really, really fast. And if that doesn't work right away, throw money at the problem by buying more hardware, thinking that more machines will make the system go really fast.

But just as there's more to winning a track meet than being fast, there's more to building a constellation of quick, efficient web servers and databases than raw speed. Athletes can't win without a sophisticated mixture of strategy, form, attitude, tactics, and speed; likewise, performance engineering requires asking the right questions up front, then having a good collection of metrics and tools to deliver the desired business results. When they're combined correctly, the result is systems that satisfy both customers and employees, enabling everyone on the team to win.

Performance as a team sport

Over the last five years, organizations have started to define and embrace the capabilities of performance engineering, recognizing that their systems are growing so complex that it's not enough to simply tell the computers, or the individuals behind them, to "run fast." This capability must be built into the organization's culture and behavior, and it must include activities for developers, database administrators, designers, and all other stakeholders, each coordinating to orchestrate a system that works well, with performance built in from early in the lifecycle. Each of the parts may be good enough on its own, but without good engineering practices, they won't work well enough together.

Hewlett Packard Enterprise has been working to support performance engineering in all organizations. In 2015, it contracted YouGov, an independent research organization, to survey 400 engineers and managers to understand how organizations are using tools and metrics to measure and evolve their performance engineering practices. The survey was conducted blind, so that no one knew that Hewlett Packard Enterprise commissioned it.

The sample consisted of 50 percent performance engineers and performance testers, 25 percent application development managers, and 25 percent IT operations managers, all from companies with at least 500 employees in the United States. The results reveal a wide range of techniques and approaches to performance engineering, along with the practices through which organizations are using tools and metrics.

The term "performance engineering" is relatively new to many in the software industry, and to businesses in general. "Performance engineering" doesn't necessarily refer to a specific job, such as a performance engineer. More generally, it refers to the set of skills and practices that are gradually being understood across an organization's various teams, focused on achieving higher levels of performance in technology, in the business, and for end users, from the beginning.

Performance engineering tools

The survey asked, "When you look to the future of performance engineering, what types of tools do you and your stakeholders plan to acquire?" Fifty-two percent of large companies (those with 10,000+ employees) indicated "more enterprise and proven" tools, 37 percent expected "more open source and home-grown" tools, and the remaining 11 percent were planning "more hybrid of open source and enterprise." The responses from companies of other sizes followed a similar pattern, though with a bit more balance across the three choices.

[Figure: Planned tool acquisition by company size]

When the results were analyzed by role, the largest share in every role planned to acquire "more enterprise and proven" tools: 41 percent of those identifying as performance engineer/performance tester, 44 percent of application development managers, and 51 percent of IT operations managers.

[Figure: Planned tool acquisition by role]

When it comes to testing, an increasing share of companies are concentrating on burst testing to push their software closer to the breaking point. They spin up a large number of virtual users and point them at the systems under test in a concentrated burst. This simulates the heavy traffic generated by sales, promotions, big events, or retail days like Black Friday and Cyber Monday, when a sudden load can wreak havoc on a system.

[Figure: Burst testing]
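
To make the idea concrete, here is a minimal burst-test sketch in plain Python, assuming nothing beyond the standard library. The target URL, user count, and burst timing are hypothetical placeholders, not settings from the survey or from any particular tool.

"""Minimal burst-test sketch using only the Python standard library.

TARGET_URL, USERS, BURSTS, and PAUSE_SECONDS are hypothetical placeholders,
not values taken from the survey or from any particular tool.
"""
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "https://example.com/"  # hypothetical system under test
USERS = 200                          # simulated users released in each burst
BURSTS = 3                           # number of bursts in the run
PAUSE_SECONDS = 30                   # quiet period between bursts

def one_virtual_user(url: str) -> float:
    """Issue a single request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def run_burst(url: str, users: int) -> list[float]:
    """Release all simulated users at once and collect their response times."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(one_virtual_user, [url] * users))

if __name__ == "__main__":
    for burst in range(1, BURSTS + 1):
        timings = run_burst(TARGET_URL, USERS)
        print(f"burst {burst}: {len(timings)} requests, "
              f"mean {sum(timings) / len(timings):.3f}s, "
              f"max {max(timings):.3f}s")
        if burst < BURSTS:
            time.sleep(PAUSE_SECONDS)

A commercial or open-source load-testing tool layers pacing, think time, ramp profiles, and reporting on top of this, but the shape of the exercise, a quiet system suddenly hit by a wave of simulated users, is the same.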

One of the most important capabilities of tools like those cited above is the ability to deploy an army of machines to poke and prod at an organization's systems. The cloud is often the best source for these machines because many modern cloud providers rent virtual machines by the minute. Those working on performance tests can spin up a test for a short amount of time and pay only for the minutes they use.
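
The economics are easy to sketch. Every figure below (per-minute rate, fleet size, lab cost, test cadence) is an invented placeholder used only to show the shape of the comparison; the survey did not ask about costs.

# Back-of-the-envelope sketch of per-minute cloud billing for load generation.
# Every figure here is a hypothetical placeholder, not a quoted price.

PER_MINUTE_RATE = 0.002   # assumed cost of one load-generator VM per minute, in dollars
LOAD_GENERATORS = 100     # assumed number of VMs needed to drive the simulated users
TEST_MINUTES = 6 * 60     # a 6-hour test, within the 4- to 12-hour range most respondents reported

cloud_cost = PER_MINUTE_RATE * LOAD_GENERATORS * TEST_MINUTES
print(f"One 6-hour test on {LOAD_GENERATORS} rented VMs: ${cloud_cost:,.2f}")

# A dedicated lab with the same capacity is paid for around the clock,
# whether or not a test is running.
MONTHLY_LAB_COST = 5_000  # assumed all-in monthly cost of a dedicated rig, in dollars
TESTS_PER_MONTH = 8       # assumed test cadence

print(f"Dedicated lab, cost per test at {TESTS_PER_MONTH} tests per month: "
      f"${MONTHLY_LAB_COST / TESTS_PER_MONTH:,.2f}")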

The value of the cloud is obvious in the answers to the questions about the average size and duration of a load test. Only three percent of respondents reported testing with fewer than 100 simulated users. At least 80 percent of the respondents used 500 or more users, and 14 percent wanted to test their software with at least 10,000 users. They feel that this is the only way to be prepared for the number of real users coming their way when the software is deployed.

[Figure: Load test size]

Growth in load testing points to the cloud

This demand will almost certainly increase. When asked how big they expect their load tests to be in just two years, 27 percent said that they expect that they'll need at least 10,000 simulated users. They mentioned much larger numbers, too. Eight percent predict that they'll be running tests with more than 100,000 simulated users, and two percent could foresee tests with 500,000 users or more.

While the number of simulated users is growing, test durations aren't long enough to make a dedicated test facility economical. The tests are usually not very long; only eight percent of respondents reported running tests that routinely lasted more than 24 hours. Most (54 percent) said that their tests ran between 4 and 12 hours.

[Figure: Test duration]

The largest companies are also the ones most likely to be using the cloud. Only nine percent said that they don't use the cloud for testing, typically because their security policies don't permit them to expose their data to the cloud.

[Figure: Cloud testing]

Performance engineering metrics

One of the bedrock rules of leading an organization is that you can't manage what you can't measure. Many of the simplest numbers, like mean response time, are common. But the field is rapidly evolving as teams develop new performance engineering metrics that do a more thorough job of capturing how the software and capabilities delivered to the end users are helping the company succeed.
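
As a simple illustration of the gap between the common numbers and more thorough ones, the sketch below computes a mean response time alongside a nearest-rank percentile (via a small helper defined here) on invented sample data. Percentiles are offered as a familiar example of a richer measure, not as a metric the survey prescribed.

# Mean response time vs. a high percentile, computed on invented sample data.
# The single 2,400 ms outlier pulls the mean far above the typical request.

response_times_ms = [120, 130, 125, 118, 122, 135, 128, 124, 2400, 121]

mean_ms = sum(response_times_ms) / len(response_times_ms)

def percentile(samples, pct):
    """Nearest-rank percentile: the value at or below which pct% of samples fall."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

print(f"mean: {mean_ms:.0f} ms")                        # about 352 ms
print(f"p90:  {percentile(response_times_ms, 90)} ms")  # 135 ms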

To understand how companies are measuring their quality and performance success, the survey listed seven metrics and asked respondents to rate how much confidence they had in each. Some of the measurements were objective, derived from log files or monitoring software, while others were more subjective and collected from surveys.

The answers were analyzed to find which metrics generated the highest degree of confidence, by counting the percentage of respondents who gave each one the highest confidence rating. All the metrics fell within a narrow range, with only a few percentage points separating the highest-rated metric from the lowest. Here's the list, ordered from most to least confidence:

  1. Release quality (49 percent)
  2. Throughput (46 percent)
  3. Workflow and transaction response time (46 percent)
  4. Automated performance-regression success rate (42 percent)
  5. Forecasted release confidence/quality level (41 percent)
  6. Breaking point versus current production as a multiplier (39 percent)
  7. Defect density (38 percent)

There's not much difference in confidence between the top of the list and the bottom.
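
The survey did not publish formal definitions for these metrics, so the sketch below applies conventional formulas, with invented sample figures, to three of them: throughput, defect density, and the breaking point expressed as a multiplier of production load.

# Conventional (assumed) formulas for three of the surveyed metrics,
# applied to invented sample figures.

# Throughput: completed transactions per unit of time.
transactions_completed = 540_000
test_duration_seconds = 4 * 3600
throughput_tps = transactions_completed / test_duration_seconds

# Defect density: defects found per thousand lines of code (KLOC).
defects_found = 42
lines_of_code = 180_000
defect_density = defects_found / (lines_of_code / 1000)

# Breaking point expressed as a multiple of current production load.
breaking_point_users = 24_000   # load at which the system failed under test
production_peak_users = 6_000   # busiest load observed in production
breaking_point_multiplier = breaking_point_users / production_peak_users

print(f"throughput:            {throughput_tps:.1f} transactions/sec")
print(f"defect density:        {defect_density:.2f} defects/KLOC")
print(f"breaking-point factor: {breaking_point_multiplier:.1f}x production peak")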

The responses are more revealing when broken out by the role respondents identified with. Application development managers and IT operations managers both reported more confidence in the measures than performance engineers and performance testers did. In the case of throughput, for instance, 63 percent of IT operations managers had high confidence, but only 36 percent of performance engineers and testers said the same.

It's hard to guess the reasons for this divergence. There are probably several, but it's possible that those working within engineering disciplines have much more direct experience with measurement, and understand its limitations. It could also be purely social—engineers are notoriously cautious about making enthusiastic endorsements.

This same divergence is seen in many of the other responses. When asked which skills were important for a performance engineering team, managers always gave the skills in question higher ranks than did the engineers. When asked to rank the importance of the ability to "Communicate and show results (in hard metrics) of the business impact," 50 percent of the IT operations managers said it was very important, but only 39 percent of performance engineers and performance testers responded the same way.

Changes underway in performance engineering

One of the most important results from the study was the broad range of answers. No particular tool or metric dominated the discussion. The narrow range between the highest-ranked metrics and the lowest indicates that the field is evolving rapidly. The results don't reflect a disagreement as much as the fact that many companies are taking different approaches and experimenting with a wide variety of options.

This drive to explore new metrics and find better ways of understanding how software is succeeding (and failing) is going to continue and even grow more intense. Software engineers understand that it's not enough to simply focus on the narrow job of going fast, as the track coach suggests. The challenge is capturing just how the software is helping the company, its employees, and its customers. If they succeed, then the software is a success.

There are big differences in the ways companies are approaching the challenge. They're mixing enterprise, commercial, and open-source tools, and using a wide range of metrics to understand their results. We've seen key metrics that are accepted by all three groups of respondents—metrics that all businesses can start using today. However, there's nothing like enabling the team to also measure what matters to them, because what matters to your team may matter deeply to your success.
