
5 best practices for realistic performance testing

Amichai Nitsan, Senior Architect, HPE Software
 

Everyone knows performance testing is important, but how do you make your tests realistic?

I had the honor of addressing this topic at the Velocity Conference in New York. Here are five tips I shared.

1. Set a baseline for user experience

Performance is not merely a question of load times and application responsiveness. What you really want to know is: How satisfied are my users?

Our team gave this measurement a name: FunDex. The higher the FunDex is, the more positive the user experience is. Improving performance gets you FunDex points, but app crashes and hogged resources take them away. Put another way, decreasing page load time at the expense of stability is not a sustainable solution.

We take millions of data points continuously to track FunDex over time, giving the development team a rolling look at whether changes to the code are improving or detracting from the user experience. Whether you have a single data point like FunDex or not, the point is that your performance testing strategy needs to be more holistic than simply looking at page load times. It needs to consider the entire user experience.
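The article doesn't publish how FunDex is actually computed, but the idea of a composite experience score is easy to sketch. The field names, weights, and penalties below are all hypothetical illustrations of the principle that speed earns points while crashes and hogged resources take them away:

```python
from dataclasses import dataclass

@dataclass
class SessionSample:
    """One user session's raw measurements (hypothetical fields)."""
    load_time_s: float  # seconds until the page is usable
    crashed: bool       # did the app crash during the session?
    cpu_share: float    # fraction of device CPU consumed (0.0-1.0)

def experience_score(samples: list[SessionSample]) -> float:
    """Composite 0-100 experience score averaged over sessions:
    faster loads earn points; crashes and resource hogging deduct them."""
    if not samples:
        return 0.0
    total = 0.0
    for s in samples:
        speed = max(0.0, 100.0 - 20.0 * s.load_time_s)   # faster = better
        penalty = (50.0 if s.crashed else 0.0) + 30.0 * s.cpu_share
        total += max(0.0, speed - penalty)
    return total / len(samples)
```

Tracked continuously, a score like this makes the trade-off explicit: a change that shaves load time but causes crashes lowers the aggregate rather than raising it.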

2. Create realistic tests

Throwing thousands or millions of clients at a server cluster may stress-test your environment, but it is not going to accurately measure how your app or site performs in a real-world scenario. There are two major issues you need to consider when setting up your testing environment.

First, the testbed must reflect the variety of devices and client environments being used to access the system. Traffic is likely to arrive from hundreds of different types of mobile devices, web browsers, and operating systems, and the test load needs to account for that.

Also, this load is far from predictable, so the test needs to be built with randomness and variability in mind, mixing up the device and client environment load on the fly. By continuously varying the environment and the type of data that is passed, the development organization faces fewer surprises down the road after the application is put into production.

Second, the simulation can't start from a zero-load situation. Many test plans start from a base, boot-up situation, then begin adding clients slowly until the desired load is reached. This simply isn't realistic and provides the testing engineer an inaccurate picture of system load. As applications are updated and rolled out, the systems they're running on will already be under load. That load may change over time, but it won't go to zero and slowly build back up.
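Both ideas, a randomized client mix and a load curve that never drops to zero, can be sketched in a few lines. The device weights and baseline numbers here are made-up placeholders; a real plan would derive them from production traffic:

```python
import random

# Hypothetical client-environment mix, weighted roughly by observed traffic.
CLIENT_MIX = {
    ("iPhone", "Safari"): 0.35,
    ("Android", "Chrome"): 0.40,
    ("Windows", "Edge"): 0.15,
    ("macOS", "Firefox"): 0.10,
}

def pick_client(rng: random.Random) -> tuple[str, str]:
    """Draw a random device/browser pair according to the traffic weights."""
    envs, weights = zip(*CLIENT_MIX.items())
    return rng.choices(envs, weights=weights, k=1)[0]

def load_profile(steps: int, baseline: int, peak: int,
                 rng: random.Random) -> list[int]:
    """Concurrent-user counts that ramp toward a peak but never fall
    below a nonzero baseline -- production systems are never idle."""
    profile = []
    for i in range(steps):
        ramp = baseline + (peak - baseline) * i // max(1, steps - 1)
        jitter = rng.randint(-baseline // 4, baseline // 4)  # traffic wobbles
        profile.append(max(baseline, ramp + jitter))
    return profile
```

Feeding each simulated user a freshly drawn client environment, on top of a profile that starts at the baseline rather than zero, gets the test much closer to what the system will actually face.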

3. Know the difference between measured speed and perceived performance

Performance may mean one thing to you, but another thing to your user. If you are simply measuring load times, you're missing the big picture. Your users aren't waiting for pages to load with stopwatches in hand. Rather, they are waiting for the app to do something useful.

So how quickly can users get to useful data? To find out, you need to include client processing time in your measure of load times. It is easy to "cheat" on a performance test by pushing processing work from the server to the client. From the server standpoint, this makes pages appear to load more quickly. But forcing the client to do extra processing may actually make the real-world load time longer.

It isn't necessarily a bad strategy to push processing to the client. But you must take the impact on perceived speed into account during testing. Remember: Measure performance from the user's perspective, not the server's.
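A minimal way to see the difference between the two measurements is to time the server round trip and the client-side work separately. The two `sleep` calls below are stand-ins for network latency and rendering work, not real measurements:

```python
import time

def server_response() -> str:
    """Stand-in for a network round trip; returns a raw payload."""
    time.sleep(0.05)
    return '{"rows": 1000}'

def client_render(payload: str) -> str:
    """Stand-in for parsing/rendering work pushed onto the client."""
    time.sleep(0.10)
    return f"rendered {payload}"

def perceived_load_time() -> tuple[float, float]:
    """Return (server_time, total_time). The server sees only the first
    number; the user waits for the second."""
    t0 = time.perf_counter()
    payload = server_response()
    t1 = time.perf_counter()
    client_render(payload)
    t2 = time.perf_counter()
    return (t1 - t0, t2 - t0)
```

In this toy setup the server-side number looks great while the user's wait is three times longer, which is exactly the gap a server-only test would miss.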

4. Correlate performance issues to underlying problems

Let's say you've built a robust testing environment and have a solid and thorough understanding of performance from a user perspective. Now what? To be effective, your testing strategy must correlate performance bottlenecks with the code that's creating problems. Otherwise, remediation is very difficult.

In one recent test example, we found that a single page was generating four different REST calls, resulting in 62 database queries, 28 of which were duplicates. That is a massive amount of processing power—and a lot of wasted time and cycles—for a single page. Multiply that over thousands and thousands of page views and you can easily see how the environment could be improved by optimizing these calls.

Our team solved the problem in two ways. First, it used a caching system to ensure that the duplicate database queries didn't result in a fresh call to the database. Second, work was done to optimize the remaining queries and improve their efficiency. This is of course all part of the best practices for any application design project. However, the team could only isolate the problems by testing the system under a realistic workload and tracing the bottlenecks back to the code.
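The caching half of that fix can be illustrated with a simple memoized query function. This is a hedged sketch, not the team's actual implementation; the page and queries are hypothetical, and `functools.lru_cache` stands in for whatever caching layer a real system would use:

```python
import functools

QUERY_LOG: list[str] = []

@functools.lru_cache(maxsize=None)
def run_query(sql: str) -> str:
    """Stand-in for a database call; identical SQL is served from cache."""
    QUERY_LOG.append(sql)  # only cache misses reach the database
    return f"result of {sql!r}"

def render_page() -> None:
    """Hypothetical page whose four REST calls repeat the same queries."""
    for _ in range(4):  # four REST calls behind a single page view
        run_query("SELECT * FROM users WHERE id = 7")
        run_query("SELECT * FROM carts WHERE user_id = 7")
```

Eight query invocations collapse to two actual database hits, mirroring how deduplicating the 28 repeated queries in the example above cuts wasted cycles per page view.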

5. Make performance testing part of agile development

All too often, performance testing has been isolated in its own tower and left until the end of a development project. At that point it's probably too late to fix issues easily. Any problems discovered are likely to significantly delay your project by throwing development into firefighting mode.

To avoid this problem, make performance testing part of the agile development process. That way you can find problems and fix them quickly.

Specifically, testing must be integrated into development: performance engineering should be represented in daily scrum meetings and made responsible for measuring and tracking performance as the code is written, within the same development cycle.
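One concrete way to keep performance inside the sprint is a budget check that runs with the rest of the test suite, so a regression fails the build the day it is introduced. The endpoint, budget, and run count below are illustrative assumptions:

```python
import time

LOAD_TIME_BUDGET_S = 0.5  # hypothetical per-endpoint budget agreed with the team

def fetch_dashboard() -> str:
    """Stand-in for the endpoint under test."""
    time.sleep(0.05)
    return "ok"

def check_performance_budget(fn, budget_s: float, runs: int = 5) -> float:
    """Time several runs and fail if the slowest exceeds the budget."""
    worst = 0.0
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        worst = max(worst, time.perf_counter() - t0)
    if worst > budget_s:
        raise AssertionError(
            f"{fn.__name__} took {worst:.3f}s, over the {budget_s}s budget")
    return worst
```

Wired into CI, a check like this turns performance from an end-of-project firefight into a routine pass/fail signal on every commit.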

Think outside the box

Put it all together, and the key to realistic testing is to take a broad view of performance. Do you know what your users care about? Have you thought about the infrastructure you will need for realistic tests? Do you know how to trace problems back to their source? Do you have a plan for collaborating with your developers? Think big, and your testing problems will get a lot smaller.
