Modernize your performance testing: 6 tips for better apps

Leandro Melendez, Manager of Performance Testing, Qualitest

The world of application development keeps evolving at breakneck speed with respect to processes, delivery, and methodologies. But it's not just developers who are struggling to keep up with constantly changing software: This evolution is forcing test engineers to modernize their performance testing practices—and to let go of old methodologies that can't keep up.

Here are tips that will help your team implement modern performance testing practices—and drop outdated processes that drag down your results.

1. Modern performance testing goes beyond load testing

When organizations decide to embrace performance testing, they typically start by creating load automation, executing load scenarios, and slamming the system with load to see how it holds up.

This practice has caused performance testing and load testing to be wrongly treated as interchangeable terms. Even performance testing professionals often mix them up, which perpetuates the bad old tradition of testing performance by running only load automations and load tests.

Today, load testing and load automations are only part of what a performance testing practice should exercise. They should be among the last steps you execute, and in some situations you shouldn't run them at all.

Performance testing encompasses myriad practices and actions that must be taken as a whole. Load tests have their place, but first you need to perform other tasks, described below.
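
For context, here is what such a load automation might look like, sketched with Locust (one of many load-testing tools); the host and URL paths are illustrative assumptions, not part of any specific system.

```python
# A minimal load automation sketch using Locust; the paths and host are
# illustrative assumptions.
from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    # Each simulated user pauses 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task
    def browse_catalog(self):
        self.client.get("/catalog")

    @task
    def view_product(self):
        self.client.get("/catalog/item/42")

# Run with, e.g.: locust -f loadtest.py --host https://staging.example.com
```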

2. Think early about performance

The traditional approach to performance testing doesn't address performance assurance: the full set of tasks you may need to perform across the lifecycle to ensure the best possible performance.

The best processes to assure good performance require tasks to be executed even before writing the first line of code. Some of those tasks create mechanisms in the environments, including pipelines, monitoring, and instrumentation.

Old strategies postpone automation and front-end load tests until the very last steps of the software development lifecycle, which limits the time available to complete even the usual load testing. This weakens performance assurance, leaving little time for corrections and driving up costs when problems surface late. If rework is needed, or the team must release faulty software into production, the impact is significant.

Think early about performance, including not only infrastructure, but also all performance implications from the requirements gathering stage to building epics, features, and tasks. Everything you implement around performance should define metrics that must pass before you mark anything as done.

Teams must define measurements such as response time on a single thread, concurrent response, the number of database connections/reads, maximum bandwidth consumed, and so on. With this performance focus, your teams—including your developers—will have performance etiquette in mind before, during, and after creating software. 
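
As one way to make such measurements enforceable, here is a hypothetical check that treats a response-time budget as a pass/fail criterion before anything is marked as done; the endpoint, budget, and sample size are assumptions for illustration, not prescriptions.

```python
# A minimal sketch of a performance criterion expressed as an automated
# check (runnable with pytest). Endpoint and budget are illustrative.
import time
import statistics
import requests

ENDPOINT = "https://staging.example.com/api/orders"  # hypothetical URL
SINGLE_THREAD_BUDGET_MS = 200   # max acceptable single-request response time
SAMPLE_SIZE = 30                # requests sampled per check

def test_single_thread_response_time():
    """Fail the build if the median response time exceeds the budget."""
    samples = []
    for _ in range(SAMPLE_SIZE):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=5)
        samples.append((time.perf_counter() - start) * 1000)
    median_ms = statistics.median(samples)
    assert median_ms <= SINGLE_THREAD_BUDGET_MS, (
        f"median {median_ms:.1f} ms exceeds budget {SINGLE_THREAD_BUDGET_MS} ms"
    )
```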

3. Your developers are your first line of defense

Contrary to the old ways of thinking about the software lifecycle and QA practices, in which developers were disconnected from QA efforts on the code they created, your developers must be wholly involved in QA and performance assurance from the beginning.

The old mindset made it difficult to identify defects introduced in the code and allowed those defects to reach, and at times slip past, QA, acceptance, and performance tests into production. And the cost of fixing defects in production is much higher than the cost of catching them earlier.

Modern practices suggest implementing rules for what developers deliver. One possibility is implementing telemetry, instrumentation, unit tests, and timers inside the application code and storing the performance measurements. Those actions help trigger, detect, and measure performance issues, even at the development stage, and make it easier to identify and report any problem, even before you check in any code.
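
To illustrate the idea, here is a minimal timing sketch: a decorator that records how long each call takes. The in-memory list stands in for whatever metrics backend your team uses, and all names are illustrative assumptions.

```python
# A minimal sketch of developer-side instrumentation: a decorator that
# times function calls and stores the measurements.
import time
import functools

measurements = []  # stand-in for a real metrics store

def timed(func):
    """Record the wall-clock duration of each call to func."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            measurements.append({"function": func.__name__, "ms": elapsed_ms})
    return wrapper

@timed
def create_order(items):
    # ... application logic ...
    return {"status": "created", "items": items}
```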

4. Measure and observe everything

It helps to have application performance measurements at every moment in the software development process. As soon as developers write code, the team should have performance measurements, which should continue until production.

Having these measurements is a drastic change from old practices, where often there was no way to measure the performance of an application and its components. Usually, no mechanisms were in place until the software reached a test environment or even the production stage. In some cases, there weren't even any metrics in production.

Even so, performance metrics in the code are not enough. Teams must complement them with application performance management (APM) systems. An evolution of older application performance monitoring tools, APM systems provide lighter agents and myriad new functions for monitoring and managing performance thresholds.

Teams must implement APM agents and instrumentation in every environment that the application passes through in the software lifecycle. As code passes from development environments to staging, testing, branches, and so on, your team will be able to observe and measure performance metrics and any outstanding deviation in a continuous manner.
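
As an illustration, here is a minimal instrumentation sketch using OpenTelemetry, an open standard whose traces many APM backends can ingest; the service, span, and attribute names are assumptions, and a real deployment would export to an APM backend rather than the console.

```python
# A minimal tracing sketch with OpenTelemetry (requires opentelemetry-sdk).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Wire up a provider that prints spans to the console; swap in your APM
# backend's exporter in a real environment.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def process_payment(order_id: str) -> None:
    # The span records its own start time and duration automatically.
    with tracer.start_as_current_span("process_payment") as span:
        span.set_attribute("order.id", order_id)
        # ... payment logic ...
```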

5. Involve your developers as you create test automations

Another outdated practice is waiting to automate testing until the end, just before releasing code into production. This issue affects both performance automation and test automation in general. Traditionally, performance testers and QA teams often had to reverse engineer code, functions, and front ends in order to automate testing of them, which added considerable overhead to every task.

On many occasions, testers could not automate the process at all because the software components were sealed, compiled, or otherwise inaccessible. In those cases the software went completely untested, or testers had to fall back on manual testing.

To avoid this, developers creating the code must consider the nature of the test automations used and ensure that the code can be easily triggered from those automations. They can implement calling methods and create test backdoors, test-oriented APIs, and any mechanism that allows for automated testing.
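
As a sketch of one such mechanism, here is a hypothetical test-only endpoint, gated behind an environment flag so it never ships enabled to production; Flask and all the route names here are illustrative assumptions.

```python
# A minimal sketch of a test backdoor: a route registered only when a test
# flag is set, so automation can seed state instead of driving the UI.
import os
from flask import Flask, jsonify

app = Flask(__name__)

if os.environ.get("ENABLE_TEST_HOOKS") == "1":
    @app.route("/test-hooks/reset-orders", methods=["POST"])
    def reset_orders():
        """Clear order state so load scenarios start from a known baseline."""
        # ... reset logic against a test database ...
        return jsonify({"status": "reset"})
```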

These mechanisms have multiple benefits. First, creating the test automation needed for general QA and performance measurements, including load, becomes easier. Second, the team can integrate these tests and validations into continuous, automated processes that use the results as gates for letting code move into production.

6. Schedule, run, measure, validate, repeat

In traditional practice, testing professionals thought of performance testing as a single load test executed once before launch, or at most once a year if anything changed. But these days your solution is expected to change frequently. Performance test results become obsolete the moment you include new code or finish a sprint release, making a once-only performance test a useless practice.

If you follow the best practices above, your team will efficiently and continuously measure performance at every step of the software development lifecycle and will be well positioned to integrate every performance automation and threshold into your delivery platform.

Your tests will be light and highly automatable so that your team can schedule them or configure them to be triggered by code check-ins, scheduled jobs, or external events. As the automations are triggered, your teams will receive performance measurements continuously, allowing you to implement thresholds that will automatically stop new code or let it reach production.
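
As an example of such a threshold, here is a minimal sketch of a pipeline gate that fails the build when a percentile exceeds its budget; the results-file format and the threshold value are assumptions for illustration.

```python
# A minimal sketch of a CI gate: read the latest performance results and
# exit nonzero when the p95 response time exceeds the budget.
import json
import sys

P95_THRESHOLD_MS = 500  # illustrative budget

def main(results_path: str) -> int:
    with open(results_path) as f:
        results = json.load(f)          # e.g., {"p95_ms": 430, ...}
    p95 = results["p95_ms"]
    if p95 > P95_THRESHOLD_MS:
        print(f"FAIL: p95 {p95} ms exceeds {P95_THRESHOLD_MS} ms")
        return 1                        # nonzero exit blocks the release
    print(f"PASS: p95 {p95} ms within budget")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```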

Finally, your automation will be repeatable even in production, allowing the tests to run in any tier and environment of the application. With alerting thresholds in place, the team gets notifications and corrective triggers automatically, so you avoid having to watch everything at all times or being overloaded with uneventful measurements.

Think beyond the load 

Following the same old practices for performance testing and assurance can be unproductive or even harmful to your application, so move your focus away from just automated load tests. Think early about your performance needs and risks. Involve the developers in performance-enabling tasks.

Measure performance everywhere in your code and in every environment. Make your solution easy to automate. And allow your automations to be triggered constantly and whenever changes happen.

If you do these things, you will be several steps ahead in modernizing your performance assurance efforts.

Want to know more? Drop into my talk, "Performance—Really What Is It These Days? Why Does It Matter?" on October 7, 2021, during STARWEST. Both in-person and virtual registration options are available. The conference runs October 3-8, 2021. You can also catch me on the PerfBytes Podcasting channel, where I host PerfBytes Español edition, and on my YouTube channel, Señor Performo in English.
