
How to build a performance testing pipeline

Viktoriia Kuznetcova, Performance Engineer and Test Automation Engineer, Breakpoint

If your company has adopted DevOps, you'll want to do performance testing as a part of your continuous integration/continuous delivery (CI/CD) release train.

Creating your first automated performance testing pipeline can seem overwhelming. I know the feeling: I've hit quite a few bumps over the years automating various stages of the pipeline for different web applications.

I've learned the hard way how to navigate the process of setting up an automated performance testing pipeline. Follow the steps below and you won't have to repeat my mistakes.


1. Set up the test environment

Do you use an on-premises test environment that's managed by someone else and is likely shared between projects and test teams, or do you have the option to provision a new test environment each time you need it? Your answer will determine many of the choices you'll need to make—and will drive your test environment setup.

For an on-premises environment, you need to ensure that it is in the same state each time you run a test. Think about it this way: Is there anything in the test environment that can influence the results of your test? If the answer is yes, you need to find a way to minimize that effect.

If you can set up your test environment in the cloud, do so, and make sure to use whatever tools developers are using for their deployments. However, remember to adjust the process to get a proper environment for performance testing.

At the very least, you need to plug in your own database server and scale the servers up until they are fit for purpose. You don't necessarily need production scale for this, but you need an environment big enough to give you meaningful results.

Also, beware of noisy neighbors; in the cloud you're usually sharing a physical server with others, so the smaller your virtual machine (VM), the greater the effect your neighbors can have on the performance of your VM.


2. Set up your test data

Next, sort your test data into three categories:

Reusable test data

This is the data that your test does not affect in any way. All you need to do here is prepare it once, and then ensure no one messes around with it.

Non-reusable data that isn't destroyed when you redeploy the application

This is the data that your test is modifying, deleting, or creating when it runs its course. In the cloud you can deal with that by restoring your database to a snapshot before each test.

For on-premises environments, you need to find a way either to return a fixed set of test data to its initial state or to dynamically select a subset of test data to work with from a large database before each test.
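The "reset a fixed set of test data" approach can be sketched as a seed routine that runs before every test. This is a minimal illustration using sqlite3 as a stand-in for your real database; the table and seed rows are hypothetical, and in practice the seed data would come from a fixture file or database dump:

```python
import sqlite3

# Hypothetical fixed dataset; real seed data would come from a dump or fixture file.
SEED_USERS = [(1, "alice", "active"), (2, "bob", "active"), (3, "carol", "disabled")]

def reset_test_data(conn):
    """Return the mutable tables to a known initial state before each test run."""
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS users")
    cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
    cur.executemany("INSERT INTO users VALUES (?, ?, ?)", SEED_USERS)
    conn.commit()

conn = sqlite3.connect(":memory:")
reset_test_data(conn)
# A test run may mutate the data freely...
conn.execute("DELETE FROM users WHERE id = 2")
# ...because the next run resets it to the same known state.
reset_test_data(conn)
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

The key property is idempotence: running the reset twice in a row leaves the database in exactly the same state, so every test starts from identical data.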

Non-reusable test data that is destroyed when an application is redeployed

For example, you might need to import users and/or re-create user activity logs to test your application. Depending on your situation, you might not have this problem at all. But if you do, you need to introduce a process to generate such test data to a production-like volume before you start the testing.
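One way to generate such data is a deterministic generator seeded with a fixed value, so every pipeline run imports the same production-like dataset. This is a sketch under assumed field names (`id`, `username`, `email`); match them to your application's actual schema:

```python
import random
import string

def generate_users(n, seed=42):
    """Generate n synthetic user records.

    The fixed seed makes the dataset reproducible, so every pipeline
    run tests against identical data. Field names are illustrative.
    """
    rng = random.Random(seed)
    users = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({"id": i, "username": name, "email": f"{name}@example.test"})
    return users

# Scale n up to a production-like volume for your application.
users = generate_users(10_000)
```

Reproducibility matters here: if the imported data differs between runs, you can no longer tell whether a change in test results came from the code or from the data.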



3. Set up a load generation tool

This is the easiest part. On premises you probably have a load generator that's always online. In the cloud you can have a preconfigured VM that you turn off and on as needed, or you can dynamically spin up a VM from an image where everything is already installed and ready for use.

You might also need to take care of firewall rules and load balancers in the cloud. And don’t forget to push or pull the latest test scripts to the load generator before the test. Yes, this requires that your test scripts be stored in a version control system.
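The "push the latest scripts" step can be scripted in the pipeline itself. This sketch only builds the shell commands (a `git` fast-forward pull plus an `scp` copy to the load generator); the host name and paths are placeholders, and you would execute the commands via your CI agent or `subprocess`:

```python
import shlex

def sync_commands(repo_dir, host, remote_dir):
    """Build the commands that refresh test scripts on the load generator.

    repo_dir, host, and remote_dir are placeholders for your setup.
    """
    pull = f"git -C {shlex.quote(repo_dir)} pull --ff-only"
    copy = f"scp -r {shlex.quote(repo_dir)}/. {host}:{shlex.quote(remote_dir)}"
    return [pull, copy]

cmds = sync_commands("perf-tests", "loadgen01", "/opt/perf-tests")
```

Using `--ff-only` makes the pipeline fail loudly if the local checkout has diverged, rather than silently testing with stale or merged scripts.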

4. Set up monitoring tools

Remember, this is an automated pipeline, so there is no ad hoc monitoring; you need to gather, ahead of time, all the information you might possibly need for later analysis. Keep that in mind, and pick a set of monitoring tools that let you gather everything in centralized storage.

Decide on a monitoring server, and send everything there—resource utilization metrics, JMeter results, application logs, garbage collection (GC) logs, database AWR reports, what have you. Anything that you might need later, you keep.
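As one concrete option for shipping metrics to a central server, Graphite accepts a simple plaintext protocol: one `<metric-path> <value> <timestamp>` line per data point, sent over TCP (port 2003 by default). The metric path below is a hypothetical naming scheme, not something the article prescribes:

```python
import time

def graphite_line(path, value, timestamp=None):
    """Format one metric in Graphite's plaintext protocol.

    The result is a '<path> <value> <unix-timestamp>' line that you
    would write to a TCP socket connected to the monitoring server.
    """
    if timestamp is None:
        timestamp = int(time.time())
    return f"{path} {value} {timestamp}"

# Hypothetical metric path: environment.host.resource.metric
line = graphite_line("perftest.app01.cpu.user", 42.5, 1700000000)
```

Whatever backend you choose, the point is the same: push every metric to one place, tagged consistently, so post-test analysis never depends on logging into individual machines.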


5. Run the tests

You need to get two things right here. First, monitor error rates so the pipeline can automatically invalidate a test, and stop the run if there are too many errors. Second, make sure that your test results are stable before you let anyone use your pipeline.
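The automatic-invalidation check can be a small script that scans the result file as it accumulates. JMeter's CSV result format (`.jtl`) includes a `success` column per sample, which is all this check needs; the 5% threshold below is an assumed example, not a recommendation from the article:

```python
import csv
import io

# Sample rows in JMeter's CSV (.jtl) format; only 'success' matters here.
SAMPLE_JTL = """timeStamp,elapsed,label,success
1700000000000,120,Login,true
1700000000100,95,Login,true
1700000000200,5003,Login,false
1700000000300,110,Login,true
"""

def error_rate(jtl_file):
    """Fraction of failed samples in a JMeter CSV result file."""
    rows = list(csv.DictReader(jtl_file))
    failures = sum(1 for r in rows if r["success"].lower() != "true")
    return failures / len(rows)

def should_abort(jtl_file, threshold=0.05):
    """Invalidate and stop the test once the error rate exceeds the threshold."""
    return error_rate(jtl_file) > threshold

rate = error_rate(io.StringIO(SAMPLE_JTL))     # 1 failure out of 4 samples
abort = should_abort(io.StringIO(SAMPLE_JTL))  # 0.25 exceeds the 0.05 threshold
```

Running this periodically during the test (or once at the end, at minimum) keeps a broken deployment from producing misleading "performance" numbers.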

Run the pipeline a few times on the same release over a few days, and measure the variation in test results. It should be minimal. If it is larger than you can accept, figure out where the variation comes from and fix it.
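One simple way to quantify that variation is the coefficient of variation (standard deviation divided by mean) of a headline metric across the baseline runs. The response times below are made-up illustrative numbers, and the acceptable threshold is project-specific:

```python
import statistics

def variation(run_results):
    """Coefficient of variation (stdev / mean) of a headline metric
    across repeated runs, e.g. each run's 95th-percentile response time."""
    mean = statistics.mean(run_results)
    return statistics.stdev(run_results) / mean

# Hypothetical p95 response times (ms) from five runs of the same release:
runs = [412, 398, 405, 420, 401]
cv = variation(runs)  # ~0.02 here; decide what level is acceptable for you
```

If the coefficient comes out larger than you can accept, that is the signal to hunt for the source of instability (noisy neighbors, unreset test data, caching effects) before trusting the pipeline's verdicts.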

Otherwise it's business as usual. 

6. Gather and analyze test results

Now it's time to deal with the results. First, gather and store the raw test results alongside the monitoring data; you'll thank yourself later when a test fails and you need to investigate it. Second, you must define programmable pass/fail criteria that make sense for your specific project.

You can get a basic pass/fail implementation from something like the Jenkins Performance plugin, which parses JMeter results. For more complicated criteria, you will need to write some sort of script that goes through the data you gathered and calculates the result. I recommend using R for data analysis, but any scripting language, such as Groovy or JavaScript, will suffice for most situations.

7. Display test results

It doesn't matter how awesome your automated performance testing pipeline is if no one is looking at the test results. To make it shine, you need to present test results in a way that is useful for your target audience, be it fellow performance engineers, developers, or management.

It is good to have a few levels of reporting, from a simple pass/fail flag for daily standups, to a nice dashboard with a high-level picture of the application's performance, to detailed reports and logs for a specific test run.

8. Don't forget to clean up

In the cloud, cleaning up is easy: You just turn off or destroy your test environment, and you are done.

But on premises, especially in a shared environment, you must return everything to the state it was in before you started. The goal is to avoid spoiling testing for someone else.

Not easy, but worth it

That's how to start your automated performance testing pipeline, in a nutshell. I won't kid you: Doing so will require a substantial investment of time and effort. But if your company is moving toward monthly, weekly, or daily releases, it will be well worth the effort.

And keep this in mind: Even if you're not setting up a fully automated pipeline, you can still use the tips above to streamline classic performance testing. The goal is to speed up any kind of performance testing, automated or manual. 

To learn more, including about specific tools helpful for each step of the way and more tips and tricks, drop in on my session, "Building Performance Testing Pipeline for CI/CD Projects," at PerfGuild, the online performance testing conference, on April 8-9. Can't make it to the live talk? Registered participants can watch video recordings of all talks, including the Q&A sessions that follow, after the conference. 
