
Fast fixes for slow tests: How to unclog your CI pipeline

Matthew Heusser Managing Consultant, Excelon Development

You may be familiar with one of these software testing scenarios.

You have watched the tests get slower week after week for a year, when suddenly someone realizes that testing is constraining delivery and demands that something be done.

Congratulations. You are something.

Or maybe you have just started a test tooling adventure, the tests are running quickly, and you would like to keep it that way.

Either way, we thought you could use a little help.

TechBeacon asked nine experts in software testing for their advice on how to speed up the testing process, focusing on how to speed up a continuous integration (CI) pipeline bogged down by tests. Here's what they had to say.

Remove sleeps and triage tests

Dave McNulla, staff software engineer, Teradata:

I hate waiting for test results! If tests run too slowly, I look to run in parallel (or expand parallel execution), which requires the tests to be completely independent. Adding more systems, whether cloud or on-premises hardware, costs money, but so does making developers wait for results. That involves labor costs, opportunity costs, and the cost of delay to the business. It is a triple threat.

To speed up tests, I often look for "sleeps" in test-ware. They waste time, and they are indicative of a flaky test. The test author does not really know what to wait for and ends up waiting for an extended period of time. If the test is important, there are ways to poll more often so the test resumes sooner.
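The polling idea above can be sketched as a small helper. This is a minimal illustration, not any particular framework's API; the names `wait_until`, `timeout`, and `interval` are made up for the example:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll a condition frequently instead of one long, fixed sleep.

    Returns True as soon as the condition holds, or False on timeout,
    so the test resumes the moment the system is ready.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# A condition that is already true returns immediately -- no wasted seconds.
assert wait_until(lambda: True) is True
```

A fixed `time.sleep(5)` always costs five seconds; the poll above costs only as long as the system actually takes, and its timeout doubles as a failure signal for a genuinely stuck test.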

After that, I look for low-value tests to run less often. Value is the likelihood of failure paired with the priority of the bugs a failure finds. If a test never fails, only finds low-priority bugs, or is redundant with another test, we can run it less often, perhaps overnight or weekly.

Push tests down the pyramid

Tim Ottinger, senior technical consultant, Industrial Logic:

One solution to slow tests is to trade integration for speed. If the system under test calls for a database query, replace the query with a mock that returns a prefab dataset instead. This saves the network and database time and should be good enough as long as your dataset meets the criteria you expect from your database query. Try moving slow tests down Cohn's test pyramid. If you are doing a system test, could it be an integration test? Could the integration test be replaced by a few good microtests? The lower-level the test, the faster it typically is.
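Here is one way the database swap might look in Python, using the standard library's `unittest.mock`. The function and the fake rows are hypothetical stand-ins for whatever your code actually queries:

```python
from unittest.mock import Mock

def top_customer_names(db, limit=3):
    # In production this issues a real query over the network;
    # the test hands in a mock that returns a prefab dataset instead.
    rows = db.query("SELECT name FROM customers ORDER BY spend DESC")
    return [row["name"] for row in rows[:limit]]

# The mock skips the network and the database entirely.
fake_db = Mock()
fake_db.query.return_value = [
    {"name": "Ada"}, {"name": "Grace"}, {"name": "Linus"}, {"name": "Ken"},
]
assert top_customer_names(fake_db) == ["Ada", "Grace", "Linus"]
```

The trade-off is exactly the one described above: the test no longer exercises the real query, so it is only as good as the prefab dataset's fidelity to what the database would return.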

Also, don't use Gherkin only to drive the GUI. We find that Gherkin-based tests work great as an alternative user interface to an application. They call the same functions the real GUI would call, yet run much faster.

Also, realize that if your tests require you to stand up a full application with all its supporting services, then you're going to be subject to long startup/shutdown times. If you can, avoid that sort of full environment, or try to do it just once for the full test run, not on every test case.
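The "do it just once for the full run" idea can be approximated with a cached startup function; in pytest the idiomatic equivalent is a session-scoped fixture. This sketch uses invented names (`shared_environment`, `STARTUPS`) purely to show the caching pattern:

```python
import functools

STARTUPS = {"count": 0}  # tracks how many times the expensive startup ran

@functools.lru_cache(maxsize=1)
def shared_environment():
    """Stand up the (expensive) environment exactly once per test run."""
    STARTUPS["count"] += 1  # imagine booting the app and its services here
    return {"db": "connected", "services": ["auth", "billing"]}

# Every test asks for the environment; only the first call pays for startup.
env_a = shared_environment()
env_b = shared_environment()
assert env_a is env_b
assert STARTUPS["count"] == 1
```

With pytest you would get the same effect by decorating the setup with `@pytest.fixture(scope="session")`, so the startup and shutdown bracket the whole run rather than every test case.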

The faster your tests are, the more often they'll be run, and that's a key performance metric for developers. The tests they don't run do not catch any issues, and the less often they run them, the more confusing it is to understand what is wrong. Fast tests are run frequently, and this helps developers work in smaller, safer steps.

Finally, consider in-memory databases and small datasets. This one simple trick has paid significant dividends.
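In Python, the simplest version of that trick is SQLite's `:memory:` mode, which keeps the whole database in RAM with no server and no disk I/O. The table and helper below are illustrative only:

```python
import sqlite3

# A tiny in-memory database seeded with a small dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)", [(9.99,), (24.50,)])

def order_count(connection):
    return connection.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

assert order_count(conn) == 2
```

Because the database is created fresh per run, each test suite also starts from a known, minimal dataset, which helps with the shared-data problems discussed elsewhere in this piece.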

Shift together

Don Jackson, chief technologist, application delivery management, Micro Focus:

Some people state that the solution for speeding up testing is to shift it to the left, toward development of the application. Others say it is shifting to the right. But what needs to occur is shifting together. Organizations need to automate their testing and incorporate it into their CI/CD pipelines. That automation needs to have contributions from all three personas that have a vested interest in test automation: the traditional QA automation engineer, the domain expert/business analyst (shift right), and the developer (shift left). 

When constructing the tests, organizations need to focus on the business-critical process flows/paths to mitigate the business risk and ensure that checkpoints are not run more than once for the same object that is in the same state. Furthermore, the pipeline needs to leverage the concepts of distributed testing, which allows the suite of tests to be executed on multiple nodes, concurrently. Lastly, exploratory testing should occur during the execution of the pipeline by the team, adding the human/unpredictable element into the testing and finding those oddball defects prior to release. 

Create faster feedback loops

Marisa Shumway, vice president of product marketing, CloudBees:

Feature flags can improve your testing process in several ways. For example, QA can use the flags to turn off behavior to compare it to previous releases. Without this, getting a developer to change the code and create a new build could cause a delay of hours or days, and, realistically, the test just might never run. With the flag override view, QA can shorten the feedback cycle by having complete freedom to toggle flags in the environment with the click of a button. Now QA is empowered to work independently, without asking developers to break their focus or waste precious business cycles. Once the flag exists, if there is a problem, a rollback is a simple configuration change. This eliminates the need for retest/rebuild, making the rollback process take seconds instead of hours. That reduces the risk exposure, which allows the team to proceed with less delay.
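The toggle-without-rebuild idea can be shown with a toy in-process flag store. Real systems (CloudBees Feature Management among them) do this with a managed service and dashboard; the class and flag name here are invented for illustration:

```python
class FlagStore:
    """Toy flag store: QA flips behavior with a config change, not a rebuild."""

    def __init__(self, **flags):
        self._flags = dict(flags)

    def is_enabled(self, name):
        return self._flags.get(name, False)

    def toggle(self, name):
        self._flags[name] = not self._flags.get(name, False)

def checkout_message(flags):
    # Application code branches on the flag at runtime.
    return "new checkout" if flags.is_enabled("new_checkout") else "legacy checkout"

flags = FlagStore(new_checkout=True)
assert checkout_message(flags) == "new checkout"
flags.toggle("new_checkout")  # instant rollback: configuration only
assert checkout_message(flags) == "legacy checkout"
```

The rollback in the last two lines is the point: no retest/rebuild cycle, just a state change that takes effect immediately.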

Time tests for quick performance indicators

Philip Ross, senior software engineer, Synopsys Software Integrity Group:

Once you’ve identified slow tests, review each, and if there’s no obvious bottleneck, run the test with a profiler to see the breakdown of where the time is being spent. With that information, hopefully you can see any hot spots and can then work on fixing them.
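In Python, that breakdown comes from the standard library's `cProfile` and `pstats`. The deliberately slow function below is a stand-in for whatever your real test exercises:

```python
import cProfile
import io
import pstats

def slow_test_body():
    # Stand-in for the code under test; a real hot spot would surface here.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_test_body()
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
assert "slow_test_body" in report  # the hot spot shows up in the breakdown
```

Sorting by cumulative time is usually the right first view: it points at the call tree where the run is actually spending its seconds, rather than at whichever leaf function happens to be called most often.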

Consider checking your configuration and seeing if there are developer-like performance settings in the test environment. Some of these developer-like settings may have traded runtime performance for compile-time performance or something else, which you may not want on a static CI test run.

Also, check to see if there are hard-coded constants to set per environment. For instance, a sleep for retrying HTTP requests should be set to 0 in a test environment where you’re not hitting a real endpoint; you can fix that by moving the sleep amount into your configuration and setting it to 0 in the test environment.
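Moving the sleep amount into configuration might look like this; `fetch_with_retries` and `retry_delay` are invented names for the pattern, not a specific library's API:

```python
import time

def fetch_with_retries(fetch, retries=3, retry_delay=1.0):
    """Retry a flaky call; retry_delay comes from configuration."""
    last_error = None
    for _ in range(retries):
        try:
            return fetch()
        except ConnectionError as err:
            last_error = err
            time.sleep(retry_delay)  # 1s in production, 0 in the test config
    raise last_error

attempts = []
def flaky_fetch():
    # Fails twice, then succeeds -- no real endpoint involved.
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

# Test environment sets the delay to 0: three attempts, zero waiting.
assert fetch_with_retries(flaky_fetch, retry_delay=0) == "ok"
```

With the hard-coded one-second sleep, this test would take two seconds; with the delay in configuration, it takes effectively none.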

Tests should check one thing well

Michael Larsen, senior automation engineer and Socialtext show producer, The Testing Show:

Have a look at what your tests are actually doing. A good rule of thumb is that any test should do exactly one thing unless otherwise warranted. This atomic nature is the key to creating unit tests, and it's just as important for other integration or end-to-end style tests. It's common that slow tests are overloaded—they try to do too many things. By paring down the steps each test has to perform, the overall time for each test goes down, and often there can be a synergistic effect. You may find that running multiple single-purpose tests is quicker than running one overloaded test.
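A tiny sketch of the paring-down: instead of one overloaded test that adds an item, totals the cart, and checks several behaviors in sequence, each check does exactly one thing. The cart functions here are hypothetical code under test:

```python
def add_item(cart, item):
    cart.append(item)
    return cart

def cart_total(cart):
    return sum(price for _, price in cart)

# Each test is atomic: one behavior, one assertion.
def test_add_item_appends():
    assert add_item([], ("book", 12.0)) == [("book", 12.0)]

def test_total_sums_prices():
    assert cart_total([("book", 12.0), ("pen", 2.5)]) == 14.5

test_add_item_appends()
test_total_sums_prices()
```

Beyond speed, the single-purpose shape pays off on failure: when `test_total_sums_prices` goes red, you know the totaling logic broke, with no need to untangle which of a dozen steps in an overloaded test actually failed.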

Run tests in parallel

Andrew Knight, lead software engineer in test, PrecisionLender:

One of the best ways to speed up tests is to run them in parallel. Attempting to shave one second off this test here and three seconds off that test there doesn't have the same impact as running two tests concurrently instead of serially. Most test frameworks already have built-in parallel capabilities. Enabling parallel execution is frequently just a setting or a command-line option. However, the real challenge with parallel testing is avoiding collisions. If tests share any resources, such as users, services, or databases, then there is a chance that they could access shared resources at the same time, thereby “colliding.” For example, in a store app, if one test tries to retrieve an order right after another test deletes it, then the first test will fail.

The best way to avoid collisions is to make sure tests do not modify any shared data. If a test needs to change data for testing purposes, then it should create new data for it to use exclusively. Going back to the store app example, tests should always create new orders instead of reusing existing ones.
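A minimal sketch of the "always create new data" rule, continuing the store-app example. The `ORDERS` dict stands in for a shared backend, and the unique ID is what keeps parallel tests out of each other's way:

```python
import uuid

ORDERS = {}  # stand-in for a shared backend store

def create_order(items):
    """Each test creates its own order instead of reusing shared data."""
    order_id = uuid.uuid4().hex  # unique ID: parallel tests cannot collide
    ORDERS[order_id] = {"items": list(items), "status": "open"}
    return order_id

def delete_order(order_id):
    ORDERS.pop(order_id, None)

# Two tests running side by side each own their data exclusively.
first = create_order(["book"])
second = create_order(["pen"])
delete_order(first)  # one test deleting its order cannot break the other
assert ORDERS[second]["items"] == ["pen"]
```

Because neither test touches the other's order, the retrieve-after-delete collision described above simply cannot occur, no matter how the scheduler interleaves them.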

Make sure CI is accelerating things, not creating drag

Lalitkumar Bhamare, senior software test engineer, XING; CEO, co-founder, and editor, Tea-time with Testers magazine:

I see a lot of organizations creating more problems with CI than they solve. In my experience, CI-related things that drag organizations behind are poorly implemented CI jobs and not giving enough thought to what amount of automation should be done at what layer. In addition, poor infrastructure can lead to flakiness, while ever-failing automated checks that block deployment create an ongoing mystery to solve. That can lead to engineering time and productivity wasted on "fixing" the CI pipeline, plus firefighting caused by environment/infrastructure problems that do not reflect actual production failures.

If that sounds like you, one quick fix I can offer is building CI jobs so they can only run selective automated checks—for example, only for the areas affected by a particular code change. That will speed up the test run and prevent false failures, and it is cheap. Another option is to stop kicking off builds for nonproduction code changes, such as a change to "helper" scripts or changes in the automated checks.

See the whole board

James Bach, consulting software tester, Satisfice:

The first thing I do is to center myself. I remind myself that I might, in fact, be a bottleneck. I then remind myself that being a bottleneck is not necessarily a bad thing. (Security is a bottleneck in airports. Is anyone saying, “Let’s get rid of security because I want to save 15 minutes”?). I reflect on how I may not be good enough at my job, however, and that the perception that my work is too slow could be valid.

Once I am centered, I want to understand the perception. What, specifically, seems slow? To whom does it seem slow? Slow compared to what?

This is a process of entering the system that I need to deal with.

As I learn what the concern is all about, I may find that I agree. But a common problem is that people judging the speed of testing haven't thought through the value of the process. Testing is like insurance, in the sense that you don't get insurance because you hope to profit; you get it as a hedge against loss. Testing has a cost, and in return for that investment of money and time, we gain some probability of discovering important, evasive problems. We do our best to match the investment to the potential risk.

Lighten your load to go faster

When it comes to the test process, there are the things that need to run all the time (the CI pipeline), the usual process of testing, and the occasional processes, such as a security audit, that can slow everything down when they are kicked off.

Our experts' advice boils down to a few things: Remove things from the pipeline, speed up the pipeline, improve the general process, or reduce the impact of the special/odd things, such as security and performance testing. It might be possible to bring those into the continuous integration pipeline or to enable developers to do that locally before the pipeline runs.

In any event, if you want to go faster, you can lighten your load or build up your muscles. If you routinely use sleep statements or use high-level tests, congratulations! You might have some easy, fast fixes.

Everyone else may have to do some flexing and heavy lifting.

Special thanks to Lee Hawkins, whose suggestions to run in parallel, use mocking, test at the right level, remove waits, and do one thing well reinforced the ideas of the contributors. 
