
8 ways to rev up your app testing

Matthew Heusser, Managing Consultant, Excelon Development

Test tooling often starts out with fast, furious feedback. Over time, programmers add features, testers add tests, and test runs take longer and longer. To keep busy, technical staff work on something else while they wait.

Eventually the results come back so slowly that they are no longer valid, or, if they are, it takes an archeologist to figure them out. Faster feedback could prevent all of this.

Today's tips aim to speed up the feedback loop by testing less and by spreading the tests out over time. That means running an extended tool suite, whether commercial or open source, so that the fastest, most important tests run continually while broader coverage runs on a slower cadence.

Here are eight ways to speed up your testing.

1. Move redundant operations out of GUI tests

Say you are testing to see whether users in two different accounts can see each other. Setup requires two accounts, two users, and some fake data. Each of those setup operations might take 30 seconds in the user interface—but could be executed in a half-second from the command line.

Those operations will also be tested elsewhere; they are redundant. So build support functions to run setup quickly, or have a sample test database (more on that below).
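As a concrete sketch, a support function like the one below, written in JavaScript against a hypothetical REST endpoint (Node 18+ supplies the built-in fetch), replaces a 30-second walk through the UI with one HTTP call:

// Hypothetical endpoint and payload; adapt to your application's API.
async function createAccount(name, users) {
  const res = await fetch('https://test-env.example.com/api/accounts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name, users }),
  });
  if (!res.ok) throw new Error(`account setup failed: ${res.status}`);
  return res.json(); // account and user ids for the test to use
}

Call it twice, once per account, and setup drops from minutes to around a second.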

Redundant operations are also a problem for features such as search. Once the user interface is tested through one workflow, there isn't really a need to retest the user interface again. Yet there is a need to test for compound search inputs—using and, or, parentheses, wildcards, and so on. Testing this through the GUI causes the kind of delays we want to avoid.

Our next solution is to move those tests to the API layer.

2. Move business logic to API integration tests

Compared to web front ends, microservices and other APIs are stable, unlikely to change, and incredibly fast. Unlike most unit tests, which rarely map to anything a customer sees, API tests can generally be expressed in ways that make sense to customers and tie to user actions. For example: Given a user with this type of data and given this input, expect this result.

With search, this is as simple as having a database with known preloaded data, running search, and expecting specific results. Store the query and expected results in a place the business and technical staff can access them, perhaps at the same level as the GUI tests, and you have an executable specification.

These tests will be able to run without bringing up a browser, typing in a username and password, clicking submit, filling in a search page, or running any JavaScript.
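Here is a sketch of what such an executable specification might look like, using Node's built-in test runner; the /api/search endpoint and the expected ids are invented for illustration:

const { test } = require('node:test');
const assert = require('node:assert');

test('compound search: wildcard AND quoted phrase', async () => {
  // The sample database is preloaded with known records (see tip 3).
  const res = await fetch(
    'https://test-env.example.com/api/search?q=claim*+AND+%22denied%22'
  );
  assert.strictEqual(res.status, 200);
  const body = await res.json();
  // Expected ids come from the preloaded sample data.
  assert.deepStrictEqual(body.results.map(r => r.id), [1042, 1077]);
});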

To do that, we need known good data as search results, which leads to the next trick.

3. Create a sample setup database

One insurance company I worked with refreshed the test server periodically with production data. In order to test, we needed to either find users in the test database who had the conditions we were looking for, or set them up ourselves. The first process was generally manual, making test tooling impossible, and the second took time. Even if we could get below the GUI, creating test customers, groups, subgroups, claim orders, and claim results would take a great deal of time to code and add time to each test run.

Instead, I prefer to have utility functions that allow database export and import. If your software has notions such as "account" or "group," it might be helpful to export a subset of the database, perhaps all the customers in an account or group. Beyond testability, this doubles as a lightweight backup-and-restore feature, one the test process itself will exercise automatically, every sprint.

If the tests create information such as users or orders, it might be possible to export the end result and compare it to a known-good database with a "diff" utility. Differencing the entire database after a test run can find hidden defects the testers didn't even know to look for.
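In practice this can be as small as shelling out to the database's dump tool and diffing the output. A sketch, assuming PostgreSQL's pg_dump and a POSIX diff; the database and file names are invented:

const { execSync } = require('node:child_process');

// Dump the test database after the run, then diff it against a
// known-good snapshot. diff exits nonzero when files differ, which
// execSync surfaces as a thrown error.
execSync('pg_dump --no-owner test_db > after_run.sql');
try {
  execSync('diff known_good.sql after_run.sql', { stdio: 'inherit' });
  console.log('No unexpected changes.');
} catch {
  console.error('Database drifted from the known-good snapshot.');
}

Volatile values such as timestamps and sequence counters will make a naive diff noisy, so exclude or normalize those columns first.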

Having import/export to known good data also makes it possible to speed up the setup of the system in other ways.

4. Accelerate or eliminate setup and teardown

Strong setup and teardown between tests is a popular idea in testing. With small tests and no "bleed" between them, expected results are easy to determine, making debugging easier.

Many small tests also mean the framework will run setup many times. Most companies I've worked with develop hundreds of tests over the long term. At that scale, a two-minute setup and teardown adds hours to every run: two hundred tests at two minutes each is nearly seven hours.

The simplest way to accelerate setup is to not do it at all, tagging some tests as "read-only." The test framework can run any test that follows a read-only test without a teardown or a setup.

Another way to skip setup is to create users in accounts that are segmented. For example, if the test needs to write data, begin by creating an account called "name_of_test%%date_time_stamp%%." The stamp ensures that the accounts are unique, and as long as one account cannot see another's data, the next test will not require a teardown or new setup.
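A sketch of that naming scheme in JavaScript; the random suffix is my own addition, guarding against two parallel workers starting in the same millisecond:

// Build a unique, disposable account name per test so the next
// test needs neither teardown nor fresh setup.
function uniqueAccountName(testName) {
  const stamp = `${Date.now()}_${Math.random().toString(36).slice(2, 8)}`;
  return `${testName}_${stamp}`;
}

const account = uniqueAccountName('search_results');
// e.g. "search_results_1718000000000_k3f9qa"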

Even in the event of information bleed, in some cases it may be possible to make the new test data rise to the top—by sorting by created date, for example. In some cases, loading the test database will be faster than doing the setup. In others, the test might not require a test database, just a single user.

The key here is to figure out the minimum number of dependent operations to run in order to get a test to pass. Once that's done, look at the web server and database setup like a performance-testing project. Find the bottleneck, calculate the delay that bottleneck is causing over time to the feedback loop, and consider whether fixing it is warranted.

5. Optimize the client browser's performance

Take a look at where the tests run and how fast they run. Older browsers that run continuously can leak memory; virtual machines on a farm can be slow. Look at propagation delay from test web server to test client, along with how fast the software that is serving those clients runs. This also applies to native mobile software and hardware.

For the first, innermost loop of the test process, consider running the browser headless. Headless browsers are controlled entirely by a computer program and do not create a user interface.
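With the selenium-webdriver Node bindings, for example, the inner loop might build its driver like this (the headless flag varies by browser and version):

const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

// Build a Chrome driver that renders no visible UI, for the fast
// inner loop of the test process.
async function buildHeadlessDriver() {
  return new Builder()
    .forBrowser('chrome')
    .setChromeOptions(new chrome.Options().addArguments('--headless=new'))
    .build();
}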

The opposite of running a headless browser is to create a video as the test runs. This does not speed up test execution, but it can certainly speed up debugging! The video strategy is more likely to be valuable on the other end of the feedback set—in the overnight run.

6. Eliminate hard waits

Waiting for elements to appear is a classic problem in testing user interfaces. The lazy approach is to code up a hard wait—somewhere between 15 and 30 seconds. When those tests fail, and they will, the tester goes back in and doubles the wait time.

At 30 seconds per wait for a few hundred waits, the total test runtime increases by several hours. Any smaller amount will have less waiting but create the risk that the check will run when the page has not finished rendering.

Most modern tools have a wait_for_element_present_ok method, a wait_for_page_load method, or the capability to build such methods with the existing test framework. Fight for this capability or expect to have test time and false failures explode.

Curtis Petit, a senior tester and consultant, reminded me that wait_for_element_present_ok and similar methods can be built out of code something like this:

// Poll until the condition holds or the timeout expires.
function waitFor(condition, timeoutMs) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (condition()) return true;
  }
  return false;
}

Building the function in something like JavaScript is relatively straightforward. The only question is whether the framework supports checking for the condition.
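With selenium-webdriver's Node bindings, for instance, the built-in equivalent looks something like this (the selector is hypothetical):

const { By, until } = require('selenium-webdriver');

// Wait up to 10 seconds for the results panel, then fail fast,
// instead of sleeping a fixed 30 seconds on every page.
async function waitForResults(driver) {
  await driver.wait(until.elementLocated(By.css('#results')), 10000);
}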

7. Take advantage of tagging

When I was at Socialtext, system-level GUI tests were expressed as wiki pages. Each page could be tagged; for example, we had "sunshine tests," "database tests," "search tests," and "profile tests." 

We also had tests by browser, because Internet Explorer and Safari did not support all the features of our tools. Sunshine tests ran in about a half-hour and provided the basic coverage of the entire application. Profile and search also ran quickly and provided deep coverage of a specific feature. It is possible to combine tags, for example to run all search tests for IE, or all profile tests for Firefox.

Describing tests with many tags allows the programmers to write smart queries on what tests to run—resulting in the most appropriate coverage in the minimum time.
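One lightweight way to implement tagging is to embed the tags in test names and filter at run time. A sketch using Mocha, whose --grep flag selects tests by pattern; the tag names are just a convention:

// Run with: mocha --grep "@search"      (all search tests)
//       or: mocha --grep "@sunshine"    (the fast smoke set)
describe('compound search @search @sunshine', () => {
  it('honors parentheses and wildcards', async () => {
    // ...exercise the search feature here...
  });
});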

8. Run tests on multiple clients in parallel

Last and least, we come to parallelizing tests.

In theory, if the work of test tooling is sliced into 100 small tests that run for between five and ten minutes each, with two minutes of setup, then it is possible to split these over 100 virtual machines and run them all in about 13 minutes total. There are even tools to abstract this work away, so it feels like running a single command.
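On a smaller scale, many runners can shard a suite across local worker processes before you reach for a grid at all. A sketch of Mocha's parallel mode through its .mocharc.js config file; the worker count and file glob are illustrative:

// .mocharc.js — run spec files across eight worker processes.
module.exports = {
  parallel: true,
  jobs: 8,
  spec: ['test/**/*.spec.js'],
};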

Running parallel tests means using either a public or a private cloud, and a private cloud is more likely to top out at 10 or 20 concurrent tests. Running 100 concurrent tests against one web server could lead to timing and debugging issues. Tests not designed to run at the same time can also corrupt one another's data, leading to flaky tests or, more likely, significant rework to get them running in parallel.

If the company can take advantage of the public cloud and the product is a good fit for a test parallelization tool—open source or commercial—then running multiple tests at the same time can really improve system performance. Just don't let it be an enabler for sloppy and slow engineering work.

The complete picture

The general strategy here is to test less, eliminating redundant tests while speeding up the ones that remain. Run the minimum possible set of tests to get fast feedback—before and after a commit—then a larger set a few times a day, and the full set overnight.

Work toward a world where an individual component, such as an API, can be rolled out independently and defects are tied to a change, and you might just eliminate the need for customer-facing, end-to-end system test automation altogether.

Don't think of that as a goal. Instead, think of it as a direction, moving a step at a time.

Those are just a few of my favorite ways to speed up test execution. What are yours? Please use the comments field below.
