
How to choose a functional testing tool: 7 key considerations

Matthew Heusser, Managing Consultant, Excelon Development
 

Type "test tool, how do I pick one?" into Google, and you'll find a wide variety of answers, from open source to best of breed—based on many different assumptions. One result assumes an ideal situation where you need a GUI-based test tool that does not require programming, while another claims that automated tests are code, and a third is more interested in test tooling as examples and documentation that just happens to be executable.

According to Connor Roberts, director of quality and testing for Liquidity Services, organizations sometimes introduce or swap testing tools just because a new manager had experience with a tool at a previous company, or just to save money and look better on the budget sheet.

"Teams that end up using it every day finds themselves with the same frustrations as before, just in a slightly different flavor."
Connor Roberts

The failure: Not asking if the tool actually addresses the problems the team needs to solve.

Tool selection is often based on criteria that are less than optimal. So what are the right ones? Here are seven key things to consider when choosing a functional testing tool.

1. Defect categories

What are the showstopper bugs that appear, and where do they appear? That's an easy enough question to ask; most teams with a bug tracker can find the answer over a lunch hour. That kind of research might find the majority of bugs in the business logic, the database layer, or the graphical user interface (GUI).

If the more important bugs are in the GUI, then test automation of the business logic through unit tests won't add much value. It certainly won't be the first place to start.

Although this is the first question, it can also be the last. After selecting a tool, return to this question. Review the recent defects that matter, found both in test and in production, and ask if the tool realistically could catch those types of defects. If the answer is "probably not" or worse, then restart the tool selection process.

2. Team fit

The next question is probably: "Who will be doing the automating?" If the automation will be done by programmers or programmer/testers, the tool should probably be a code library or package. Likewise, a group of nontechnical testers will be more comfortable if the tool has a record/playback front end.

Some tools record actions and then create code, or create a visual front end that allows programmers to "drop in" to see the code behind the visualization. These offer the best of both worlds.

The main issue here is that the people expected to learn the tool must be willing and able to do so, and they must have the time. If the test process is behind, assigning testers to learn a new tool will add work, slowing the software-delivery process further.

If the regression-test process takes days or weeks to run, automating it, especially from the front end, will slow down testing more, creating a buildup of work until some break-even point. Even after the break-even point, where the tool is no longer slowing the testers down, the old backlog will need to be cleared out.

Of course, if the project is new, or if the company intends to hire a new person to do the test tool work, these objections might not apply. So, do the analysis of how the tool will be added to the team, what it will disrupt, who will do the work, and whether those people have the capability and time to do the work.

3. Programming language and development environment

If the tool has a programming language, there are two approaches: use an incredibly powerful high-level language that is easy to learn, such as Ruby, or write in the same language as the production programmers.

Curtis Pettit, most recently a senior tester at Huge Inc., a digital media agency, prefers the second approach.

"I like to meet the devs where they are, in terms of language. It makes it easier to get code reviews and get them running the tests on their machines before commit." 
Curtis Pettit

If the test is written in the same language as the production code and runs during the continuous integration (CI) run, it may be possible to fail the commit and get the programmers to fix the bug. Better yet, the tool could run as a plug-in inside the developer’s integrated development environment (IDE), minimizing the amount of switching the programmers need to do.
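As a minimal sketch, assuming a Java shop using JUnit 5, a functional check written this way runs in the same build as the unit tests, so a failure can block the commit. OrderService and Order below are hypothetical stand-ins for real production classes, not an actual API.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// A functional check in the production team's own language (Java)
// and test framework (JUnit 5). OrderService and Order are
// hypothetical names, not a real API.
class OrderTotalTest {

    @Test
    void totalIncludesTaxForDomesticOrders() {
        OrderService service = new OrderService();
        Order order = service.createOrder("SKU-123", 2); // two units at $10.00 each

        // A failure here fails the CI build, so the programmer who
        // broke the behavior sees it before the merge completes.
        assertEquals(2160, order.totalInCents()); // $20.00 plus 8% tax
    }
}
```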

If the tool runs outside of the IDE and uses a different programming language, it is unlikely that the programmers will learn the new tool or do the work to support the tool when it reports "failures." 

4. Setup and test-data management process

One of my recent clients purchased a popular test automation tool with a 30-day trial. The company had three different test environments, used shared test-user accounts, and had no easy way to delete orders once they were created. Orders could be canceled, but they continued to appear as canceled orders on the main screen of "today's orders," with the earliest appearing first. Once the system had created 10 orders, new orders no longer appeared on the front page.

The smoke-testing process created three orders, which meant it could run only three times per day.

We made a great deal of progress in the days before the trial expired and spent several thousand dollars purchasing the tool and related training. The problem was, we had no way to clear out the data or create new accounts from the command line—it was all driven through administration screens.

In that case, the "right" tools would probably involve features to create accounts, clear orders, export an account and associated orders as "known good test data," and then re-import it. This would allow tests to start with a known good setup every time.

That would immediately speed up the entire test process, including the humans doing the work.
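For illustration, here is a rough sketch of the setup hooks that client was missing, assuming the product had exposed an administrative API. AdminClient and every method on it are invented names, not a real product interface.

```java
// Hypothetical sketch of scriptable test-data setup.
// AdminClient and its methods are invented, not a real product API.
class TestDataManager {
    private final AdminClient admin;

    TestDataManager(AdminClient admin) {
        this.admin = admin;
    }

    // Give each run its own account so tests stop colliding
    // on shared test users.
    String createFreshAccount(String runId) {
        return admin.createAccount("smoke-" + runId);
    }

    // Start from a known-good snapshot instead of accumulating
    // canceled orders on the "today's orders" screen.
    void restoreKnownGoodData(String accountId) {
        admin.deleteOrders(accountId);
        admin.importSnapshot(accountId, "known-good-orders.json");
    }
}
```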

Another common area for improvement is the ability to create test servers on demand according to a branch or commit. Teams pursuing CI that want end-to-end checking to run as part of the CI generally need to create this anyway.
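In a Java stack, one way to sketch this is with the Testcontainers library, assuming the build publishes a Docker image tagged with the branch name; the myapp:&lt;branch&gt; naming scheme and the port number are invented.

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

// Sketch: spin up a disposable test server for a given branch,
// assuming CI publishes a Docker image tagged with the branch name.
class BranchServer {

    static GenericContainer<?> startFor(String branch) {
        GenericContainer<?> app =
            new GenericContainer<>(DockerImageName.parse("myapp:" + branch))
                .withExposedPorts(8080); // invented application port

        app.start();

        // End-to-end checks point at this throwaway instance.
        System.out.printf("Server up at http://%s:%d%n",
                app.getHost(), app.getMappedPort(8080));
        return app;
    }
}
```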

If the biggest bottleneck in the test process is setup, or if repeatable checking is impossible without automating setup, then the right functional test tool might just be to automate test setup.

5. Version control and CI

Most teams that want to keep the automation from lagging behind the code end up putting the tool run into the CI process. That is, the CI system checks out the code, performs a build, runs the unit tests, and creates a server (if needed) and a client, possibly deploying the software to a mobile device. Then the CI system kicks off a run of end-to-end tests with the functional tool.

Running the tests under CI creates a new requirement: The tests will need to be versioned with the code. When new branches are created, we will want to create a new branch of the tests. That way, multiple CI pipelines can run at the same time, with multiple definitions of "correct."

That means the tool needs to run from the command line and produce output the CI system can interpret. Or at least it needs to be possible to capture the output and transform it into something the CI system can read. Many CI systems have beautiful dashboards and pie charts that can show results to stakeholders. To use them, the data needs to get out of the tool and into the CI system.

Once the tool runs under CI, the power is in getting it to report failures to the offending programmer. This happens by tracking who made the commit that caused the failure, and then getting the programmer to debug and "green" the test or fix the code. That will be easier to do if the programmers know and support the language and if the tests are stored as plain text. Even when stored in version control, it is hard to tell the differences among files that are in a binary format.

6. Reports

Without meaningful output someone can use, a test tool is a bad investment. Dashboards and charts can be powerful features—unless the team plans to push the results into another system with better reports.

Tracking test runs over time can be a powerful feature as well. Stakeholders at different levels care about different kinds of results. Executives at a high enough level might not even want to know pass/fail rates as much as trends. Mid-level managers want to understand the flow of the process. Technical people will want to drill into details of exactly what went wrong on a given test, watching a video of the execution if possible.

7. Supported platforms and tagging

It seems obvious, but if the test tool cannot run on all the platforms and levels the team supports (web, mobile web, iOS native, Android native, API, unit, and so on) then the team will need to cover that risk in a different way—which will lead to the need for more support.

Most of the platforms will have similar use cases: login, search, display product page, tag, checkout, and so on. One common practice for end-to-end tools is to create a "page object," with the common features expressed as functions. The automated checks call those functions. Page objects are created at runtime, making it possible to reuse a path-to-purchase check on every device; in other words, you can rerun the same test against a different page object.
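Here is a sketch of that pattern, with invented names: one interface, one implementation per platform, and a check that accepts whichever page object matches the device under test.

```java
// Sketch of the page-object pattern with invented names: the same
// path-to-purchase check runs against any platform's implementation.
interface CheckoutPage {
    void addToCart(String sku);
    void placeOrder();
    String confirmationNumber();
}

// WebCheckoutPage, AndroidCheckoutPage, and so on would each
// implement CheckoutPage with that platform's driver
// (Selenium, Appium, etc.).

class PathToPurchaseCheck {
    void run(CheckoutPage page) {
        page.addToCart("SKU-123"); // invented product id
        page.placeOrder();
        if (page.confirmationNumber().isEmpty()) {
            throw new AssertionError("checkout produced no confirmation");
        }
    }
}
```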

Some features don’t overlap; they exist only on the full-sized web version of the product.

Tagging is one way of tracking which tests run in which browsers. Another would be to register the page objects, to know if the required type of page object exists for the test.

With tagging in place, it becomes possible to instruct the tool to "run all tests for the Edge browser at full size," or, for that matter, only front-end tests, only back-end tests, only tests that hit a certain API, only profile tests, and so on.
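JUnit 5, for one, supports this style of tagging out of the box; the tag names below are invented. A build tool can then filter on a tag expression, for example Maven's Surefire plugin with mvn test -Dgroups="frontend &amp; edge".

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tagging with JUnit 5. The tag names are invented; the runner
// selects subsets at run time via a tag expression.
class SearchTests {

    @Test
    @Tag("frontend")
    @Tag("edge") // supported in the Edge browser at full size
    void searchFromHeaderBar() { /* drive the UI here */ }

    @Test
    @Tag("backend")
    @Tag("search-api") // hits the search API directly
    void searchApiReturnsRankedResults() { /* call the API here */ }
}
```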

If the tool has various levels of support by platform and the team supports multiple platforms, then the tool will need to track and run the subset that is supported. The ability to tag tests by feature and rerun all the tests for a feature quickly, perhaps at the desktop right before a check-in, can also be a powerful way to reduce risk while speeding up the delivery process.

Then again, the software might be internal, only support one platform, and be coupled in a way that makes tagging not very valuable.

Putting it all together

The triple play for test tooling might just be framework, automation tool, and tracking method. Sometimes the framework and the tool are melded together, and there is really only one choice. Other times the framework allows the user to plug in multiple tools.

The answer might be as simple as a single choice for all (JUnit) or a combination of tools designed to address different levels of risk (unit, integration, end-to-end), different platforms, and different skill levels.

So take a look at the actual problems the team is trying to solve. Then find a tool that addresses those risks, works for the skill set the team has, and integrates with the work process and technology stack—and give it a try. If you can, try a couple of tools and avoid lock-in as long as possible.

After a few months, the tool will be enmeshed into your work process, so you'd better be sure that you're with the one you love, to paraphrase rocker Stephen Stills, because you'll have to love the one you're with.
