Is record and playback test automation harmful?

Test automation can be all you dreamed of, but it can also easily turn into a waste of time, energy, and money. Deciding what gets automated and choosing the right tools is crucial.

Many managers are tempted by the allure of record and playback testing, but don't make these tools the focus of your test automation efforts unless you have an uncommonly static user interface and codebase. The tests these tools produce are often outdated in a few weeks, depending on how frequently you update your code. Teams try to make these tools useful by either re-recording often or recording and editing. Both methods have major problems.

Here are the pros and cons of record and playback-style testing.

Record and playback: What is it?

Many companies start out with a plan to train their existing QA team to execute and develop automated test cases. The organization might decide it wants a tool with a record and playback option. These tools let the tester hit record and manually go through the real-user actions of a pre-scripted test case. When the tester is done, the tool will have created a script that can automatically replay those exact same actions.
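To make the idea concrete, here is a rough sketch of the kind of output a record and playback tool produces: a flat list of low-level actions keyed to auto-generated locators. The page, credentials, and locators below are hypothetical, and a small stub class stands in for a real browser driver so the shape of the script is visible without a browser:

```python
# Stub standing in for a real browser driver, so the shape of a
# recorded script can be shown without launching a browser.
class StubDriver:
    def __init__(self):
        self.log = []  # records each replayed step

    def get(self, url):
        self.log.append(("get", url))

    def type(self, locator, text):
        self.log.append(("type", locator, text))

    def click(self, locator):
        self.log.append(("click", locator))


driver = StubDriver()

# Typical recorded output: absolute XPath locators tied to the exact
# page structure at recording time. Any layout change breaks them.
driver.get("https://example.test/login")
driver.type("/html/body/div[2]/form/input[1]", "qa_user")
driver.type("/html/body/div[2]/form/input[2]", "secret")
driver.click("/html/body/div[2]/form/button[1]")

print(len(driver.log))  # 4 recorded steps
```

Note that nothing in the recorded steps says *what* is being tested; the intent (log in as a valid user) exists only in the tester's head, which is part of why these files are hard to maintain.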

That might sound like a huge time-saver, but even testers experienced with these tools have serious issues with them.

The problems with record and playback

The issue is that application code, especially UI code, can change frequently. In that situation, record and playback tests break often, negating any time savings over just going with manual testing.

Unless your developers make changes only rarely, using record and playback will leave you with a pile of fragile test automation scripts. These tests also stress your network resources as they grow because record and playback tools often duplicate lines of code, images, and objects each time they execute.

When this duplication generates a lot of extra code, it also makes it harder to debug failing scripts. In other words, the tool records more than simply the steps you take. It’s difficult to understand what part of the code belongs to the test steps and what is the extra data the tool collects.

So what are they good for?

“Record and playback tools are good for getting one’s feet wet in test automation, but the files created are extremely large, they execute slowly, and get slower over time, if they work at all,” says Hector Diaz de Leon, a test automation expert and development engineer. It's not something you should be using for tests you expect to last for the long haul.

Record and playback as a teaching tool

Despite their fragility in certain contexts, record and playback tests are a convenient way to learn test automation, and an excellent resource for teaching test automation development. But you should only use them for two purposes:

  • Education and training
  • Extremely simple web-based application functionality, where the code base never changes

Record and playback tools are useful in training because they provide automated test exposure to traditionally manual QA test teams. Don't know what code to write for a particular action? These tools can tell you, and you don't have to memorize every method right away. But record and playback should not be your only source of truth when teaching automated test code. Always teach the language through its documentation as well, because the code some tools generate follows bad practices in certain contexts.

Using record and playback, QA testers can also experiment with various application features to build tests and view the underlying code built by the tool. Having the team complete simple record and playback tests provides information about the application's programming language and how it can be manipulated for testing purposes. Testers also generate real code examples that can be used later on as automated test development moves toward scripting, rather than recording.
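One common destination for that move toward scripting (a widely used pattern, not something prescribed here) is to centralize locators in a page object, so that a renamed element ID is fixed in one place rather than in every recorded file. A minimal sketch, again using a stub in place of a real browser driver and hypothetical CSS selectors:

```python
# Stub standing in for a real browser driver.
class StubDriver:
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))


class LoginPage:
    """Page object: every locator lives here, once. If developers
    rename an element ID, only this class changes, not the tests."""

    USERNAME = "#username"      # hypothetical CSS selectors
    PASSWORD = "#password"
    SUBMIT = "#login-button"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


driver = StubDriver()
LoginPage(driver).log_in("qa_user", "secret")
print(driver.actions[-1])  # ('click', '#login-button')
```

Contrast this with a recorded file: the test's intent is named (`log_in`), and a UI change means editing one short class instead of re-recording or hunting through generated output.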

But you have to be careful. Just because you might start to understand the tests and make edits to them doesn't mean they aren't still fragile.

The record, then edit approach: Don’t count on it

It's tempting to believe you can use the record and playback function to create tests and then edit the script, but that approach is complex and overly time-consuming. Editing record and playback scripts tends to create frustration among team members. Scripting from test module example code is far simpler and more effective than attempting to edit recorded test files.

Record and playback scripts can be maintained, but it's time-consuming. For example, having QA manually test an application feature from beginning to end is often faster than analyzing where the script failed and repairing the lines of code. It's difficult to locate and fix script failures in these types of files because they append on each execution and collect other data not relevant to the test.

Your test team will spin its wheels and end up frustrated. When script editing is either too difficult or too time-consuming, testers will just execute the automated test and perform manual workarounds to get past known script errors. Testers should not spend time staring at automated tests while they execute so they can jump in at the appropriate time and intervene. It's a waste of business time and resources.

Automated testing should provide additional testing coverage and be able to execute repeatedly without manual intervention. Having to manually intervene defeats the purpose of automated testing. The tester should be focused on feature or story testing, while automated tests check the existing code base. Testers can’t do their jobs if they have to babysit automated test executions.

The “just re-record it” approach

Another pitfall teams fall into is trying to just re-record an automated script each time it needs maintenance. I'd recommend this only if your QA team members want to spend their entire work lives re-recording scripts.

Often the change is as simple as an object ID changing or an element being relocated. Re-recording an entire script for these minor changes is not practical. If small changes like these keep breaking the tests, then you probably need to drop some of your automated testing efforts for the UI altogether.

Some tools actually encourage re-recording for simplicity's sake. For example, image capture automation tools suggest you simply re-record the script, rather than attempt to edit the file. What this tells me is that the file is difficult to debug or edit in order to maintain automated tests. That's not a selling point for a tool.

If your applications are simple and small, there might be an argument for the re-record approach. I’ve never worked with any of those, but perhaps they exist. If it were easy to re-record scripts and make them run reliably, automated testing would be far more widespread than it is.

Training over tools

In order to develop a sound test automation program, consider investing in hands-on, at-work training that is taught using the scripting language(s) of your team's tools. Train your existing QA team to script the automated tests directly. Provide them with a willing and able developer support resource. It’s a no-brainer business investment: improve the skills of your employees and their ability to ensure that test automation suites are maintainable and useful, and truly improve application testing coverage and efficiency.

There are some free resources to get your team started over at TechBeacon Learn's continuous testing track. Learn which tests to automate and the components for building a test automation framework.

Image source: Flickr
