

5 ways to simplify your automated test cases

Paul Merrill CEO, Test Automation Consultant, Beaufort Fairmont

Maintaining test automation can take a lot of time, and so can making sense of its reports. Fortunately, you can speed up both considerably.

A big part of my consulting practice is helping clients with test automation. And with client after client, I see testers, test automation engineers, and developers creating test cases that are long, difficult to work with, and lacking a clear purpose. If their test cases were more streamlined and focused, the teams that use them would save a lot of time.

Here are five pointers for improving test cases, garnered from my years of working with clients who are implementing test automation.

1. Decrease your scope

Testers tend to be holistic. We like completeness. We think of use cases broadly, from one end of the system to the other. We want to know all the breadth and depth of our systems under test. That’s a great thing … sometimes.

The scope of a test case should depend on the intent of the test case. In exploratory testing (a term coined by Cem Kaner in 1984, and a concept expanded by Elisabeth Hendrickson in her book Explore It!), you define a charter for a session of hands-on testing. That charter may be limited to a feature or set of features you want to learn about in the system under test. Because the testing is exploratory, you’d do several types of experiments with the features: long sequences of experiments, different sequences of actions, and permutations of actions, all for the purpose of exploring the application and finding issues. 

Test automation, however, does not explore. A major reason to create test automation is to provide a mechanism that alerts you when the system under test (SUT) is doing something different from what you think it should. Long, meandering test cases that were recorded or scripted while a tester was in the exploratory mindset may inform your test automation, but they should not dictate your scripts.

So determine what you want test automation for, and then narrow your test’s scope to that part of the feature.

For instance, let’s say you have a test script that is supposed to tell you whether a change of password in the SUT’s user profile worked. This script logs in, goes to the profile section of the site, verifies that the profile image is correct, creates a password, changes that password, tries to change it again, logs out and logs back in, changes the password a third time, tries some passwords that shouldn’t be accepted, and sees if the email address is correct.

That is a busy script! It's a great sequence of events for exploring functionality, but it goes well beyond the scope of the test we should be writing. Verifying that the change of password works doesn't require looking at the profile picture or checking the email address; that's all noise. You might want to automate those checks as well, but it's better to split them into separate test cases.

You could, for example, write positive and negative test cases: one to change the password and verify the change, and one to verify that incorrect passwords are rejected. Some of the test cases could be data-driven to avoid duplicate code. The important thing is that each test is specific to its purpose, has limited scope, and has less code to execute and maintain over time.
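Here's a minimal sketch of that split in pytest, assuming a hypothetical AppClient page-object layer. AppClient, its methods, and the credentials are all illustrative, not from a real framework:

```python
import pytest

# "AppClient" is a stand-in for whatever driver or page-object layer
# your framework provides; it is illustrative, not a real library.
from myapp.client import AppClient


def test_change_password_succeeds():
    """Positive case: a valid new password is accepted and usable."""
    app = AppClient()
    app.login("user@example.com", "OldPass!1")
    app.change_password(old="OldPass!1", new="NewPass!2")
    app.logout()
    # The single verification: logging in with the new password works.
    assert app.login("user@example.com", "NewPass!2").succeeded


@pytest.mark.parametrize("bad_password", ["", "short", "no-digits!"])
def test_change_password_rejects_invalid(bad_password):
    """Negative case, data-driven to avoid duplicate code."""
    app = AppClient()
    app.login("user@example.com", "OldPass!1")
    result = app.change_password(old="OldPass!1", new=bad_password)
    assert not result.succeeded
```

Each test now fails for a reason you can read straight from its name, and the parametrized negative test covers several rejected passwords without duplicating code.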

When you’re thinking about how to limit scope, imagine the people reading your tests later. Will they be able to easily understand why the test case exists? If they can’t understand the intent behind the test, they can’t maintain that intent.

2. Fail for one, and only one, reason

I believe most test cases should fail for one and only one reason. If your “Valid user logs in” test case only verifies that a valid user has logged in, then you can quickly start working through a problem flagged in the test automation report. If, on the other hand, the “Valid user logs in” test case could also fail because it verifies the page title, the copyright on the bottom of the page, and the company logo in the header, you have a lot more troubleshooting to do. Which verification point failed? Why did it fail? Did more than one fail?

In general, keep test cases to one verification point or tightly grouped verification points that all work together to tell you whether a feature works as expected.
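In code, that guideline might look something like this pytest sketch (the app fixture and the page attributes are hypothetical placeholders):

```python
def test_valid_user_logs_in(app):
    """Fails for one reason only: a valid login did not succeed."""
    result = app.login("user@example.com", "CorrectPass!1")
    assert result.succeeded


def test_home_page_branding(app):
    """Title and logo checks live in their own test, so a branding
    change can never be mistaken for a login failure."""
    app.login("user@example.com", "CorrectPass!1")
    home = app.current_page()
    # Tightly grouped verification points that together answer one
    # question: is the branding intact?
    assert home.title == "Acme Home"
    assert home.logo_is_visible
```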

Similarly, don’t build verification points into the navigation utilities in your framework. You don’t want a test to fail because a utility it relies on verifies navigation when the test isn’t checking navigation. Those are runtime failures, not indications of whether the test succeeded in exercising its intended functionality.
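One way to keep that separation, sketched with a Selenium-style page object (the class and URL are illustrative): the utility reports navigation problems by raising a runtime error rather than embedding an assert that would read as a test verdict.

```python
class ProfileNavigator:
    """Framework utility: drives the browser to the profile page.
    It deliberately contains no test verification points."""

    def __init__(self, driver):
        self.driver = driver

    def open_profile(self):
        self.driver.get("https://example.test/profile")  # illustrative URL
        if "profile" not in self.driver.current_url:
            # A navigation problem is a runtime error, not a test verdict:
            # raising here says "the test could not run," which is
            # different from "the feature under test is broken."
            raise RuntimeError("could not reach the profile page")
```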

3. Identify responsibility (and hold to it)

Similar to limiting scope, asking, “What is the responsibility of this test?” can be very helpful. Uses of “and” and “or” may indicate that the test case has more than one responsibility. If you can’t state the responsibility of the test case easily in one sentence, your purpose in writing the test case may not be clear. Just remember: As with writing code or formalizing and communicating a concept, it is much more difficult to write a clear, concise test case than it is to write a long, meandering one.
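The one-sentence check is easy to apply mechanically: if a test's name needs an "and," it probably has two responsibilities. A sketch, with illustrative names:

```python
# Smell: the name needs an "and," so the test has two responsibilities.
#   def test_user_changes_password_and_updates_email(): ...

def test_user_changes_password():
    """Responsibility: a user can change their own password."""
    ...


def test_user_updates_email_address():
    """Responsibility: a user can update their email address."""
    ...
```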

4. Ask, “What is the simplest thing that could possibly work?”

Ward Cunningham and Kent Beck were talking about making progress when programming, but I like to apply this quote of theirs to test automation: “Given what we know right now, what’s the simplest thing that could possibly work?”

Ask yourself: Are you verifying things the simplest way you could? Are you making the test case more complicated than it has to be? Is there an easier way to get the data you need? Is there a simpler way to navigate to the section of the app you need to get to? Can you do the same operation with fewer steps while still making the test clear?
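For example, if a test only verifies that canceling an order works, creating that order through an API call is usually simpler than clicking through the whole checkout flow. A rough sketch, assuming a hypothetical orders endpoint and hypothetical app and api_session fixtures:

```python
import requests

API = "https://example.test/api"  # illustrative endpoint


def create_order(session: requests.Session) -> str:
    """Set up test data with one API call instead of driving the UI
    through an entire checkout flow the test doesn't care about."""
    resp = session.post(f"{API}/orders", json={"sku": "WIDGET-1", "qty": 1})
    resp.raise_for_status()
    return resp.json()["id"]


def test_cancel_order(app, api_session):
    order_id = create_order(api_session)  # simplest setup that works
    app.cancel_order(order_id)
    assert app.order_status(order_id) == "cancelled"
```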

5. Avoid unnecessary dependencies

Avoiding dependencies between test cases is hardly unusual advice, but it remains one of the best ways to simplify automated tests. The problem is that dependencies are easy to overlook, so make a conscious effort to find them. If you have test cases that can run in only one order, or that can’t run in parallel, find out why. If you depend on actions that aren’t relevant to your tests, find out why. If you can avoid a dependency, do.
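One way to make independence a habit is to give every test its own fresh state, so tests can run alone, in any order, or in parallel. A sketch using a pytest fixture, where create_user, delete_user, and the app fixture are assumed placeholders:

```python
import uuid

import pytest


@pytest.fixture
def fresh_user(app):
    """Give every test its own user so no test depends on data left
    behind by another test, or on execution order."""
    name = f"user-{uuid.uuid4().hex[:8]}"  # unique per test
    user = app.create_user(name, password="TempPass!1")
    yield user
    app.delete_user(user)  # clean up; nothing leaks between tests


def test_change_password_runs_standalone(fresh_user, app):
    app.login(fresh_user.name, "TempPass!1")
    app.change_password(old="TempPass!1", new="NewPass!2")
    assert app.login(fresh_user.name, "NewPass!2").succeeded
```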

Keep it simple, stupid

I see many teams struggling with the maintenance and upkeep of test automation. One of the most common problems for them is the way they are designing their automated test cases. If you’re in one of those situations, look at your test cases and consider these actions: decrease scope; fail for one reason; identify responsibility; ask, “What’s simple?”; and avoid dependencies.

These simple steps have helped me over the years. They can help your team too. 
