

How to get the most from your test automation suite

Bas Dijkstra, test automation speaker and writer
[Image: VW Beetle assembly line]

For many software-producing organizations, test automation is—or is quickly becoming—a cornerstone of their software development processes. Development teams are increasingly relying on automated checks to ensure that software that needs to be delivered in ever shorter iterations conforms to a quality level that meets or exceeds business and customer expectations.

As they get more heavily involved in test automation, these organizations are starting to realize that creating and executing automated checks is an investment that should be treated as a first-class software development project, not something they just do on the side.

However, a lot of organizations do not get the maximum possible return on their investment in test automation. To get the most from your test automation suite, you must understand where these efforts commonly fall short. Here, then, are some of the most important causes of diminished returns on your test automation investment.

All you have is a hammer...

Any test automation engineer worth their paycheck should try to squeeze as much use as possible out of the toolset they are using to create automated checks. However, there is a danger of going over the edge while doing so. Trying to force a tool to do things it is unfit for, just because the test you want to execute requires it, can lead to suboptimal results, to put it mildly. Here are some examples of this phenomenon that I have seen on multiple occasions:

Trying to handle dialogs using Selenium WebDriver 

A prime example of this is file upload and download dialogs. You can create a method that checks the browser and operating system on which your test is running and handles such a dialog accordingly, but it will likely frustrate you to no end. The WebDriver API itself does not provide methods to handle these dialogs properly, meaning you must rely either on simulating keystrokes or on including another library that does handle them (AutoIt, for example).

Instead, consider bypassing these dialogs altogether and using a library operating at the HTTP level (such as the Apache HttpClient library for Java) to send or receive the file. That results in a stable, browser- and operating system-independent solution with far fewer lines of code. Besides, it's unlikely that the workings of these dialogs are what you actually need to verify anyway.
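As a rough sketch of that approach: the snippet below uses Apache HttpClient (assuming version 4.x with the httpmime module on the classpath) to send a file as a multipart POST, bypassing the browser's upload dialog entirely. The endpoint URL and form field name are hypothetical placeholders; substitute whatever your application's upload endpoint expects.

```java
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

import java.io.File;

public class FileUploadCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical upload endpoint; replace with the URL your application posts to
        String uploadUrl = "https://example.com/api/upload";

        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost(uploadUrl);

            // Build the same multipart request the browser's upload form would send
            post.setEntity(MultipartEntityBuilder.create()
                    .addBinaryBody("file", new File("testdata/report.pdf"))
                    .build());

            HttpResponse response = client.execute(post);

            // Check the HTTP status instead of driving an OS-level dialog
            int status = response.getStatusLine().getStatusCode();
            if (status != 200) {
                throw new AssertionError("Upload failed with HTTP status " + status);
            }
        }
    }
}
```

Because the check never leaves the HTTP level, it behaves identically regardless of which browser or operating system the rest of your suite runs on.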

Trying to perform checks beyond your tool's scope

I've seen the question of how to automate checks at the API level using Selenium come up numerous times. But Selenium is a browser automation tool that does not operate at the API/web service level, so it makes no sense to try to force it to do so, right?
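If what you actually need is an API-level check, use a tool built for that job instead. The article doesn't prescribe one, but as an illustration, here is a minimal sketch using REST Assured, a common Java library for API testing; the endpoint and response field are hypothetical.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class OrderApiCheck {

    public static void main(String[] args) {
        // Hypothetical API endpoint; no browser involved anywhere
        given()
            .baseUri("https://example.com")
        .when()
            .get("/api/orders/42")
        .then()
            .statusCode(200)
            .body("status", equalTo("SHIPPED"));
    }
}
```

A few lines like these replace what would otherwise be a brittle detour through the browser.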

There is no one-stop solution for this phenomenon, I'm afraid. It does pay, however, to take a step back from what you're doing when trying to automate a check that's giving you headaches. Asking yourself, "Are this tool and the way I'm using it fit for the purpose of my test?" on a regular basis will help you stay focused and ensure that your approach is as efficient as possible. Even better, try asking a colleague for a code review to gain fresh insights and feedback.

Your engineers aren't aware of a tool's features

While some engineers try to do too much with their tools, the opposite occurs as well. Not getting to know a tool and all of its features can lead to a less efficient implementation and a lower return on investment. I have been guilty of this as much as the next person, and I'm sure I'll fall into this trap again in the future, if only because it's hard to know what you don't know!

For example, while creating simulations for a dependency in a test environment that required persistent data storage, I wrote several snippets of Java code that stored data in and retrieved it from a MySQL database. I didn't know that the simulation tool I used offered a plugin that handled this for me, eliminating the need to write code. Because I didn't know that this plugin existed, I created a larger, less efficient simulation code base than was needed to complete the task.

To avoid this mistake, I recommend getting to know your tools as thoroughly as possible. Get training, read blogs, practice, and have your work reviewed on a regular basis. There are always new things to learn, even if you've been working with a tool for years.

You don't run automated tests often enough

Your automated tests won't start to pay back the investment you made in creating them until you run them frequently. So, instead of running your tests once every few weeks, such as at the end of a sprint to check that no regression has occurred, why not run them every night? Or every hour? Or on every commit from your developers? Running automated tests more frequently has two benefits:

  • You get feedback on overall application quality more often, making it easier and cheaper to detect, analyze and fix possible defects.
  • You receive confirmation that your tests are running in a smooth and stable manner, or perhaps that they need some work to make them do so.

Of course, there is an upper limit to how often you can run a test suite. Longer end-to-end tests, which can take minutes or even hours to run, are ones you might not want to run on every developer commit, simply because doing so would slow down the development feedback loop unnecessarily.
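One pragmatic way to handle this (not prescribed here, just a common setup) is to tag tests by speed and let the build server decide which subset to run when: fast checks on every commit, the slower end-to-end suite on a nightly schedule. With JUnit 5 that can look like the sketch below, with hypothetical test names; a Maven build could then select the fast subset with `mvn test -Dgroups=fast`.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class CheckoutTests {

    @Test
    @Tag("fast")
    void priceCalculationIsCorrect() {
        // quick, unit- or API-level check: cheap enough to run on every commit
    }

    @Test
    @Tag("slow")
    void fullEndToEndCheckoutFlow() {
        // minutes-long browser test: better suited to a nightly or hourly run
    }
}
```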

You didn't create automated tests with reliability and maintainability in mind

As with your production code, you should always keep reliability and maintainability in mind when creating automated tests. Otherwise, the time needed to:

  • update your tests to reflect the current state of your application, and
  • analyze failing tests, only to find out that it was the test itself that was out of date or otherwise flawed

will seriously eat into the time gained by running the tests automatically in the first place. It might even make your automation a maintenance burden rather than the development accelerator it should be.
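What that looks like in practice depends on your stack, but for a Selenium-based suite two widely used techniques are page objects (so a changed selector means one fix, not fifty) and explicit waits (so tests don't fail just because a page rendered slowly). The sketch below assumes Selenium 4 and uses hypothetical element IDs.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

// Page object: all selectors for the login page live in one place,
// so a UI change means updating one class instead of every test
public class LoginPage {

    private final WebDriver driver;
    private final WebDriverWait wait;

    // Hypothetical element IDs; use whatever locators your application exposes
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By loginButton = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
        this.wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    public void loginAs(String user, String pass) {
        // Explicit wait instead of a fixed sleep keeps the check reliable on slow pages
        wait.until(ExpectedConditions.visibilityOfElementLocated(username)).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(loginButton).click();
    }
}
```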

Learn the craft

The common thread running through these causes of diminished returns is that test automation is a craft, and implementing it requires specific expertise, despite claims by some automation tool vendors that with their products, everybody can create, maintain and execute automated tests.

Only skilled professionals who take test automation seriously, and who adopt and implement best practices, will see their return on test automation investment maximized.

Image credit: Flickr
