3 lessons from the test automation school of hard knocks

Angie Jones, Senior Director of Developer Relations, Applitools

The odds of a test automation project succeeding are not favorable. There are many reasons why projects fail, and as much as experienced leaders preach the dos and don'ts of test automation, there is no better teacher than experience.

I've led many successful automation projects, but I'm not going to talk about that. My successes were built upon the hard lessons I learned from failure. Here are a few doozies I learned the hard way.

Don't reinvent the wheel

I'd been doing user interface (UI) automation for several years when I landed my first web services project. I needed to provide a framework that would allow for REST calls, and for some silly reason, it never dawned on me that frameworks for this might already exist.

So I spent a considerable amount of time building my own framework in Java. I created classes to model HTTP requests and responses, wrote my own methods to parse and traverse JSON and XML, created enums for all of the different header types, provided utility methods so that writing the tests was clean and simple, handled connectivity and authentication—the whole nine yards.

I considered it a thing of beauty—until more complicated parsing and payloads became challenging for the automation engineers who had to use this framework. I frequently needed to enhance the design, but I chalked that up to being the nature of the beast. That is, until I interviewed for my next position and bragged about the framework I'd built from scratch. The interviewer frowned. "Why on earth would you do all of that when there are open-source frameworks readily available that do this very thing?" he asked. Gasp.

Ever since that day, I make sure to conduct extensive research for existing products before creating any complicated system from scratch. Fortunately, there are several open-source frameworks that can meet the needs of any given automation project. The common problems have already been solved, and many of them work pretty much right out of the box. This will save you lots of time and energy and allow you to focus on the problems that are specific to your business, rather than to automation in general.

Some of my favorite test automation tools are open-source staples such as Selenium WebDriver for web UIs, Appium for mobile apps, and REST Assured for REST web services.
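To make the contrast with my hand-built framework concrete, here's a minimal sketch using REST Assured with JUnit 5 (the endpoint, credentials, and JSON shape are invented purely for illustration):

```java
// A minimal sketch, assuming REST Assured and JUnit 5 on the classpath.
// The endpoint, credentials, and JSON shape are invented for illustration;
// the point is that requests, headers, auth, and response parsing all
// come out of the box instead of being hand-rolled.
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class OrderApiTest {

    @Test
    void shippedOrderIsReported() {
        given()
            .header("Accept", "application/json")     // header handling: built in
            .auth().basic("demo-user", "demo-pass")   // authentication: built in
        .when()
            .get("https://api.example.com/orders/42") // connectivity: built in
        .then()
            .statusCode(200)                          // response modeling: built in
            .body("status", equalTo("SHIPPED"));      // JSON traversal: built in
    }
}
```

Everything I spent weeks building (request modeling, headers, authentication, JSON traversal) amounts to a few chained calls here.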

You can also find complete automation frameworks that are open source and include the basic setup you need for your automation project.

Your reuse of automation tools and frameworks need not be limited to open-source projects, however. I find that many internal teams within a company are solving similar problems yet have no awareness of what the others are building. Keep the lines of communication open among your automation engineers across teams, and encourage them to share the tools they're using where possible.

Don't automate everything

On one of my assignments, we had a three-week regression test period that my team wanted to make as short as possible. Our natural instincts led us to automate any tests that could be automated. If the vast majority of the tests were automated, we thought, we could shorten the regression cycle to just a couple of days. But while this may have shortened the regression cycle, it opened up a whole new can of worms.

What's your end goal?

We automated thousands of tests, only to find that it took several hours for all of them to run. While automation definitely shortened our regression test period, it posed a new challenge to achieving our ultimate goal, which was to eventually include these tests in our continuous integration (CI) process.

The purpose of having automated tests within CI is to provide fast feedback upon production code check-ins, and a multi-hour automation run is definitely not fast feedback. We were able to mitigate some of this by distributing the tests across multiple machines and running them in parallel. However, the feedback loop still wasn't as tight as we wanted. 
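Parallelization helped, but it was never going to solve the problem on its own. As a hypothetical illustration of the same idea within a single JVM (this was not necessarily our stack, and the class and test names are invented), JUnit 5 lets you mark independent tests to run concurrently:

```java
// A hypothetical JUnit 5 sketch: running independent tests in parallel
// within one JVM -- the in-process cousin of distributing a suite across
// machines. Assumes junit.jupiter.execution.parallel.enabled=true is set
// in junit-platform.properties.
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

@Execution(ExecutionMode.CONCURRENT) // run this class's tests concurrently
class CheckoutRegressionTest {

    @Test
    void guestCheckoutSucceeds() {
        // ...exercise the app and assert; shares no state with other tests
    }

    @Test
    void savedCardCheckoutSucceeds() {
        // ...independent of the test above, so it is safe to parallelize
    }
}
```

Parallelism has a floor, though: shared state, ordering assumptions, and machine capacity all limit how far it can tighten the feedback loop.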

That's when we realized that less is more. We didn't need all of these tests to reach our end goal, so we reviewed the thousands of tests we'd created and chose which would be part of CI and which would run less frequently.
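The split itself can be expressed directly in the test code. As a hedged sketch, assuming a JUnit 5 suite (the mechanism we actually used isn't named above, and these tags and tests are hypothetical), tags mark which tier each test belongs to:

```java
// A hypothetical tiering scheme using JUnit 5 tags. "ci" marks the fast,
// high-value checks that run on every check-in; "nightly" marks broader
// coverage that runs on a schedule instead.
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class OrderHistoryTest {

    @Test
    @Tag("ci")
    void recentOrdersAreListed() {
        // fast, high-signal: part of the CI gate
    }

    @Test
    @Tag("nightly")
    void ordersPaginateAcrossTenYearsOfHistory() {
        // slow, exhaustive: still valuable, just not on every commit
    }
}
```

With Maven Surefire, `mvn test -Dgroups=ci` then runs only the fast tier on every check-in, while a scheduled job can drop the filter and run everything.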

Too much noise

When a new code check-in was bad, we noticed that several tests failed because of it. It wasn't uncommon for hundreds of tests to fail on a single check-in, which meant hundreds of failures to investigate manually to see whether they pointed to different errors. More often than not, it was a single error manifesting itself in hundreds of tests.

This was way too much noise. Why would we need hundreds of tests that all told us the exact same thing? We had automated too much, and we paid the price.


Maintenance

Here's another thing to think about: Every test you automate is one more that you must maintain. Maintenance was not something that we initially foresaw, and even today, many teams don't understand the tradeoff. Automation code is living, breathing code that tests living, breathing code.

What's more, for each change to the application, the associated automated tests also must be adjusted. In my case, our automation engineers ended up spending more time maintaining our tests, leaving less time to write new ones. What we saved in regression testing time, we spent in automation maintenance time.

Make decisions based on reality, not optimism

I once consulted with a team that was just starting its test automation efforts and had gotten itself into a tough spot. The team had a budget to hire several automation engineers. But before hiring anyone, it had one of its developers build an automation framework as a proof of concept and used the requirements from it to create the job description. Naturally, the developer created the proof of concept in the same programming language used to build the application. The management team thought this was a great idea, because then the developers could also contribute to the automation framework.

In theory, that sounds wonderful. But this was a team whose developers barely wanted to write unit tests. Yes, there were deeper cultural issues at play, but the team certainly shouldn't have based its automation design decisions on the hope that the developers would contribute.

The team had a difficult time finding automation engineers who worked in the language the application was built in, so everyone it hired ended up being new to the language.

Management's thinking was, "Oh, they can adjust easily enough." But if the automation engineers are the ones working with the code base day in and day out, why shouldn't it be in a language they already know? And if, by chance, the professional developers did pitch in (they never did), one would think it would be easier for them to adjust to coding in a new language than it would be for the testers.

Needless to say, the automation engineers had a tough time getting the project off the ground. They were learning a new product and a new language at the same time, with little to no help from development. All of this made turning the project around a much harder task for me.

What has experience taught you?

These are just a few of the hard lessons I've learned throughout my journey in test automation. What experiences have you had that led you to your own list of dos and don'ts in this area? I look forward to hearing your stories and comparing notes.
