Why your test automation is ignored—and 5 steps to stand out

Bas Dijkstra, test automation speaker and writer

Comedian Steve Martin, when asked if he had any advice for novice comedians, said, "Be so good they can't ignore you." Later, this quote became the title of a Cal Newport book on finding and doing your best work.

Now, even though Martin's and Newport's advice was aimed at people, in a sense it applies to test automation, too.

Before I go deeper into why I think this is the case, though, here's how one too many of the test automation projects I've witnessed or taken part in during my career have evolved, phase by phase.

1. Enthusiasm

Nobody starts a new test automation project with the intention of it becoming a failure. And often, in the initial stages of the project, the future indeed looks bright. A first set of tests is created and (hopefully!) integrated into a build-and-delivery pipeline.

The tests give the information that the team hoped for, and everybody is happy—we're succeeding at this test automation thing!


2. Enter the invaders

Building on the initial success, the team starts adding more and more tests to increase coverage for greater confidence with every build. At some point in time, one or more tests start failing every now and then, without any apparent root cause in the application under test. (This phenomenon is also referred to as a "false positive.")

Because the team is too busy building new tests and delivering value to the customer, these outliers are marked "ignore." A phrase increasingly heard around the office is, "Oh yes, we know that test fails from time to time. We just haven't had the time to dive into it." 

3. Decay and dismay

Slowly but surely, more tests begin to fail, first intermittently, then more or less permanently. Still, the team is too busy building new tests and performing other important duties to address this test automation rot.

Oh, and the fact that the principal engineer responsible for designing the initial test automation solution left to help out with another project doesn't help, either. The value the test automation suite brings to the development and delivery process dwindles fast. More tests, even entire test suites, are moved to the "ignore" pile.

4. Abandonment

Once the test automation rot has spread wide enough, chances are that the team will jump ship and abandon its test automation efforts altogether. What's left is a pile of test automation code (or other test automation artifacts) that no longer provides any value, despite the time and effort that went into creating and initially maintaining it.

5. History repeating (optional)

At some later point in time, the team (or a new manager in town) decides once more that they can no longer do without test automation. So somebody picks up a new tool, creates a first set of tests and (hopefully!) integrates them into a build-and-delivery pipeline, and—you guessed it—the story starts again from phase 1.

The above might sound like a worst-case scenario, but I've seen variations on this script play out one too many times with clients.

How do you handle these situations? What can you do to make a change for the better when you notice that test automation results are starting to be ignored? Even better, what can you do to prevent test automation rot in the first place?


How to turn the tide

One strategy I've applied successfully several times when dealing with a large number of untrustworthy tests of low value is to follow these steps:

1. Filter out unreliable tests

Eliminate all tests that do not reliably give the information you expect them to give (the false positives). That will leave you with a smaller test suite you can rely on, one that forms the base from which to move forward.
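As a minimal sketch of what this filtering can look like, assuming a pytest-based suite (the marker name and test names below are made up for illustration), you could tag the unreliable tests with a custom "quarantine" marker and exclude them from the trusted run:

```python
# conftest.py -- register the custom marker so pytest knows about it
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "quarantine: unreliable test, excluded from the trusted suite"
    )
```

```python
# test_checkout.py -- hypothetical tests illustrating the split
import pytest

@pytest.mark.quarantine  # fails intermittently; not part of the trusted suite
def test_search_returns_results():
    ...

def test_login_succeeds():
    # reliable; stays in the trusted suite
    ...
```

The trusted suite is then whatever runs with `pytest -m "not quarantine"`, and the quarantined tests stay in the codebase as a visible to-do list rather than being silently deleted.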

2. Make your reliable tests fully functional and visible

Make sure all of the reliable tests are fully functional and highly visible. Incorporate them into your build and deployment pipeline. Automatically send out test results to stakeholders.

Make test results visible on a screen in your team room, and, when possible, outside of that room as well. Get these tests noticed. Make people aware of their existence and importance, and make sure that all stakeholders start trusting the running tests again.
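One way to automate the "send out results" part is a small script at the end of the CI job that reads the test report and posts a one-line summary to the team's chat channel. The sketch below assumes the trusted suite was run with `pytest -m "not quarantine" --junitxml=results.xml` and that a generic incoming-webhook URL is available in a TEAM_WEBHOOK_URL environment variable; both are assumptions about your setup, not features of any particular CI product.

```python
"""Post a short summary of the trusted suite's results to a team chat channel."""
import json
import os
import urllib.request
import xml.etree.ElementTree as ET

def summarize(junit_xml_path: str) -> str:
    # JUnit XML may be a single <testsuite> or a <testsuites> wrapper; handle both
    root = ET.parse(junit_xml_path).getroot()
    tests = failures = errors = skipped = 0
    for suite in root.iter("testsuite"):
        tests += int(suite.get("tests", 0))
        failures += int(suite.get("failures", 0))
        errors += int(suite.get("errors", 0))
        skipped += int(suite.get("skipped", 0))
    passed = tests - failures - errors - skipped
    return (
        f"Trusted suite: {passed}/{tests} passed, "
        f"{failures} failed, {errors} errors, {skipped} skipped"
    )

def post_to_chat(message: str) -> None:
    # Generic JSON webhook post; TEAM_WEBHOOK_URL is assumed to be configured in CI
    payload = json.dumps({"text": message}).encode("utf-8")
    request = urllib.request.Request(
        os.environ["TEAM_WEBHOOK_URL"],
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

if __name__ == "__main__":
    post_to_chat(summarize("results.xml"))
```

The same summary string can just as easily be pushed to the screen in the team room.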

3. Identify low-effort tests that can be trusted

Pick some low-hanging fruit from your pile of ignored tests: tests that can be fixed with relatively little effort. Fix them well enough that they can be trusted once again.

If you don't have time to do this, don't be afraid to moonlight a little to get some of them up and running again. The goal is to rebuild trust in the test automation process and change the vibe back to a positive one.
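A common category of such low-hanging fruit is the UI test that relies on a fixed sleep and fails whenever the page is a little slow; replacing the sleep with an explicit wait is often all it takes. A minimal sketch, assuming Selenium WebDriver for Python (the URL, locator, and test name are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_order_confirmation_is_shown():
    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.com/checkout")

        # Before (flaky): time.sleep(2) and hope the banner has rendered by then.
        # After: wait explicitly for the element, up to a 10-second timeout.
        banner = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "confirmation-banner"))
        )
        assert "Thank you" in banner.text
    finally:
        driver.quit()
```

The explicit wait returns as soon as the banner is visible and only fails after the full timeout, so the test no longer depends on the page always rendering within a fixed two seconds.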

4. Show stakeholders the money

Show all of your stakeholders that the test automation suite's coverage has increased with the re-addition of these newly repaired tests, and that this has resulted in greater confidence in the quality of the product.

5. Rinse and repeat

Repeat the previous two steps until your test automation coverage reaches acceptable levels again. If you're stuck with tests that are hard to fix, consider throwing them away. There's no shame in doing so; in fact, I prefer having no automated test at all to having one that's unreliable and keeps asking for attention.

The ultimate goal is to have a fully functional, reliable, and trustworthy set of automated tests once more. Trust is easily lost but hard won, so it might take some time to rebuild that trust.

But don't let that keep you from putting in the work. Keep the goal in mind: a trustworthy set of automated tests that adequately covers your risks during development and deployment.

Prevent test automation rot in the first place

As with so many things in life, prevention is better than a cure. To avoid having to go through the (often long) recovery process described above, make fixing unreliable tests your highest priority as soon as you smell test automation rot coming from a small set of tests that have started acting up. Put off building new tests until you've fixed the problem.

And remember one possible solution: You can just throw away the rotten test entirely. What you want to prevent at all costs is the phenomenon described in the "broken window" theory, where the existence of one failing test provides a reason not to fix others, ultimately letting the problem get out of hand.
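If your suite uses a quarantine marker like the one sketched earlier, one way to keep that first broken window from multiplying is a small collection hook that fails the run when the quarantine pile grows beyond an agreed budget. The hook is standard pytest, but the marker name, the limit of five, and the policy itself are assumptions you would tune to your own team:

```python
# conftest.py -- fail the run if the quarantine pile grows beyond an agreed budget.
import pytest

MAX_QUARANTINED_TESTS = 5  # example budget, not a recommendation

def pytest_collection_modifyitems(config, items):
    quarantined = [item for item in items if item.get_closest_marker("quarantine")]
    if len(quarantined) > MAX_QUARANTINED_TESTS:
        names = ", ".join(item.nodeid for item in quarantined)
        pytest.exit(
            f"{len(quarantined)} tests are quarantined "
            f"(budget is {MAX_QUARANTINED_TESTS}): {names}",
            returncode=1,
        )
```

That way, ignoring a test remains possible as a short-term measure, but letting the pile grow quietly is not.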

When you take care of your test automation hygiene and deal with malfunctioning automated tests as soon as you see them popping up, your chances of maintaining everyone's trust in the information they provide will be significantly higher. Make your tests so good they can't be ignored.

Meet Bas at TestBash Netherlands 2019 on May 23, where he will be running a workshop on "Investigating the Context—How to Design an Effective Automation Strategy."
