

Why separate test automation teams don't work

Stephen Frein, Senior Director, Software Engineering, Comcast

They were the "A-Team"—a recently formed group of test automation professionals who were working in a new and fast-moving product ecosystem. The development teams had been creating product increments for months, and the centralized QA team had been absorbing these and running manual test suites against them.

Everybody knew that purely manual testing would be too slow for the intended release cadence, and so the testing had to be automated as quickly as possible. 

The A-Team was charged with taking the test cases from the manual test teams and automating them. By the time they started, the team had a large backlog of cases to work through, so they established a series of aggressive but achievable milestones. After making some technical choices, such as development languages and an automation framework, they got to work.

At first, progress seemed promising. The team was able to produce and demo a set of test cases at a pace that closely resembled initial projections. However, once the low-hanging fruit was gone, they began to encounter challenges. The test cases they were working on started to seem increasingly half-baked, so they had to spend additional time figuring out what automation they should be doing.

Since the development teams were feverishly producing new functionality, it was difficult to get their guidance. Also, the products were hard to test in an automated fashion, so the A-Team had to resort to brittle techniques, such as screen scraping and browser driving, when it would have preferred to use APIs. The team's progress kept slowing, and its automation kept breaking as the products constantly changed beneath it.

Most of the time, asking one team to automate the testing of code produced by another one does not work. Here's why, and a better approach.

Could this happen at your company?

The A-Team tried to stanch the bleeding by meeting more frequently with the development teams and sending representatives to their Scrum meetings. The team explained its test cases to the developers and requested advance notice when they were changing related functionality.

The developers made a good-faith attempt to foresee coming impacts to test automation. But as the amount of automation grew, it was harder for them to remember the nuances of tests they were not personally building and running. They would have liked to help out more, but they were behind with their own work and had to race toward a promised launch date that was looking increasingly improbable.

Months into the automation effort, everybody was looking for ROI, but there was little to be found. With each new release and subsequent automation run, there were many failed cases, but it took a while to sort out which were real issues with the product and which were due to test environment peculiarities or automation rot.

The A-Team was falling further and further behind the projected schedule, and executives were asking uncomfortable questions about the progress and value of its work. Though the team had many valid reasons for the slow pace of its progress, nobody felt good about how the effort was going.

The wrong people were doing the work

You may recognize the story of the A-Team all too well, and wonder if it was based on your own organization. Most organizations that spin up a group like the A-Team wrestle with similarly unsatisfying outcomes. This is usually not because the personnel on such teams are incompetent, but because of an organizational design flaw.

Having one team develop a product and a separate one write automated tests for it is a common pattern that almost always ends badly. Organizations fall into this pattern for several reasons:

  • The team working on the core product code is too busy with "important" tasks such as building new functionality and can't afford to be slowed down by writing automation.
  • Automation is seen as a specialized skill, either because of interest and background (e.g., "developers don't think like testers") or because of particular tools and frameworks in use.
  • Automation is an afterthought, begun after a considerable amount of code is already written, and so a separate, dedicated team is used to help catch up.

On the face of things, these motivations may appear reasonable, but having a separate team automate tests for code others wrote is almost always a bad idea. 

Not-so-fast feedback

One of the main benefits of automated testing is fast feedback. Mistakes become less expensive to correct the sooner you catch them. A developer will be better equipped to debug and fix code written this morning than code created five days ago, and this situation deteriorates even more dramatically once code has made it to production.

Having automated tests written by a separate team inevitably slows down the feedback cycle. Developers should be engaged with a bug while they are still best equipped to fix it, when the relevant code is fresh in their minds.

There is also overhead in keeping a separate team apprised of changes that affect automation, and this rarely happens successfully. It takes meaningful work to consider the automation maintenance implications of each change being made, and even more work to discuss them with a separate group that has its own leadership, routines, and priorities.

In addition, without detailed study of the automation code, development teams won't always know which changes will affect automation. For example, adding a new <div> tag to a web page may break XPath-based element identification but leave CSS-based selectors untouched. Unless developers know how the automation is written, it can be hard for them to foresee when a change will damage it.
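To make that difference concrete, here is a minimal sketch, assuming the lxml and cssselect Python packages are installed; the page markup and locators are hypothetical. It shows how a position-based XPath locator breaks when developers add a new <div>, while a CSS selector keyed to a stable id keeps working:

```python
# A minimal sketch, assuming the lxml and cssselect packages are installed.
# The markup and locators below are hypothetical.
from lxml import html

before = """
<html><body>
  <div class="header">Site header</div>
  <div class="content"><button id="save-btn">Save</button></div>
</body></html>
"""

# The same page after developers insert a new banner <div> above the content.
after = """
<html><body>
  <div class="header">Site header</div>
  <div class="banner">New promotion!</div>
  <div class="content"><button id="save-btn">Save</button></div>
</body></html>
"""

positional_xpath = "/html/body/div[2]/button"  # depends on element position
css_by_id = "#save-btn"                        # depends only on a stable id

for label, page in [("before", before), ("after", after)]:
    doc = html.fromstring(page)
    print(label,
          "| xpath matches:", len(doc.xpath(positional_xpath)),
          "| css matches:", len(doc.cssselect(css_by_id)))

# Prints:
#   before | xpath matches: 1 | css matches: 1
#   after | xpath matches: 0 | css matches: 1   <- the positional XPath silently broke
```

A development team that owns both the page and the locator would likely notice the breakage as soon as it made the change; a separate automation team typically discovers it only when the next run fails.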

Likewise, when the core development team isn't responsible for test automation, it is less likely to build the system in a way that supports automation. The developers may not bother to ensure the presence of unique element identifiers in web interfaces, or to provide separate APIs for functionality that users will ordinarily invoke through a GUI.

Such omissions inhibit test automation efforts and hurt the product itself, since measures that benefit test automation (e.g., loose coupling and separation of concerns) tend to be good development practices in general. An easily tested system is typically an easily maintained system.
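As an illustration of that point, here is a hedged sketch; the Flask app, route, and order data are hypothetical and not drawn from any product described above. It shows a capability exposed through an API alongside the GUI, so automated tests can exercise the behavior directly instead of scraping screens:

```python
# A hedged sketch: the Flask app, route, and order data are hypothetical.
# The point is that the same capability the GUI invokes is also reachable
# through an API, so automated tests don't have to drive a browser.
from flask import Flask, jsonify, request

app = Flask(__name__)
ORDERS = {}  # in-memory stand-in for a real persistence layer

@app.route("/api/orders", methods=["POST"])
def create_order():
    # Same logic the GUI form submits to, exposed without a browser in the way.
    payload = request.get_json()
    order_id = len(ORDERS) + 1
    ORDERS[order_id] = payload
    return jsonify({"id": order_id}), 201

def test_create_order():
    # Automated test that calls the API directly; no screen scraping required.
    client = app.test_client()
    resp = client.post("/api/orders", json={"item": "widget", "qty": 2})
    assert resp.status_code == 201
    assert resp.get_json()["id"] == 1

if __name__ == "__main__":
    test_create_order()
    print("order API test passed")
```

The API helps the testers, but it also nudges the product toward looser coupling, since the GUI becomes just one client of the same underlying capability.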

A better path to automation

If having a separate test automation team is an anti-pattern, then the preferred approach becomes clear: The same person who writes the code that implements a piece of functionality should also write the automated tests for it.

Automation code needs to become a first-class part of the core codebase, written by the same team developing the functionality, and it needs to evolve along with that functionality, rather than being tacked on at a later time.

The more you separate the implementation of functionality and associated test automation, the harder it is to produce and maintain that automation and the more you dilute its benefits, including fast feedback and the reinforcement of sound architectural practices.

Development teams should treat automated testing as part of the product, and testability as a first-class, nonfunctional requirement for which they need to make provisions. When they change the capabilities of the product, they need to evolve the test code accordingly, just as they would keep any other interdependent parts of the software in sync.
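As a simple, hypothetical illustration (the module and function names are invented for this example), product code and its automated test can live side by side in the same codebase and change in the same commit:

```python
# A hypothetical illustration; pricing.py and test_pricing.py are invented names.
# Product code and its test sit in the same codebase and change together.

# pricing.py -- product code owned by the development team
def order_total(quantity, unit_price, coupon=None):
    """Return the order total, applying a 10% discount for the SAVE10 coupon."""
    total = quantity * unit_price
    if coupon == "SAVE10":
        total *= 0.9
    return round(total, 2)

# test_pricing.py -- lives beside the product code; when the team changes the
# discount rules, this test is updated in the same commit.
def test_order_total_applies_coupon():
    assert order_total(2, 10.0) == 20.0
    assert order_total(2, 10.0, coupon="SAVE10") == 18.0

if __name__ == "__main__":
    test_order_total_applies_coupon()
    print("pricing tests passed")
```

When the discount rules change, the same team updates both files together, so the tests never lag behind the functionality.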

An exception for end-to-end testing

Although it makes little sense to have one team automating the tests for code that another team wrote, you'll sometimes have tests that go beyond the work of any one team. That is, you might have ecosystem-level, end-to-end test cases that knit together the work of multiple teams.

While we would want each of those teams to write its own automated tests for its direct interfaces with sibling systems, it would be duplicative for each of them to write a set of high-level, end-to-end cases meant to validate the continuing functions of the broader ecosystem.

In that case, moving the automation of such tests out to a separate team with overarching responsibility probably makes sense. 
