

The future of software testing: How to adapt and remain relevant

Matthew Heusser, Managing Consultant, Excelon Development
 

Does having human testers slow down a software development group? That's what Yahoo seemed to claim in a recent IEEE Spectrum article that discussed how the company eliminated the tester role from its organization. The article's primary claim was the following:

"What happens when you take away the quality assurance team (QA) in a software development operation? Fewer, not more errors, along with a vastly quicker development cycle."

The IEEE article claimed that having traditional testers is detrimental to team performance, but you need to understand the full context. While traditional software testing roles in QA teams may be going away in some organizations, the work testers do isn't. If you understand what changes are coming, why they're happening, and how to hone your skills and adapt, you'll not only survive but thrive in this new environment.

The problem with traditional testing groups

The process described in the IEEE article involves a development group handing off work to a testing group, with the two groups coordinating primarily through paperwork and bug tickets. That handoff creates delays in the find/fix/retest loop. If the programmers have a high level of work in progress, there can be a delay between find and fix; likewise, if testers have a great deal of work to test, there can be another delay between fix and retest. Written communication allows only slow feedback, which leads to arguments about whether the issue is actually a bug, whether it should be fixed, whether it works on the developer's machine, and so on. The article implies that the testing work involved retesting the same things over and over, following the same steps each time, which is sometimes called scripted (manual) regression testing.

About the time that the extreme programming trend took off, Elisabeth Hendrickson was demonstrating the risk associated with a multiteam model. In her 2001 Software Measurement Conference presentation, "Better Testing—Worse Quality?" she suggested that programmers who knew that someone else would be checking their work would be less likely to check their own work, leading to lots of bug filing, fixing, and retesting.

Combined with poor coding practices, this could lead to more bugs, repeating the cycle. Hendrickson's work included a systems-effect diagram showing that some of those consequences are natural, while others follow from management choices. For example, management, driven by deadlines and looking only at "code complete," may push programmers to go faster. Pressed to find velocity somewhere, programmers may then do shoddy work and skip testing. Why not skip it, the thinking goes; isn't that what the test department is for?

That's not the only problem with having a traditional test department.

Batches, delays, handoffs, and touch time

Don Reinertsen's book, The Principles of Product Development Flow, discusses two competing issues: transaction costs, which are the costs to get any piece of software released, and holding costs, which are the economic costs of not releasing. For physical goods, holding costs include things like warehouse space; for software, they are the cost of delaying the product, the opportunity cost that could be realized if the software were already out. According to Reinertsen, the rational choice is to find the sweet spot where these costs balance: release less often to keep the number of transactions per year down, but not so rarely that you never make any money.
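To make that trade-off concrete, here is a minimal toy model in Python. The cost figures are invented for illustration (they are not from Reinertsen's book); the point is only that total cost is U-shaped, so there is a cheapest cadence somewhere between "never release" and "release constantly."

```python
# Toy model of the transaction-cost vs. holding-cost trade-off.
# All dollar figures are invented for illustration.

TRANSACTION_COST = 50_000      # assumed cost to test, package, and ship one release
HOLDING_COST_PER_WEEK = 4_000  # assumed cost of delaying one week's worth of work by one week

def annual_cost(releases_per_year: int) -> float:
    """Total yearly cost of releasing at a given cadence."""
    interval = 52 / releases_per_year  # weeks between releases
    # Each batch holds `interval` weeks of work; on average, each week's worth
    # of finished work waits half an interval before it ships.
    holding_per_release = HOLDING_COST_PER_WEEK * interval * (interval / 2)
    return releases_per_year * (TRANSACTION_COST + holding_per_release)

if __name__ == "__main__":
    for n in (1, 2, 4, 12, 26, 52):
        print(f"{n:2d} releases/year -> ${annual_cost(n):,.0f}")
    best = min(range(1, 53), key=annual_cost)
    print(f"Sweet spot in this toy model: {best} releases/year")
```

With these made-up numbers, releasing once a year is dominated by holding costs, releasing weekly is dominated by transaction costs, and the cheapest cadence lands somewhere in between.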

If testing takes three months, you aren't going to release every day. (If you did try, each release would carry one day of new features followed by roughly three months of testing, so you'd complete just under four releases a year.) Instead, you'll release maybe twice a year and spend half your budget on testing. An executive with a new feature request might get the feature in four months, if they ask at the right time and can jump to the front of the line. Most likely, however, the executive needs to get the request in during planning, before the development cycle starts, making the delay from request to production more like seven months.
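The cadence and lead-time numbers above come from simple arithmetic. The phase lengths in this sketch are assumptions, since the article does not spell them out; they show one plausible way the math works out.

```python
# Back-of-the-envelope math for a three-month (~90 day) test cycle.
# Phase lengths are assumptions used only to illustrate the arithmetic.

DAYS_PER_YEAR = 365
TEST_DAYS = 90   # assumed ~3 months of release-candidate testing
DEV_DAYS = 1     # one day of new features per attempted "daily" release

# Trying to release "daily" still pays the full test cycle every time:
print(f"{DAYS_PER_YEAR / (DEV_DAYS + TEST_DAYS):.1f} releases per year")  # ~4.0

# Twice-a-year cadence, assuming ~3 months of development plus ~3 months of testing:
dev_months, test_months = 3, 3
best_case = 1 + test_months                  # jump the queue late in a cycle: ~4 months
typical_case = 1 + dev_months + test_months  # wait ~1 month for the next planning window: ~7 months
print(f"best case ~{best_case} months, typical ~{typical_case} months")
```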

Knowing that there is a huge delay from idea to production, executives strive to get their ideas on the list as soon as they come up with them, making the backlog grow further. Because releases and patches are expensive, the test team wants to get the release right, so the duration of the test process increases, the transaction cost goes up, and it makes sense to release less often, with bigger batches of work. Bigger batches mean more unverified decisions, which mean more errors, which need more fixes, which take more time...and the cycle continues.

A few years ago, I had a client that spent about half of its time in a massive test/fix cycle. When we scheduled meetings, we had to plan around that cycle, saying things like, "It will have to be before August or after October." All of that drives up costs, but it is not the only way to think about delivery.

Let's go back to that example with the three-month release-candidate testing. Say we really did only one day of development, that the release was thematic and hit only one subsystem, and that we had a way of isolating that subsystem so the changes could not damage other systems. Imagine further that we had techniques to monitor production and could easily roll the change back. Would we really spend three months release-candidate testing everything? Probably not.
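As a rough sketch of what "monitor production and roll the change back" could look like, consider something like the following. The metrics endpoint, error-rate threshold, and deploy script are hypothetical placeholders, not any particular tool's API.

```python
# Minimal post-deploy watch: poll an error-rate metric for a while and roll
# back if it degrades. URL, threshold, and rollback command are placeholders.
import json
import subprocess
import time
import urllib.request

METRICS_URL = "https://metrics.example.com/api/error_rate"  # hypothetical endpoint
ERROR_RATE_THRESHOLD = 0.02   # roll back if more than 2% of requests fail
WATCH_MINUTES = 30

def current_error_rate() -> float:
    with urllib.request.urlopen(METRICS_URL) as resp:
        return float(json.load(resp)["error_rate"])

def watch_and_rollback(previous_tag: str) -> None:
    deadline = time.time() + WATCH_MINUTES * 60
    while time.time() < deadline:
        if current_error_rate() > ERROR_RATE_THRESHOLD:
            # Hypothetical rollback: redeploy the previous tagged release.
            subprocess.run(["./deploy.sh", "--rollback-to", previous_tag], check=True)
            print(f"Error rate too high; rolled back to {previous_tag}")
            return
        time.sleep(60)
    print("Release looks healthy; keeping it.")

if __name__ == "__main__":
    watch_and_rollback("release-previous")  # placeholder tag name
```

With cheap isolation, monitoring, and rollback like this, the economic case for a three-month, test-everything cycle weakens considerably.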

Traditional, large-batch testing can sow the seeds of its own destruction, not just from a quality perspective, as Hendrickson pointed out, but also from an economic one.

New model: Testing and continuous delivery

The ideas above, taken to an extreme, point to the continuous delivery that Yahoo is advocating. Automated tools can take any code change, perform a build, and push the result to a personal or staging server. That does not mean the code automatically goes to production; that would be continuous deployment. Instead, a workflow engine can notify someone that the work is ready, and that person can then test the new piece of work. As Carmen DeArdo pointed out during the recent DevOps Enterprise Summit, the delivery model (continuous or traditional) does not matter, as long as delivery is consistently managed through the same tool. That means some teams can move to continuous delivery while others release weekly, and regulated or waterfall projects release twice a year, without sacrificing the quality of information for decision-makers.
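A minimal sketch of that kind of pipeline might look like the following. The build, test, deploy, and notification scripts are placeholders, and the manual approval step at the end is what keeps this continuous delivery rather than continuous deployment.

```python
# Sketch of a continuous-delivery flow: every change is built, checked, and
# pushed to staging automatically, but promotion to production waits for a
# human decision. The scripts invoked below are placeholders.
import subprocess

def run(cmd: list[str]) -> None:
    """Run one pipeline step and stop the pipeline if it fails."""
    print("->", " ".join(cmd))
    subprocess.run(cmd, check=True)

def on_change(commit_sha: str) -> None:
    run(["./build.sh", commit_sha])                        # build the change
    run(["./run_unit_tests.sh"])                           # fast automated checks
    run(["./deploy.sh", "--env", "staging", commit_sha])   # continuous delivery stops here
    # Notify a human that the build is on staging and ready to explore.
    run(["./notify.sh", f"Build {commit_sha} is on staging and ready to test"])

def promote_to_production(commit_sha: str, approved_by: str) -> None:
    # Continuous *deployment* would skip this manual approval and ship automatically.
    print(f"Promotion of {commit_sha} approved by {approved_by}")
    run(["./deploy.sh", "--env", "production", commit_sha])
```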

What should you make of all this? First of all, understand that companies don't shift to DevOps overnight, and even if they did, testing work would remain. Someone has to explore the software to find bugs and reduce risk. The ability to isolate a change into smaller and smaller batches, making release cheap, and the ability to roll back make it tempting to give the entire delivery, end-to-end, to a pair of programmers. Testing still happens; it is just not a specialty role on the team. Eliminating the test "handoff" removes a delay from the process and speeds it up.

Is testing dead?

It has been four years since James Whittaker, then a director at Google and now at Microsoft, stood up at a STARWest event and proclaimed that testing was dead. There has been a great deal of rhetoric since, including Google directors dressed up as the grim reaper. Now that Microsoft and Yahoo have eliminated traditional test positions, the rhetoric has shifted to the claim that automation is the future.

While these rumors of the death of software testing persist, they're greatly exaggerated.

What might go away are the easy, repeat-the-steps, follow-the-process testing jobs supported by a test organization with its own managers, directors, and vice presidents of software quality. What becomes of these testers? Some might become coaches or "smoke jumpers" who contribute to projects in need. Overall, however, the ratio of testers to developers will continue to shrink, and the average skill level required will rise. In some organizations, traditional testing jobs may become exceptionally rare or go by different titles.

The question for testers today is not whether the role will exist; it's whether testers are willing to invest in remaining relevant. For example, the abilities to spot problems in production, to flag risks in requirements, and to recognize combinations of changes that could destabilize the system are all critical.

The new generation of testers must adapt their strategy to find emergent risks. That is, instead of running classic "check everything" regression-test processes, testers need to find out what is different and what matters for the release. That could include:

  • Checking version control for what actually changed in the release (see the sketch after this list)
  • Talking to developers about their concerns
  • Studying production logs for what features customers are using
  • Taking stock of the system's performance in production and what features are degrading over time
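For the first item, a few lines of scripting against version control can summarize which areas a release actually touches, so testing effort can focus there. The release tag names below are assumptions; substitute your own tags or branches.

```python
# Summarize what actually changed between two release tags.
import subprocess
from collections import Counter

def changed_files(old_tag: str, new_tag: str) -> list[str]:
    """List files that differ between two git tags."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{old_tag}..{new_tag}"],
        check=True, capture_output=True, text=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def changes_by_area(files: list[str]) -> Counter:
    # Group by top-level directory as a rough proxy for subsystem.
    return Counter(path.split("/")[0] for path in files)

if __name__ == "__main__":
    files = changed_files("release-1.2", "release-1.3")  # assumed tag names
    for area, count in changes_by_area(files).most_common():
        print(f"{area}: {count} files changed")
```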

The point is to create a custom test process, one that is powerful, doesn't slow development, and provides feedback early. It should provide feedback not only about the code but also about every stage of the process. That includes fleshing out the details on features to reduce defects in the first build, and identifying the unintended consequences of requirements.

That's what you need to do to survive—and thrive—as a tester in this changing climate. So buckle up, but try to enjoy the ride. 
