Will AI bots steal your QA testing job?
About a year ago, at a major testing conference, five executives sat in front of roughly 300 testers and declared adamantly that machine learning, a branch of artificial intelligence, would take over software testing.
Were they right? Yes and no. Machine learning won't necessarily eliminate testing jobs, but it will change how the work gets done.
In the nearly 60 years since machine learning was first envisioned, it has been applied to many fields. Since 1991, it has been used to identify cancerous tumors in kidneys, and today it helps identify other types of cancer as well. Since 2010, it has been used to teach driverless cars where the edge of the road is. It has been used in finance since 1992 to trade securities. Insurance underwriters and reinsurance companies use it to project potential losses from natural disasters. It has even been used to determine whether you should get a loan.
Oh, and did I mention it’s being used to test mobile devices?
The end of testing as we know it?
Appdiff is one of the pioneers in AI-assisted mobile testing. Jason Arbon, the company's CEO and founder, says Appdiff built its AI-driven mobile testing platform to enable mobile application testing without any human involvement. The first challenge was teaching a bot to perform gestures on iPhones and Android phones. Next, the bots had to learn about specific applications: What actions could these agents take in different apps? How could they navigate through them? What types of input should they enter into input fields? These are all challenges the Appdiff team has overcome.
But learning how to interact with a mobile app isn't what testing is. Testers know this; there's more that we do. We bring an understanding of the business domain and a set of heuristics for exposing defects. We know how to think like both the best and the worst users of an application. We look out for our company's best interests and those of its users, through exploration of applications and by thinking through eventualities. What I've described above is little more than clever fuzzing of an application. That's not testing, and it's not very smart.
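To make "clever fuzzing" concrete, here is a minimal, purely illustrative sketch of a monkey-testing bot. The toy app model, screen names, and crash condition are all hypothetical; this is not Appdiff's code or algorithm, just the kind of random UI exploration the paragraph above describes.

```python
import random

# A toy "app" model: each screen exposes actions that lead to other
# screens; one action deliberately signals a crash to simulate a defect.
APP = {
    "home":    {"tap_login": "login", "tap_search": "search"},
    "login":   {"submit": "home", "back": "home"},
    "search":  {"type_query": "results", "back": "home"},
    "results": {"open_item": "detail", "back": "search"},
    "detail":  {"share": "CRASH", "back": "results"},  # a hidden defect
}

def monkey_test(app, steps=200, seed=0):
    """Randomly walk the app's UI, recording any crashing (screen, action)
    pairs and every screen the walk manages to visit."""
    rng = random.Random(seed)
    screen, crashes, visited = "home", [], {"home"}
    for _ in range(steps):
        action = rng.choice(list(app[screen]))
        result = app[screen][action]
        if result == "CRASH":
            crashes.append((screen, action))
            screen = "home"  # restart the app after a crash
        else:
            screen = result
            visited.add(screen)
    return crashes, visited

crashes, visited = monkey_test(APP)
print(f"visited {len(visited)} screens, hit {len(crashes)} crashes")
```

A bot like this can cover a lot of surface area quickly, but it has no idea whether any non-crashing result is actually correct, which is the gap the next paragraph addresses.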
But Appdiff didn’t stop with “advanced fuzzing.” It taught its machine-learning algorithms to recognize when the outcome of a given action was likely to expose a defect. They learned to judge, with a high level of certainty, when an action and its result appeared to deviate from expectations.
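One simple way to ground this idea: learn which result usually follows each (screen, action) pair from past runs, then flag results that deviate from the learned expectation. The sketch below is a hypothetical illustration of that principle, not Appdiff's actual algorithm; the class, threshold, and screen names are all invented for this example.

```python
from collections import Counter, defaultdict

class DeviationDetector:
    """Toy detector: record which result follows each (screen, action)
    pair, then flag results that contradict a strongly dominant pattern."""

    def __init__(self, min_confidence=0.9):
        self.min_confidence = min_confidence
        self.history = defaultdict(Counter)  # (screen, action) -> result counts

    def observe(self, screen, action, result):
        self.history[(screen, action)][result] += 1

    def is_deviation(self, screen, action, result):
        seen = self.history[(screen, action)]
        total = sum(seen.values())
        if total == 0:
            return False  # nothing learned yet, so nothing to deviate from
        expected, count = seen.most_common(1)[0]
        # Flag only when one result dominates past runs and this run differs.
        return count / total >= self.min_confidence and result != expected

det = DeviationDetector()
for _ in range(20):
    det.observe("login", "submit", "home_screen")
print(det.is_deviation("login", "submit", "error_dialog"))  # prints True
```

The interesting design choice is the confidence threshold: the detector stays quiet until it has seen a consistent pattern, which is how it can claim "a high level of certainty" rather than flagging every novel result.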
That’s starting to sound a lot more like testing, isn’t it?
Arbon claims Appdiff tests about 90 percent of the surface area of a typical mobile application. How does that compare with human testers? It is rare, he said, for companies with human testers to cover even 90 percent of a mobile application's surface area. As for the last 10 percent, it's either too costly or too complex for most companies to test. Further, very few testers can interact with an application quickly and deeply enough to reach that last 10 percent. So even when companies want to reach the last 10 percent of an application's use cases or functionality, what is the likelihood they have someone available with the skills to do it?
What all this means is that Appdiff does what most testers would do anyway.
Starting to worry? Arbon, who also co-authored the book How Google Tests Software, says testing is harder than writing software: “You have to be smarter than the programmer to find problems in the code.”
As a tester, I love the sound of those words. As a software engineer, I’m skeptical. Arbon predicts that “writing software is a field ML will conquer before it will conquer testing.”
The human differentiators
George Neal, chief analytics officer at PrecisionLender, says AI won’t take over testing. But he also says that testing is going to get much harder as we introduce machine learning into applications because we won’t know what the application is supposed to do in all cases. With the most difficult problems, machine learning will be making choices based on likelihood, not certainties. Testers, we’re not in Kansas anymore.
“For people who don’t like to do what humans do well, the future is a very scary place,” Neal says. The idea is that humans are very good at creativity, exploration, understanding, analysis, and the application of knowledge. People who don’t enjoy those activities are going to have a hard time finding work they like in the future.
Like Arbon, Neal answered no when I asked him whether machine learning would take over testing. But, he added, “testing will get harder.”
Throughout the history of software development, we’ve practiced testing as a deterministic activity: a computer produced only results that we could decide, with confidence, were right or wrong. Machine learning changes that. It introduces non-deterministic results to larger, more complex problems. In the past, our most difficult non-deterministic testing activities were things like reproducing the preconditions necessary to expose defects in multi-threaded environments. But as machine learning crosses into the mainstream of software development, non-deterministic behaviors will become more prevalent. As testers, how will we embrace the challenge of exposing defects in application results that have no single right answer?
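One practical answer is to stop asserting exact outputs and start asserting statistical properties across many runs. The sketch below is a hypothetical example: the `recommend` function is a stand-in for a non-deterministic ML component, and the thresholds are illustrative, not industry standards.

```python
import random

def recommend(user_id, rng):
    """Stand-in for a non-deterministic ML component: returns a ranked
    list of 3 item IDs that can vary from run to run."""
    items = list(range(10))
    rng.shuffle(items)
    return items[:3]

def test_recommendations_statistically(runs=1000):
    """Instead of asserting one 'right answer', assert invariants that
    must hold on every run, plus rate bounds across many runs."""
    rng = random.Random(42)
    counts = {i: 0 for i in range(10)}
    for _ in range(runs):
        recs = recommend(user_id=1, rng=rng)
        assert len(recs) == len(set(recs)) == 3   # no duplicates, fixed size
        assert all(0 <= r < 10 for r in recs)     # only valid item IDs
        for r in recs:
            counts[r] += 1
    # With uniform shuffling, each item appears about 30% of the time;
    # flag gross deviations rather than demanding an exact output.
    for item, n in counts.items():
        assert 0.15 < n / runs < 0.45, f"item {item} rate {n / runs:.2f}"

test_recommendations_statistically()
print("statistical checks passed")
```

The shift is from "is this output correct?" to "do the outputs, in aggregate, behave as the model is supposed to behave?", which is exactly the kind of question testers will need to learn to ask.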
Staying ahead of the bots
Whether testers will be replaced by AI bots appears to be as unclear as the new problem sets we’re beginning to encounter. How will we as testers adapt? What do we need to do to stay ahead of the machine-learning curve? How will we continue to reduce risk for our companies in an age where uncertainty is certain to proliferate?
I can’t tell you if AI bots will take over your testing job. What I can tell you is that your testing job is about to change dramatically.