

6 ways to rightsize your tests with analytics

Ori Bendet, Inbound Product Manager, Micro Focus

The way organizations develop software has changed significantly over the last few years. From agile to DevOps to continuous everything, developers are running faster and developing more content in less time.

As a tester, you've got to keep up. You must enable the business to run faster and decrease time-to-market, but without damaging the quality of the product, which would negatively affect the value of your brand. Users expect fast updates, fixes, and feature enhancements to products that they use and love. And you need to achieve all of this while reducing costs. So how do you rightsize your tests to strike the right balance?

Testing without a plan can result in high costs and time spent on areas of the product that yield little value. But you can significantly reduce the amount of testing you do while maintaining reasonable confidence in your regression coverage. Below I offer six tips for striking that balance. First, however, you need to understand the challenges.

 

The challenges in test management, design, and execution

As a test engineer, you work with modern apps that are no longer simple client/server software. Applications often involve multiple services, sometimes from third parties, served from cloud infrastructures. Development teams are moving away from waterfall methodologies and into constant feedback and continuous testing throughout the development cycle.

What's more, different testing teams are bringing their own tools and processes; every team in the organization defines its own tools and uses a combination of vendor-based and open-source software to accomplish its testing goals.

In addition to complicated test domains, test environments are increasingly complex. Users operate on a wide range of mobile and desktop hardware, as well as on the many different software offerings available in each ecosystem. Fortunately, test engineers can use business data to learn about their users' production environments instead of making educated guesses about what customers are using when creating tests.

With more features and fixes to test and less time to test them, you need rapid feedback cycles, achieved through robust test planning, that take minutes, not days or weeks. Here's how to achieve that.

Use analytics to mitigate risk

Test managers dealing with applications of increasing size and complexity know that they can’t test everything. Even if you had the resources and time to test everything, that would not be a wise business decision. On the other hand, making and justifying decisions about what to test—and what not to test—can be difficult.

Analytics offer a scientific solution to the problem of reducing the scope of regression tests while mitigating risk. Here are six ways to use analytics to create a regression strategy that significantly reduces the number of tests you need while maintaining confidence in the results, all without hurting the quality of the application under test.

1. Use analytics tools 

You can use an analytics tool, such as Google Analytics, to gather information about your product that you can use to optimize testing environments. Analytics can help pinpoint the highest-risk areas of your product for regression testing and validate your initial assumptions and concerns about your product. This effort breaks down into two parts:

  • Environment breakdown: Find out which browsers or mobile devices the highest percentage of your customers use and focus your testing efforts on those.
  • Demographics breakdown: Understand your customers. Who are they, where are they from, and what networks do they use? This will help you understand how frequently users are switching to new devices and operating systems.
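To make the environment breakdown concrete, here is a minimal sketch of how you might pick the smallest set of browsers to cover a target share of your traffic. It assumes you have already exported usage data from your analytics tool as (name, sessions) pairs; the browser names and numbers below are invented for illustration.

```python
def smallest_covering_set(usage, target_share=0.9):
    """Return the fewest environments whose combined share of sessions
    meets or exceeds target_share."""
    total = sum(sessions for _, sessions in usage)
    chosen, covered = [], 0
    # Greedily take the most-used environments until coverage is reached.
    for name, sessions in sorted(usage, key=lambda x: -x[1]):
        if covered / total >= target_share:
            break
        chosen.append(name)
        covered += sessions
    return chosen

# Made-up export: (browser, sessions) over the last 30 days.
browsers = [("Chrome", 6200), ("Safari", 2100), ("Edge", 900),
            ("Firefox", 500), ("Other", 300)]
print(smallest_covering_set(browsers))  # → ['Chrome', 'Safari', 'Edge']
```

Three of five environments cover 90% of sessions here; the rest can be dropped from routine regression with a known, quantified risk.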

2. Deep dive into your product 

Add secondary dimensions in analytics to understand the optimal combinations that reflect what customers are using in production. For example, you might combine analytics about the most popular browsers with analytics about the most-used features to get a better understanding of the highest-risk, highest-impact areas of the product. Use the secondary dimensions available in analytics tools to determine how much testing you need to do, and where, in order to gain the confidence and coverage levels you need.
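The cross-tabulation behind a secondary dimension is simple to sketch. Assuming you can export raw events as (browser, feature) pairs, this hypothetical example counts the combinations and surfaces the highest-traffic ones; the records are invented.

```python
from collections import Counter

# Invented export of (browser, feature) events from an analytics tool.
records = [
    ("Chrome", "checkout"), ("Chrome", "checkout"), ("Chrome", "search"),
    ("Safari", "checkout"), ("Safari", "search"), ("Edge", "search"),
]

# Count each (browser, feature) combination; the biggest counts mark the
# combinations where a regression would reach the most users.
combos = Counter(records)
for (browser, feature), hits in combos.most_common(3):
    print(f"{browser} / {feature}: {hits}")
```

The top combinations are where you want your deepest test coverage; combinations that never appear in production data may not need regression coverage at all.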

3. Focus on user behaviors

Study user behaviors to find the most active areas of the site and to pinpoint how users regularly engage with your product. Do you have only an hour to test your application? Use this breakdown to ensure that you understand what your users consider to be the most important feature in your product and focus on that. Then use the data from user engagement to determine what areas you need to test based on user behavior. Use page interaction rates, including where users spend the most time, and bounce rates to determine which parts of your product might present a problem. Looking at data for all pages or features, and testing your most commonly used features, will give you a high confidence level.

4. Apply analytics everywhere

When testers and managers hear the word analytics, they might think automation. But you can apply analytics to any kind of testing as a way to decide which tests to run on which pipelines. Test managers can even use analytics to help balance resources strategically by determining which test strategies to apply to which parts of the product.

5. Consider other analytics tools and data sources

Data is available everywhere, and you can use it to learn about areas of your product you didn’t test in regression. Customer service cases and complaints can amplify problems lurking in the software that may need your attention. These cases are often tracked, which means they are searchable and usable for customer feedback analytics. And defect-tracking tools that gather analytics about escaped defects in production can provide additional feedback. These additional data points can also help inform your future testing strategy.

6. No web analytics? No problem

It's okay if you are working on a product that does not use a web analytics tool to gather data about your users. Just gather user information through regular user surveys, market analytics and statistics, or customer validations and feedback. You can analyze data from each of these sources to direct your test strategy by determining your application's most common use cases and most popular features. One common heuristic for deciding what to triage is Karen Johnson's RCRCRC: Recent, Core, Risky, Configuration, Repaired, Chronic.
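RCRCRC can be applied mechanically once you tag each feature with the six flags. Here is a rough triage sketch under that assumption: score each feature by how many flags apply, then test the highest scorers first. The feature names and flags are invented.

```python
FLAGS = ("recent", "core", "risky", "configuration", "repaired", "chronic")

# Invented feature inventory tagged with the six RCRCRC flags.
features = {
    "login":    {"recent": False, "core": True,  "risky": False,
                 "configuration": True,  "repaired": False, "chronic": False},
    "payments": {"recent": True,  "core": True,  "risky": True,
                 "configuration": False, "repaired": True,  "chronic": False},
    "reports":  {"recent": False, "core": False, "risky": False,
                 "configuration": False, "repaired": False, "chronic": True},
}

# Count how many risk flags apply to each feature and rank descending.
ranked = sorted(features,
                key=lambda f: sum(features[f][flag] for flag in FLAGS),
                reverse=True)
print(ranked)  # → ['payments', 'login', 'reports']
```

In practice you might weight the flags differently (for example, weighting "core" more heavily), but even an unweighted count gives you a defensible test order without any web analytics.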

These six strategies do more than help create a test strategy; they can collapse an exhaustive test strategy into a “just-enough” test strategy that finds the most important problems quickly while reducing your test efforts by up to 80%.

Calculate risk to help reduce your test effort

Take as an example a 10-person team pursuing a “test everything” strategy. It plans to test 30 features, each of which requires a half-day of testing time on each of 10 platforms. That adds up to 150 person-days per test cycle, which will take the team three weeks to complete. But if you reduce that to 12 features on 2 platforms, the team can complete one cycle in a little over a day.
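The effort arithmetic from the example above, spelled out as a small sketch you can adapt to your own numbers:

```python
def cycle_days(features, platforms, days_per_test=0.5, team_size=10):
    """Return (total person-days, calendar days for the whole team)
    for one test cycle, assuming every feature is tested on every
    platform at half a day per feature-platform pair."""
    person_days = features * platforms * days_per_test
    return person_days, person_days / team_size

print(cycle_days(30, 10))  # (150.0, 15.0): 150 person-days, 3 work weeks
print(cycle_days(12, 2))   # (12.0, 1.2): a little over a day for the team
```

Cutting both the feature list and the platform matrix multiplies the savings, which is why the analytics-driven environment breakdown in tip 1 pays off so quickly.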

Automation can also help by testing certain core flows. Rotate which flows testers use (with something such as RCRCRC) to improve coverage over time.

Rightsize your tests: An example

Here's one way to determine testing strategies: Use social media responses as analytics. Due to the ubiquitous nature of social media, consumer feedback is a more powerful metric than ever. According to LNS Research, quality issues in the PlayStation 4, which occurred at a rate of 0.04%, were suddenly the main story about the console shortly after its release back in 2013.

Why did this happen? While the rate of defects was considered acceptable in the manufacturing process, test expectations were not aligned with user expectations. Had Sony considered analytics as it designed and tested the product, the dev and test teams might have devised a strategy that addressed consumer concerns while keeping costs in check.

As a new console, the PS4 had the classic new-product challenge with user analytics: No analytics exist for a product that does not exist. What Sony did have, though, was a previous product with similar games and scenarios. First-person shooter, real-time strategy, race-car driving, and sports games are roughly the same from build to build, while most other PS4 features, such as streaming Netflix, had been available on the PS3. A targeted test of the most commonly used features would have shown significant, brand-threatening quality problems.

It’s always possible that management, motivated by deadlines and reports to shareholders, will decide to release a product earlier than it should. At least with testing done well, management could make an informed decision. Even in the case of a poor release decision, the organization can learn, and then listen to similar feedback next time, as Sony's team probably did when it improved the release quality of the PS5.

There’s always hope.

Drive your testing with analytics

The challenge of software testing lies in finding problems that your customers think are important as early as possible. Exhaustive test approaches, if you can do them at all, are expensive and provide a great deal of information, some of which may not be relevant. The "do everything" approach also tends to push software delivery dates out.

Instead, find the 20% of scenarios that represent 80% of your use cases. That's a surprisingly easy task, once you have the right analytics in place. Get started by following the six steps above and you'll be well-equipped to get your testing mix just right.

How are you using analytics to support your testing strategy? I welcome your comments and questions.

 

 
