5 mobile app testing strategies to avoid

Think your app is bug-free? Be careful: according to the World Quality Report 2018-19, nearly half of all defects are discovered by users.

Why aren't developers finding these flaws? Blame bad testing. Some of the most popular test strategies can actually undermine your app.

Luckily, bad tests are easy to avoid. Here are the five most common mobile app testing mistakes and what you should do instead.


1. "In the wild" testing

When apps are tested "in the wild," developers throw caution to the wind, launch the app, and see what happens. In essence, real users are turned into beta testers.

Why to avoid it

In the wild testing is dangerous for a few reasons. First, you have little control over the user experience. You just released an app without knowing how it will respond to user actions, network conditions, or marketplace demand.

This is a huge risk. If the user experience sucks, the app's reputation and your brand image are going to suffer.

Second, in the wild testing doesn't have a systematic way to record and address problems. Even the most loyal customers can't be depended upon to report crashes and other snafus in a consistent manner.

As a workaround, some developers hire testers to provide feedback on their app.

The problem?

All app testers, whether organic users or crowdsourced employees, are using the app under uncontrolled, variable conditions. What causes the app to crash on a 3G connection in Germany may have little to do with what causes it to crash on LTE in Brazil.

So steer clear of tossing your app out there and letting the chips fall where they may—chances are you won't be able to track what happens, let alone solve the issues that are sure to arise.

2. Wardriving

Welcome to wardriving, a technique that's a really great testing idea...in theory.

Developers hire testers (uh-oh, we're already off to a bad start) to test the app in the wild (double uh-oh).

But instead of using the app whenever and wherever they happen to be, testers are directed to evaluate the app by walking around inside certain buildings or driving around a particular neighborhood to see how it performs in various geographical and network locations.

Why to avoid it

Wardriving is slightly more controlled than in the wild testing, but conditions still aren't ideal.

Even if you hired a substantial number of testers to perform wardriving in their respective cities and neighborhoods, their end-user experiences would be the result of conditions hyper-specific to them and them alone.

The location, device, and networks involved in the app's usage all create an individualized experience only applicable to that person on that particular day and time.

In other words, any data collected from Bob, who's testing the app in Toledo using a 3G connection on a Tuesday afternoon, can't be applied to Suzie, who's using the app on a Wi-Fi connection in San Francisco on a Wednesday morning.

Wardriving is a sneaky testing method because it feels great to get real data from real people in different conditions. The problem is that those conditions can't be applied to all users across the board, rendering this testing method only partially effective.

3. Going bonkers for bandwidth

Once you realize that any form of in the wild testing can't be relied upon, it's time to trudge into the lab and start emulating real-world conditions in a controlled environment.

Many developers and businesses go into the testing lab but don't go far enough, engaging in what's known as "partial emulation" testing. Partial emulation captures some—but not all—of the real-world conditions that impact the app.

Why to avoid it

Partial emulation often overlooks important environmental factors, creating an incomplete test that doesn't capture the full range of user experiences.

Consider network bandwidth. Apps are often tested under static bandwidth constraints, but bandwidth is rarely static in real life. A user may switch back and forth between 3G and 4G, 4G and Wi-Fi, or experience signal strength that's constantly fluctuating.

Latency is also an important factor—for many applications it's the main determinant of performance. Like other aspects of the mobile environment, latency is highly dynamic. It depends on factors like handshakes between routers and other network equipment, coding techniques, and network protocols.

Since the real-world mobile environment is so varied, there's little value in creating static conditions for mobile app testing in the lab.
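As a sketch of what dynamic, rather than static, lab conditions might look like, the snippet below drifts between hypothetical 3G/4G/Wi-Fi profiles and varies latency around each profile's base value. The profile figures are illustrative, not measured values:

```python
import random

# Hypothetical network profiles: (name, bandwidth_kbps, base_latency_ms).
# Real figures vary widely by carrier and region; these are illustrative.
PROFILES = [
    ("3G",    1_500, 120),
    ("4G",   12_000,  60),
    ("WiFi", 40_000,  20),
]

def fluctuating_conditions(steps, seed=42):
    """Yield (network, bandwidth_kbps, latency_ms) tuples that drift
    between network types and fluctuate latency, instead of holding
    one static bandwidth/latency pair for the whole test run."""
    rng = random.Random(seed)
    idx = rng.randrange(len(PROFILES))
    for _ in range(steps):
        # Occasionally hand off to a neighboring network type,
        # mimicking a user moving between 3G, 4G, and Wi-Fi.
        if rng.random() < 0.2:
            idx = max(0, min(len(PROFILES) - 1, idx + rng.choice([-1, 1])))
        name, bandwidth, base_latency = PROFILES[idx]
        # Latency fluctuates around its base value (+/- 30%).
        yield name, bandwidth, base_latency * rng.uniform(0.7, 1.3)

for name, bandwidth, latency in fluctuating_conditions(5):
    print(f"{name}: {bandwidth} kbps, {latency:.0f} ms")
```

A test harness driving a real network emulator from a schedule like this exercises the handoffs and fluctuations that a single static setting never will.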

4. Ignoring jitter

Jitter, the variation in packet delay over time, can be difficult to represent in a mobile app testing environment. Static values for bandwidth or latency are easier to create, so some tests downplay jitter entirely.

Why to avoid it

Test your app without accounting for jitter and the streaming needs it affects, and you run the risk of a seriously disappointing end-user experience (not to mention losing potential revenue and referrals).

When evaluating how your app is performing, there are two key areas to consider: how the app itself is operating, and how the app operates when used on a particular network. Put another way, ignoring jitter means you're ignoring network performance.

Thanks to its heavy bandwidth demands and real-time playback, video is particularly susceptible to jitter. Streaming quality is heavily influenced by location, network type, service provider, and other factors.

All of these factors must be accounted for in your testing strategies so the user experience is seamless and uninterrupted.
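One common way to quantify jitter is the smoothed interarrival estimator from RFC 3550 (the RTP specification). The sketch below applies it to a series of measured latencies; the sample values are illustrative:

```python
def interarrival_jitter(latencies_ms):
    """Estimate jitter with the smoothed formula from RFC 3550:
    J = J + (|D| - J) / 16, where D is the difference in delay
    between consecutive packets."""
    jitter = 0.0
    for prev, curr in zip(latencies_ms, latencies_ms[1:]):
        jitter += (abs(curr - prev) - jitter) / 16
    return jitter

# A stable connection vs. one with a similar average latency but
# wildly varying packet delays: only the second shows high jitter.
stable = [50, 51, 50, 49, 50, 51]
choppy = [20, 90, 15, 95, 10, 70]
assert interarrival_jitter(stable) < interarrival_jitter(choppy)
```

The point of the comparison is that average latency alone hides exactly the variation that degrades streaming, so a test environment should emulate and measure jitter explicitly.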

5. Sterile functional testing

Sterile functional testing occurs when developers test only those elements of the app that are functional in nature and fail to incorporate performance into the testing process.

Why to avoid it

The success of a mobile app is not based on functionality alone.

The way an app functions has to do with the what (i.e., what happens when a certain feature is selected or a certain button is pressed). The way an app performs, on the other hand, has to do with the how (i.e., how quickly the app responds when used on a particular network).

Measuring what happens when a user issues a command is therefore a functional test. Measuring how quickly the app responds to a request, on the other hand, is a performance test.

Both function and performance have to be tested in order to get a three-dimensional view of overall app capability.

While you certainly want your app to function properly, testing functionality without accounting for performance will never give you the entire picture of what your app can (or can't) do. And as we've seen thus far, overall app performance is heavily influenced by outside factors such as network performance.

This means that function, performance, and outside influences must all be accounted for in order to create the most accurate tests possible. Be sure to consider the following:

  • What network conditions are virtualized?
  • Are those conditions based on the real network?
  • Are you simulating multiple network conditions?
  • Are you representing distributed user groups?

The last point is crucial. Functional testing often overlooks the need to virtualize different constraints for different users.
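One way to represent distributed user groups is to parameterize the same test suite over per-group network profiles. The groups and numbers below are illustrative stand-ins, echoing the article's earlier examples:

```python
# Hypothetical per-group network profiles: different users hit the
# app under very different conditions, and each set of conditions
# should be virtualized and tested separately.
USER_GROUPS = {
    "Germany-3G":        {"bandwidth_kbps": 1_000,  "latency_ms": 150, "jitter_ms": 40},
    "Brazil-LTE":        {"bandwidth_kbps": 8_000,  "latency_ms": 70,  "jitter_ms": 25},
    "SanFrancisco-WiFi": {"bandwidth_kbps": 50_000, "latency_ms": 15,  "jitter_ms": 5},
}

def run_suite(test_fn):
    """Run one test under every virtualized user-group profile,
    rather than a single (usually too-fast) lab network."""
    return {group: test_fn(conditions)
            for group, conditions in USER_GROUPS.items()}

def smoke_test(conditions):
    # Placeholder check: a real harness would apply `conditions`
    # to a virtual network before exercising the app.
    return conditions["latency_ms"] + conditions["jitter_ms"] < 500

print(run_suite(smoke_test))
```

A failure that appears only under the "Germany-3G" profile is exactly the kind of result a single-location cloud test would never surface.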

Cloud-based mobile app testing should also be approached with caution. Keep in mind that functional testing in the cloud can only provide insight for a single location and can't represent an entire user base. Cloud-based tests also don't give an accurate picture of how the app will function on the networks real people use, since connections in the cloud tend to be much faster than home, office, or cellular networks.

What to do instead

So if in the wild testing, wardriving, static bandwidth testing, partial emulation, and sterile functional testing are to be avoided, how can you accurately test your app before launching it in the marketplace?

1. Do your homework

There's a lot of legwork to do before you even begin testing your app.

You first need to get intimately familiar with the various factors that impact functionality, performance, and user experience.

Research network conditions, infrastructure, user locations, and other environmental conditions that need to be taken into consideration once testing begins.

Having a thorough, three-dimensional understanding of how, when, and where your app will be used will help you create a virtual testing environment that accurately represents the real-world user experience.

2. Go virtual

Create virtual testing conditions that account for all the factors you discovered during the previous step.

It's especially important to create virtual network conditions that account for a wide variety of variables commonly experienced by users in real life.

The virtual networks you create should also be seamlessly integrated with functional testing tools in order to further enhance the authenticity and reliability of the tests performed.

3. Analyze and optimize

Next, it's time to analyze your results. Look for glitches, both in terms of function and performance, and consider workarounds for any errors that can be attributed to network malfunctions.

Finally, develop systems to optimize your app based on the analysis of the tests performed.

Do mobile app testing right

In order to ensure optimal performance of your mobile app, it's crucial to create a testing environment that accurately reflects real-world conditions.

  • Never launch an app without prior testing
  • Don't waste time and money on wardriving
  • Remember that real-world bandwidth tends to be variable
  • Consider how jitter may impact video streaming and overall app performance
  • Always test for performance as well as functionality
  • Thoroughly research environmental factors that impact your end users
  • Create three-dimensional, real-world testing environments
  • Analyze your test results and develop systems of ongoing optimization

By considering how your app functions and performs under real-world conditions, and by emulating all of those conditions with virtual testing, you'll be able to accurately and effectively predict how end users will experience your app once it's launched.

Topics: Dev & Test