
How formal mobile testing boosts your chances of app success

Erik Sherman Journalist, Independent

When it comes to app success, quality should be job number one. Users have no tolerance for slow, buggy apps, and negative reviews can kill mobile apps.

And yet bad quality is rampant. Even household names like Applebee's and Rite Aid have buggy apps that alienate customers.

Addressing the mobile quality problem has been expensive. According to a study by Capgemini and HP, spending on QA and testing has risen from 18 percent of total IT budgets in 2012 to 35 percent this year. Should the trend continue, it would hit 40 percent by 2018. Companies can't afford to spend more.

What is necessary is smart QA and testing. When it comes to mobile, that means developing an intelligent formal testing approach that:

  • Works with agile development
  • Supports the fast turnaround on new app versions
  • Handles the plethora of platform, OS, installed-app, and network interactions
  • Solves problems efficiently and effectively

Here are some steps to institute the testing you need.

Survey the landscape

Testing apps without considering users and context is like trying to build a house without a blueprint. You don't know what people will want.

"Where testing falls short is in the discovery phase of understanding the persona of the users that you're going to be building this app for," said Ali Manouchehri, CEO at MetroStar Systems, which develops mobile applications for federal agencies and commercial businesses.

Understanding users includes knowing what they might expect in function and performance from an app. By surveying the user landscape, you can also learn what types of devices they have and how they use them. Can developers count on a fast Internet connection? Relatively current versions of hardware and operating systems?

And then there are culture and politics. "With mobile applications that are supposed to be used globally, you might create an application specifically for China or Korea," said MK Tong, general manager of Beyondsoft. "Because of the cultural differences, they have differences in expectations."

User interface design and data presentation need to consider complex political issues. Do you call Taiwan a country? A region? It depends on whether the user is on mainland China or Taiwan.

You also need to consider other apps in the same space. "How do we benchmark against competition?" said Sudheer Mohan, director and practice head of the mobility and cloud quality engineering practice at outsourcer Wipro. If you don't know how other products perform, you won't know if someone has set a high bar that must be cleared.

Although such issues may seem like design issues, they are also part of testing.

QA becomes the referee

A subtle but important concept is that of the referee. What you want mobile testing to determine is not simply whether an app performs as it was designed to do, but whether the performance and design honor the original intent.

Manouchehri calls the QA staff the line judges. "The product owner is a judge," he said. "But who's going to come to the judge and say if the foot was in bounds or did the player step outside?"

The product owner is a judge. But QA staff are the referees.

— Ali Manouchehri, CEO at MetroStar Systems

Stepping outside the line means the app doesn't meet critical metrics. "Our metric here is this app needs to load in five seconds or less, and the animation the designers want to put in there pushes it to six seconds," Manouchehri said. "The data needs to load up in three seconds, and now it loads in five seconds." The QA person becomes a referee for the compromises between design and development.
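The metrics Manouchehri describes can be encoded directly as automated checks, so the referee call is made by the test suite rather than by argument. The sketch below uses his load-time numbers as the budgets; the measured values are hypothetical stand-ins for figures a real test harness would record.

```python
# Performance budgets as automated checks. Thresholds come from the
# article's examples; measurements are illustrative assumptions.

PERFORMANCE_BUDGETS = {
    "app_launch_seconds": 5.0,  # "this app needs to load in five seconds or less"
    "data_load_seconds": 3.0,   # "the data needs to load up in three seconds"
}

def check_budgets(measurements: dict) -> list:
    """Return (metric, measured, budget) for every budget exceeded."""
    violations = []
    for metric, budget in PERFORMANCE_BUDGETS.items():
        measured = measurements.get(metric)
        if measured is not None and measured > budget:
            violations.append((metric, measured, budget))
    return violations

# A build where the designers' animation pushed launch to six seconds:
violations = check_budgets({"app_launch_seconds": 6.0, "data_load_seconds": 2.4})
for metric, measured, budget in violations:
    print(f"{metric}: {measured}s exceeds budget of {budget}s")
```

A check like this turns the design-versus-development compromise into an objective pass/fail signal that both sides can see.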

Integrate the team

According to a number of experts, one of the most important steps in building a testing practice is to integrate testers and QA into the full development and design team. "QA should be involved at the very beginning of any life-cycle process," said Domenic Sorace, QA manager at digital marketing agency Carrot Creative. He prefers that his staff be part of the kickoff, UI/UX, wireframe, and requirements meetings.

QA should be involved at the very beginning of any life-cycle process.

— Domenic Sorace, QA manager at digital marketing agency Carrot Creative

The latter is where his company matches initial requirements to the user interface and sees how the app functions. "That's where they can start to form different user stories, different test cases," said Sorace. "After the requirements meeting, we have the design meeting, and then things go straight into development."

Even during development, QA keeps in close contact with the developers. Not only do testers get a better sense of what they need to do, but they can start much earlier in the process, making the testing work more efficiently.


Automate the testing

If you are trying to handle testing through manual means, you should stop now. There are thousands of potential combinations of device, operating system version, carrier, network conditions, app mix loaded onto a device, and software running in the background. No organization can manage that complexity with manual processes.
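The arithmetic behind that claim is simple multiplication. The category counts below are illustrative assumptions, not survey data, but even modest figures multiply into a matrix far beyond manual reach.

```python
# Back-of-the-envelope arithmetic for the combinatorial explosion.
# Every factor count here is an invented, conservative example.
from math import prod

factors = {
    "device models": 60,
    "OS versions": 8,
    "carriers": 10,
    "network conditions": 4,   # e.g., Wi-Fi, LTE, 3G, offline
    "background-app mixes": 5,
}

total = prod(factors.values())
print(f"{total:,} combinations")  # 96,000 with these assumed counts
```

At one manual test per combination, even a minute apiece would consume months of tester time per release.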

Journyx makes server-based software for tracking time and expenses, with mobile apps that act as data-entry front ends. After the company brought mobile development in-house from third parties, it became clear that achieving strong test coverage was tough. "We had to squeeze [testing] in," said John Maddalozzo, vice president of engineering, because of all the other development responsibilities engineering had. "That was obviously not scalable."

We had to squeeze testing in. That was obviously not scalable.

— John Maddalozzo, vice president of engineering, Journyx

Journyx has started to work with automated testing using a third-party cloud service that makes available for testing a wide variety of devices and software configurations. "It's hundreds of devices hooked up so you can write automated test cases," Maddalozzo said.

But automation has to cover more than functional issues. User experience and interfaces, vital to app success, can behave differently when use configurations change.

Companies need video and frame shots of the results to round out the testing. "You can play back the same script on all the devices and get your video and frame shots and do your analysis afterward," said Petr Kartashov, head of QA practice for software consulting firm EPAM Systems.

Get your video and frame shots and do your analysis afterward.

— Petr Kartashov, head of QA practice, EPAM Systems
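The replay pattern Kartashov describes looks roughly like the sketch below: one scripted scenario runs unchanged on every device configuration, and each run leaves video and frame artifacts for later review. The device list and the `run_scenario` function are hypothetical stand-ins for a real device-cloud SDK call.

```python
# Sketch of one-script, many-devices replay with artifact collection.
# DEVICES and run_scenario are illustrative stubs, not a real API.

DEVICES = ["pixel6-android13", "galaxys10-android11", "iphone12-ios16"]

def run_scenario(device: str, scenario: str) -> dict:
    """Stand-in for a device-cloud call that replays one scripted
    scenario on the named device and records review artifacts."""
    return {
        "device": device,
        "video": f"artifacts/{scenario}/{device}.mp4",
        "frames": f"artifacts/{scenario}/{device}/frames/",
    }

# Same script, every device; artifacts keyed by device for review.
results = [run_scenario(d, "login_flow") for d in DEVICES]
for r in results:
    print(r["device"], "->", r["video"])
```

The key property is that the scenario script never changes per device, so any visual difference in the captured output points at the configuration, not the test.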

Budget time for testing

Video review, important as it is, poses another complication: someone must actually watch the footage, which potentially means more time and people. The simpler you keep the app, the less video per device there is to review.

The demands of proper testing can seem overwhelming, but budget the time into development and release schedules anyway. That may seem impossible, especially with short schedules for incremental releases. And yet the risks of inadequate testing are unacceptably high from a strategic business view.

Use theory for better coverage

Proper use of testing theory can help reduce the number of test cases and the time necessary to run them. Rogan Creswick, a research lead at R&D organization Galois Inc., is an advocate for incorporating static analysis to characterize the behavior of code without actually running it.

"It doesn't tell you how that application will behave in different environments," Creswick said. False positives and negatives, particularly the latter, can be problems as well. "You're invariably going to get a checklist of things you need to do, and if some of the things on that checklist are wrong, that's not good for the developer," he said. However, removing the real problems that static analysis does find before unit testing begins saves time.

Finding problems with static analysis saves time during unit testing.

— Rogan Creswick, research lead, Galois Inc.
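To make the idea concrete, here is a toy static check in the spirit Creswick describes: it inspects source code without ever running it. This sketch flags mutable default arguments, a classic Python bug; real analyzers cover vastly more, and the example function is invented for illustration.

```python
# A toy static-analysis pass: find mutable default arguments by
# walking the AST of the source, without executing the code.
import ast

SOURCE = """
def add_item(item, bucket=[]):   # shared default list: a latent bug
    bucket.append(item)
    return bucket
"""

def find_mutable_defaults(source: str) -> list:
    """Return (function_name, line_number) for each mutable default."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append((node.name, node.lineno))
    return findings

print(find_mutable_defaults(SOURCE))  # [('add_item', 2)]
```

Catching a defect like this before any unit test runs is exactly the time savings Creswick points to, while the check's inability to see runtime environments illustrates his caveat.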

At the other end of the spectrum, Chinthi Weerasinghe, vice president and global head of the QA practice at IT consulting firm Virtusa, emphasizes optimizing test cases and coverage. "You need a smart way to maximize the coverage," she said.

You need a smart way to maximize coverage.

— Chinthi Weerasinghe, vice president and global head of QA practice, Virtusa

An example is pairwise testing. "It's a scientific way of selecting the optimal scenarios. It's not possible to get 100 percent coverage with 100 percent accuracy. It's our budget and our time." The more effectively you can create test cases, the more control you have over budget and time.
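A minimal greedy pairwise generator shows how the selection works: instead of running every combination, it keeps only enough test cases that every pair of factor values appears at least once. The factors and values below are invented examples; production teams would typically use a dedicated tool rather than this sketch.

```python
# Greedy all-pairs (pairwise) test-case selection over assumed factors.
from itertools import combinations, product

factors = {
    "os": ["Android", "iOS"],
    "network": ["Wi-Fi", "LTE", "3G"],
    "locale": ["en", "zh", "ko"],
}

names = list(factors)
all_combos = [dict(zip(names, vals)) for vals in product(*factors.values())]

def pairs(combo):
    """Every (factor, value) pair of settings exercised by one case."""
    return set(combinations(sorted(combo.items()), 2))

# Greedily pick the case covering the most still-uncovered pairs.
uncovered = set().union(*(pairs(c) for c in all_combos))
suite = []
while uncovered:
    best = max(all_combos, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"{len(all_combos)} exhaustive cases -> {len(suite)} pairwise cases")
```

Every two-way interaction is still exercised, but with roughly half the cases of the full cross product here, and the savings grow steeply as factors are added.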

Close the loop

Testing isn't over when the last bug is fixed in-house. Companies too often release mobile apps to the general public in hopes of gathering feedback, which can turn into a marketing disaster. That need is better served by beta testing, a step too many skip because they assume that releasing an app means releasing it to everyone.

"The challenge that developers face in the mobile industry is, thanks to consumer apps being wildly popular, end users have extremely high expectations around app usability," said Mark Lorion, chief marketing and product officer of app deployment and management vendor Apperian. "You've got to find a way to collect widespread feedback on the usability of the app."

You need to collect widespread feedback on usability.

— Mark Lorion, chief marketing and product officer, Apperian

That will mean setting up private app stores or using something like Apple's TestFlight facility for iOS. "You've got to reinvent the way you put it out there and get feedback. If you rely only on employees to test, they'll probably test it during the day on a fast Wi-Fi connection with good bandwidth and a charged battery."

Finally, when the app is out, monitoring user feedback to help drive future testing, bug fixes, and design closes the loop. Then you're ready to start all over again for the next version.
