

The Mobile Analytics Playbook: 4 proven ways to boost app testing

Julian Harty, PhD Candidate, The Open University

In this excerpt from their book, The Mobile Analytics Playbook: A Practical Guide to Better Testing, application testing experts Julian Harty and Antoine Aymer explain four time-tested techniques that mobile development teams can use to improve testing for their apps and, in turn, improve app quality.


We have already introduced various concepts and approaches to improve the testing of mobile apps. Through reviewing several hundred research papers and the state of the art in the industry, we have identified four well-accepted and proven ways to boost app testing. They are:

  1. Better testing
  2. Test automation and continuous automated testing
  3. Scaling testing
  4. Static analysis

We will cover these in turn.

1) Better testing

When testers apply better practices and techniques, they can test more effectively. Often the concepts seem too simple to work; using personas and heuristics, for instance, is neither complex nor complicated to try. Nonetheless, when people aren't aware of these techniques, or don't apply them, their testing can be mediocre. In one study, seven users testing 28 popular Android apps exercised only 30 percent of the screens and 6 percent of the code.

Also, when testing is limited to the lab, it lacks the richness, realism, and variety of how the apps are actually used. Furthermore, learning to understand the information available on the device, and the tools that access that information, will enable testers to mine these rich seams of data.

We want to break things before users do. One way to achieve this is to introduce volatility into the system and environment. We can embrace “disorder, randomness, and impermanence to make systems even better,” where the system is the mobile device, the network connection, and other services the app relies on. The same book makes two key points well worth considering when designing and performing our tests: “How to use continual experimentation and minor failures to make critical adjustments—and discover breakthroughs”, and “How an overreliance on measurement and automation can make systems fragile.” 
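
As a concrete illustration, here is a minimal sketch (our assumption, not from the book) of one way to inject such volatility: a small host-side Java program that randomly flips Wi-Fi on a connected Android device over adb. It relies on the adb shell's svc command, which may require elevated privileges on some devices.

    import java.util.Random;
    import java.util.concurrent.TimeUnit;

    // Hypothetical host-side helper: randomly toggles Wi-Fi on an attached
    // Android device while tests run, to inject network volatility.
    // Assumes adb is on the PATH and a single device is connected.
    public class NetworkChaos {
        public static void main(String[] args) throws Exception {
            Random random = new Random();
            for (int i = 0; i < 10; i++) {
                String state = random.nextBoolean() ? "enable" : "disable";
                new ProcessBuilder("adb", "shell", "svc", "wifi", state)
                        .inheritIO().start().waitFor();
                // Hold the new state for a random interval before flipping again.
                TimeUnit.SECONDS.sleep(5 + random.nextInt(25));
            }
            // Leave Wi-Fi on when the chaos run finishes.
            new ProcessBuilder("adb", "shell", "svc", "wifi", "enable")
                    .inheritIO().start().waitFor();
        }
    }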

TBS

Testing software actually consists of at least three primary activities: Testing, Bug investigation, and Setup (TBS). Time spent on setup and bug investigation effectively reduces the time available for the actual testing, so we want to increase the T and reduce the B and the S.

TBS is one aspect of session-based test management (SBTM); both are the work of Jon Bach, a well-recognized expert in software testing.

Recreating sufficient fidelity

Tests don’t necessarily reflect reality, particularly when testing mobile apps. Our environment, device, conditions, experience, test design, and many other factors affect the validity of the results in terms of whether the bugs would be relevant for end users and how completely we can capture problems that would affect these users.

Conversely, as we work to improve the fidelity of our tests and our testing, we risk over-investing time, effort, and money. Therefore, it's useful and important to find ways to test with sufficient fidelity to find flaws that are particularly relevant to various stakeholders, including the end users.

Improving the setup time and bug investigation can also indirectly improve the testing itself, and our ability to analyze and reproduce what happened.

Improving setup work

Installing apps and configuring devices can be burdensome and time-consuming. We can automate the creation and distribution of test releases of the app, for instance by using the open-source continuous integration tool Jenkins. Another open-source project, Spoon, focuses on making automated tests easy to distribute, observe, and run.
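
As a sketch of how such automation can be wired together, the following Java snippet triggers a Jenkins job over Jenkins's remote access API. The server URL, job name, and token are hypothetical, the job must have remote triggering enabled, and real setups typically also require authentication.

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal sketch: ask Jenkins to build the app's test job remotely.
    // "ci.example.com", "mobile-app-tests", and the token are placeholders.
    public class TriggerBuild {
        public static void main(String[] args) throws Exception {
            URL url = new URL(
                    "http://ci.example.com/job/mobile-app-tests/build?token=SECRET");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            // Jenkins answers 201 Created when the build has been queued.
            System.out.println("Jenkins responded: " + conn.getResponseCode());
            conn.disconnect();
        }
    }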

Some smart teams have also created small software utilities that enable them to change system settings such as locale, Wi-Fi, etc. Android’s open architecture enables these apps to be written, installed, and used more easily than for other mobile platforms that are more closed in terms of what third-party software is permitted to do.
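
A sketch of such a utility, assuming an Android app that declares the CHANGE_WIFI_STATE permission (setWifiEnabled() is deprecated on recent Android releases, so treat this as illustrating the technique rather than the current API):

    import android.content.Context;
    import android.net.wifi.WifiManager;

    // Illustrative on-device utility: flip Wi-Fi on or off programmatically.
    // Requires CHANGE_WIFI_STATE in the manifest.
    public class WifiToggler {
        public static void setWifi(Context context, boolean enabled) {
            WifiManager wifi = (WifiManager) context.getApplicationContext()
                    .getSystemService(Context.WIFI_SERVICE);
            wifi.setWifiEnabled(enabled);
        }
    }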

Improving bug investigation

Chasing bugs can be extremely frustrating and time-consuming. Also, critical information can be lost in the communication between the finder of the bug (which may be software) and whoever is trying to understand and possibly recreate the problem. Improving bug investigation reduces the latency and cost of making informed decisions about what to do about the bug.

There is plenty of information written to the central log on mobile devices. Gathering, filtering, and processing these logs enables them to be analyzed quickly, accurately, and reliably.
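
For example, a small host-side filter (a sketch of one simple approach) can dump the Android log with adb and keep only crash-related entries:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Minimal log-gathering sketch: dump the device log ("adb logcat -d")
    // and print only the lines related to app crashes.
    public class LogFilter {
        public static void main(String[] args) throws Exception {
            Process logcat = new ProcessBuilder("adb", "logcat", "-d").start();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(logcat.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // AndroidRuntime is the tag used for uncaught exceptions.
                    if (line.contains("AndroidRuntime")) {
                        System.out.println(line);
                    }
                }
            }
            logcat.waitFor();
        }
    }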

GUI screen recorders, cameras, screenshots, etc. provide useful information on the GUI aspects of an app. There is a helpful, practical article on using various camera and recording software for low-cost usability testing.

We may need to test on several particular devices to home in on specific bugs. In the confluence chapter, we will elaborate on effective ways to select a suitable set of devices to test on.

2) Test automation and continuous automated testing

Test automation is one of the most popular ways of trying to improve the testing of mobile apps, and there is a plethora of potentially suitable products and frameworks available. Once the automated tests exist, they can be run more frequently than human testers could practically achieve.

Also, they can be run when testers aren’t available, for instance when the app is updated overnight and the testers have finished work for the day.

Continuous automated testing runs the tests automatically whenever the source code for an app has been updated and compiled successfully. The automated tests provide consistent, lower-latency feedback to the developers, enabling them to investigate problems sooner than would be practical with interactive testing. Because some testing happens each time the source code compiles successfully, they also provide traceability and early warning of the failures they detect.

There are a couple of additional concepts worth understanding to use test automation more effectively. These include test automation interfaces and test monkeys.

Test automation interfaces

Regardless of the choice of test automation, that automation needs to interact somehow with the app it’s intended to test. There will be at least one test automation interface, possibly several. These may be officially and publicly supported, ad hoc, or a custom interface embedded in the app.

The choice of test automation interface can have a massive effect on the ease and effectiveness of the test automation. For instance, if the interface is informal and was reverse engineered by a testing team, then many changes by the app's developers may force emergency changes to the test automation scripts. Sometimes simple techniques, such as adding specified labels to key GUI elements, can significantly improve the reliability of the test automation as the underlying software changes, while also reducing the effort needed to maintain the existing automated tests.
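
For instance, in an Android test written with Espresso (one popular automation framework; our choice, not the book's), locating views by stable content descriptions means layout and wording changes don't break the script. The labels below are hypothetical ones the developers would add to the app's views.

    import static androidx.test.espresso.Espresso.onView;
    import static androidx.test.espresso.action.ViewActions.click;
    import static androidx.test.espresso.assertion.ViewAssertions.matches;
    import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
    import static androidx.test.espresso.matcher.ViewMatchers.withContentDescription;

    import org.junit.Test;

    // Sketch: find views by stable labels rather than position or text.
    // A real test class would also declare a test runner and activity rule.
    public class LoginTest {
        @Test
        public void loginShowsWelcomeBanner() {
            onView(withContentDescription("login_button")).perform(click());
            onView(withContentDescription("welcome_banner"))
                    .check(matches(isDisplayed()));
        }
    }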

Of the development teams who actively support automated tests, many include a private test automation API built into their mobile app. This API provides access to internal data and often includes commands to interact with the app.
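
The shape of such an API might look like the following sketch; the guarded class and its helpers are hypothetical, and the key design point is that the hooks are active in debug builds only.

    // Hypothetical private test automation API built into the app.
    // BuildConfig.DEBUG is Android's generated debug-build flag; UploadQueue
    // and CacheManager stand in for the app's real internal components.
    public final class TestHooks {
        private TestHooks() {}

        public static boolean isEnabled() {
            return BuildConfig.DEBUG;   // never active in release builds
        }

        // Expose internal state so automated tests can assert on it directly.
        public static int getPendingUploadCount() {
            return UploadQueue.getInstance().size();
        }

        // A command that puts the app into a known state for a test.
        public static void clearCachesForTest() {
            if (!isEnabled()) {
                throw new IllegalStateException("Test hooks are disabled");
            }
            CacheManager.getInstance().clearAll();
        }
    }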

More information is available in an extended article.

Test monkeys

Test monkeys are automated programs that can help test your software. Monkeys are available to test the GUI; for instance, Android's Monkey has been available since very early versions of Android and has helped find many bugs that shouldn't exist, but do.
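
Monkey is driven from a host machine over adb; for example (the package name is a placeholder), the following Java wrapper sends 500 pseudo-random events to one app, with a fixed seed so a failing run can be replayed:

    // Sketch: run Android's built-in Monkey against a single app.
    public class RunMonkey {
        public static void main(String[] args) throws Exception {
            new ProcessBuilder(
                    "adb", "shell", "monkey",
                    "-p", "com.example.app",  // restrict events to our app
                    "-s", "42",               // fixed seed for reproducibility
                    "-v",                     // verbose output
                    "500")                    // number of pseudo-random events
                    .inheritIO().start().waitFor();
        }
    }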

Microsoft Research, in particular, has extended the concept of using monkeys to test mobile apps by creating monkeys that generate various responses to web requests, helping expose flawed assumptions made by developers (that a web server will always respond without question, for example). By using these network monkeys we can quickly find such flawed assumptions and implementations so they can be fixed and the app made more robust and resilient. The developer may also be able to improve the user experience; for instance, by adding a GUI that lets users log in to websites that require authentication before providing content.
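
The idea can be approximated in our own apps; for example (our sketch, not Microsoft's tooling), an OkHttp interceptor that randomly fails roughly one request in five will quickly expose code that assumes the server always answers:

    import java.io.IOException;
    import java.util.Random;

    import okhttp3.Interceptor;
    import okhttp3.Response;

    // Sketch of a "network monkey": randomly inject connection failures.
    public class NetworkMonkeyInterceptor implements Interceptor {
        private final Random random = new Random();

        @Override
        public Response intercept(Chain chain) throws IOException {
            if (random.nextInt(5) == 0) {
                // Simulate a dropped connection or unreachable server.
                throw new IOException("Network monkey: simulated failure");
            }
            return chain.proceed(chain.request());
        }
    }

It would be registered via OkHttpClient.Builder's addInterceptor() method, and only in the client configuration used by test builds.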

3) Scaling testing

Scaling testing enables more testing to be done than we would be able to achieve ourselves. There are various approaches, including using remote devices, involving other people in the testing, and running tests on device farms, often in parallel.

Distributed testing

Testing does not have to be local to the development team; there are several approaches where it can be distributed. The first is to remotely access devices elsewhere in the world, often over a web-based connection, for instance connecting to a hosted device in another country. Such services may support interactive testing and/or remote execution of automated tests, depending on what the hosting platform provides. The second approach delegates the testing to remote people who test using phones they have available. Various crowdsourced testing services let organizations arrange and pay for remote testing performed by trusted testers who are not employed directly by the organization.

Device farms

Device farms were first launched around 2007 with Nokia's Remote Device Access service and a commercial offering from Mobile Complete. Various companies also had internal, private device farms. Since then there has been steady growth in device farms available for remote testing. These include services from Xamarin, Testdroid, and Sauce Labs, which shifted the focus from hands-on remote testing to running automated tests on more devices in parallel. In 2015 both Amazon and Google launched internet-based test farms that may help make these services less expensive and more mainstream. Amazon even offers a basic automated test option called fuzz (although it appears to be more of a test monkey service).

Microsoft uses farms of virtual devices to run vast numbers of fully automated exploratory tests for thousands of apps for their mobile platform. The virtual devices enable them to run these tests quickly and very inexpensively. Their tests seek generic bugs that affect the apps rather than bugs related to specific devices.

Device farms can help scale your tests and provide you with access to a wider range of devices than you may have available locally. We predict there will be further acquisitions and developments so device farms can offer more comprehensive, integrated automated testing.

4) Static analysis

Static analysis assesses designs and files rather than running or testing the code. It is a useful complement to all the other forms of testing and can catch problems at the source, rather than once the app has been released.

Design reviews are a static analysis technique and remain useful in finding flaws in mobile apps. Similarly, code reviews, performed by developers who understand the relevant mobile platforms, can catch many bugs before they reach the application’s codebase.

Traditionally, static analysis assesses source code. For mobile apps in particular, however, it is also used to assess generated code, which may need to be extracted and decrypted from the binary application code. The main focus seems to be malware detection, privacy, and other security-related aspects of the app: something to consider when there is low trust in the developers, external libraries, or the development process.

The mobile app may include third-party source code and/or libraries. Consider reviewing them, because they will become an inherent part of the app with the same rights and privileges as the rest of the app. We don’t want the third-party code to adversely affect the user experience or the qualities of the app. In October 2015 Apple removed several hundred apps that included a rogue third-party library that breached privacy and Apple policies.

Facebook provides a free static analysis tool called fbinfer that is available for iOS and Android. The platform development tools (known as SDKs) also include static analysis capabilities. These can often be run automatically after each code check-in to help detect potential flaws and provide feedback before the developer has moved on to something else.
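
To illustrate what these tools catch, here is the kind of defect (a contrived example of ours) that a static analyzer such as fbinfer reports without ever running the code: a path where a possibly-null value is dereferenced.

    // Contrived example of a defect static analysis flags at check-in time.
    public class GreetingBuilder {
        static String getUser() {
            // May legitimately return null, e.g., when no user is signed in.
            return System.getenv("APP_USER");
        }

        public static void main(String[] args) {
            String user = getUser();
            // Potential null dereference: user is never checked before use.
            System.out.println("Hello, " + user.toUpperCase());
        }
    }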

Limitations of these four proven ways

Each of these ways of improving testing helps in isolation, provided the results are used to actually amend and improve the app they help test. These ways can also be combined and, when done well, they help to complement and multiply the benefits.

However, even when projects apply static analysis techniques and tools, combine them with better interactive testing and brilliant test automation, scale the testing across people in various locations, and run the automated tests on globally distributed device farms, the testing will still miss some of what is relevant and useful. In particular, it might not assess how the app is used by the population of end users, or how users perceive the app. These gaps mean the app is at risk of failing for end users in ways we are not able to predict and, ultimately, of being rejected and abandoned by many of the users we want to keep. Thankfully, new techniques are emerging to help us fill these gaps by using analytics.

Read the rest of The Mobile Analytics Playbook to improve the quality, velocity, and efficiency of your mobile apps by learning how to use integrated mobile analytics and mobile testing.
