Mobile testing madness: How to stop the Web services insanity

Matthew Heusser Managing Consultant, Excelon Development

Mobile fragmentation means you can't write your app just once. To reach a broad market, you have to write separate versions for iOS and Android, for phones and tablets, and so on.

To avoid writing the whole application three or four times, developers can separate the user interface from the back end and expose the back end as Web services. Once that's done, a little user interface code is all it takes to "rewrite" the application for a watch, the desktop, or whatever else you care about.

Of course, you still have to test each version of your app, and that's a problem. Testing through the user interface is slow and expensive.

It doesn't have to be. In some cases you can skip testing the user interface entirely and test only the business logic. To show how you can simplify testing, this article will:

  • Describe web services in detail
  • Show how to call and debug web services, even on native mobile applications
  • Explain how to create tools to check services for regression

What are web services, exactly?

Let's start with a working example from my former employer Socialtext, a provider of enterprise social software. Try dropping this URL into your browser:

This query returns a list of all pages on the help workspace that contain the word "page." The text is encoded in a format called JavaScript Object Notation (JSON), so it can be read and displayed by the user interface code. If you want to see something else, change "accept=json" at the end of the URL to "accept=html" or "accept=text."
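Once the JSON comes back, the user interface code only has to parse it. Here's a minimal sketch in Python with a made-up payload; the real Socialtext response has a different shape:

```python
import json

# Hypothetical search response; the real Socialtext payload is structured differently.
raw = '{"hits": [{"page_title": "Page basics"}, {"page_title": "Tagging a page"}]}'

data = json.loads(raw)
titles = [hit["page_title"] for hit in data["hits"]]
print(titles)
```

Whatever the front end displays, it starts with a parse step like this, which is why "accept=json" matters to the interface code while "accept=html" is handier for a human.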

You'll notice that the command runs through a browser, just like a web page. The link above runs over port 80, just like any other web page. If the programmers on your team use services, you can change the values and refresh to explore.

But you want more than that.

Curl is a command-line utility that fetches a URL and writes the response to standard output. Curl ships with macOS and most Linux distributions, or you can download curl for Windows for free. Once installed, the command is as simple as:

curl -s -H "Accept: application/json"

After we've called curl once, it's easy enough to write a little script to read in a set of URLs and then check the response against your expected results. I did it and checked it into GitHub in about an hour. There are plenty of tools to do this, which we can talk about in a bit.
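The script can be as simple as a loop that fetches each URL and compares the parsed JSON to a stored expectation. Here's a sketch in Python; the comparison helper skips volatile keys (timestamps and the like), and the URL list and expected values are whatever you collect for your own services:

```python
import json
import urllib.request

def compare(expected, actual, ignore_keys=()):
    """Recursively compare parsed JSON, skipping volatile keys (timestamps, IDs)."""
    if isinstance(expected, dict) and isinstance(actual, dict):
        keys = set(expected) | set(actual)
        return all(
            k in ignore_keys or compare(expected.get(k), actual.get(k), ignore_keys)
            for k in keys
        )
    if isinstance(expected, list) and isinstance(actual, list):
        return len(expected) == len(actual) and all(
            compare(e, a, ignore_keys) for e, a in zip(expected, actual)
        )
    return expected == actual

def run_checks(cases, ignore_keys=("timestamp",)):
    """cases: list of (url, expected_json) pairs; returns the URLs that failed."""
    failures = []
    for url, expected in cases:
        try:
            with urllib.request.urlopen(url) as resp:
                actual = json.load(resp)
        except OSError:
            failures.append(url)
            continue
        if not compare(expected, actual, ignore_keys):
            failures.append(url)
    return failures
```

The ignore list is the important design choice: without it, every check that touches a timestamp or auto-generated ID fails on every run.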

The examples above are simple because they don't deal with authentication. Curl won't work if the website sits behind a login wall unless you supply credentials. It also gives you only the result of the web service call; it doesn't tell you whether the front-end code called the service incorrectly.

So let's talk about authentication and debugging.

Authentication and debugging

Let's try this from the command line:

curl -s -H "Accept: application/json"

Ideally this returns a JSON blob of people search results, but the results should vary depending on who is logged in; you should see results only for users in your own account!

From the command line, you'll see "The person you are trying to view is not visible or does not exist." From a browser, an authentication dialogue pops up.

It is possible to do basic authentication in curl, but the website uses a secure connection (HTTPS), so the traffic must be encrypted. You can download the certificates and use HTTPS with curl, but that's a lot of work. (Python's requests library can do a great deal of that work for you.)
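With Python's standard library you can build the Basic auth header yourself and let urllib handle the TLS handshake; requests makes this even shorter. The URL and credentials below are placeholders, not real endpoints:

```python
import base64
import json
import urllib.request

def auth_header(user, password):
    """Build the Basic auth header value curl would send with -u user:password."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def fetch_json(url, user, password):
    """GET a JSON endpoint behind basic auth; urllib verifies the TLS certificate."""
    req = urllib.request.Request(url, headers={
        "Accept": "application/json",
        "Authorization": auth_header(user, password),
    })
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Usage (hypothetical endpoint):
# results = fetch_json("https://example.com/data/people?filter=matthew",
#                      "my-user", "my-password")
```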

The point here is to create a secure way, outside of the mobile device, to call the URLs that our mobile application will be calling. That way, when a search for "matthew" fails on the phone, we can try the same URL by hand. If the results are garbled, we know the problem is with the REST endpoint; if the results are correct, the problem is in the front end.

On a desktop browser, we can use the developer tools to see the network traffic, what URL was called, and the results. For example, here is the developer view of a page search for "test":

Developer tools

It turns out there are tools to hook your Android or iOS device up to your laptop and sync Chrome or Safari to your mobile device, so you can debug on the laptop. That works fine for mobile web pages, but what about native mobile applications?

There are a handful of ways to find out what URLs a native application calls:

  • First and easiest, have your developers log all the web service calls they make on the device, and get access to those logs.
  • Second, get access to the server logs where the URLs are called, ideally in a big data search tool that allows you to search by user and time.
  • If both of those fail, you might be able to configure your laptop as a wireless hotspot, connect the mobile device to it, and look at your own logs.

In addition to finding out what APIs your user interface calls, you might want to send bad REST data back to your device, to simulate timeouts, errors, and corrupt results. One way to do this is to have the endpoint's base URL set by a configuration file. Point the configuration file at a service virtualization tool, then set up the tool to return "bad" results for specific calls.
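A minimal service virtualization tool can be a small HTTP stub that returns canned "bad" responses. This sketch uses Python's standard library; the paths and fault payloads are invented for illustration. Point the app's base-URL configuration at the machine running the stub:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned "bad" responses, keyed by path, to simulate server-side failures.
FAULTS = {
    "/data/people": (500, {"error": "internal server error"}),
    "/data/search": (200, {"results": None}),  # well-formed HTTP, corrupt payload
}

class FaultHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        path = self.path.split("?")[0]  # ignore the query string when matching
        status, body = FAULTS.get(path, (404, {"error": "unknown path"}))
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the console quiet

# To run it on your laptop, e.g. on port 8080:
# HTTPServer(("", 8080), FaultHandler).serve_forever()
```

Now a search on the device hits your stub instead of production, and you can watch how the front end behaves when the back end misbehaves.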

At this point, we can test our REST API on the server side, see what the mobile application is calling, and send back simulated results.

Now let's talk about doing it all the time, automatically.

Automation and patterns

The examples above are a bit abstract, so let's use some real examples: login, search, tag a user, and tag a page.

The "correct" results depend on three things: the user, the existing state of the application, and the actual command.

Different users could have different results. For most services, we'll log in once, get an authentication token, and run the entire test case as the same user. For login, we'll have a table of login attempts and expected results, some of which will be "200 OK" and others of which will be security errors.
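That table of login attempts is just data the checker loops over. Here's a toy sketch, with a pure stand-in function in place of the real login endpoint and made-up accounts:

```python
# Table of login attempts and the HTTP status each should produce (made-up data).
LOGIN_CASES = [
    ("alice", "right-password", 200),
    ("alice", "wrong-password", 401),
    ("", "", 400),
]

def classify_login(user, password, valid_accounts):
    """Toy stand-in for a login endpoint: return the status we expect it to send."""
    if not user or not password:
        return 400
    return 200 if valid_accounts.get(user) == password else 401

def run_login_checks(valid_accounts):
    """Run every row in the table; return (user, expected, actual) for failures."""
    failures = []
    for user, password, expected in LOGIN_CASES:
        actual = classify_login(user, password, valid_accounts)
        if actual != expected:
            failures.append((user, expected, actual))
    return failures
```

In a real suite, `classify_login` would be replaced by an HTTP call; the table-driven shape stays the same.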

When I was at Socialtext, we had a database for our test cases. The test cases were designed to start with a fresh database. If a test case died mid-run or corrupted the database, we could call a cleanup script that deleted everything that test case had created from the database. The script could also run automatically when we ran a suite and one of the test cases failed. That way, if there was some corruption, the whole run wouldn't error. The test cases were designed to run independently and consistently, so you could get a list of failing scenarios and rerun just those.
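The cleanup idea can be expressed as a small runner that deletes whatever a case created, even when the case fails mid-run. This is a toy sketch of the pattern, not Socialtext's actual script:

```python
class FakeDB:
    """Stand-in for the test database, just enough to show the pattern."""
    def __init__(self):
        self.records = set()
    def insert(self, rec):
        self.records.add(rec)
    def delete(self, rec):
        self.records.discard(rec)

def run_case(setup, check, db):
    """Run one case, then delete whatever it created - even if the check raised -
    so every case starts from a clean database."""
    created = []
    try:
        for rec in setup():
            db.insert(rec)
            created.append(rec)
        return check(db)
    finally:
        for rec in created:
            db.delete(rec)  # the cleanup step: remove only what this case made
```

Because each case cleans up after itself, cases stay independent, and you can rerun just the failing ones.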

The basic pattern is:

  • Check out code
  • Perform build
  • Create virtual server
  • Read in the automated checks
  • Loop through them
  • Send the results somewhere that matters, probably the continuous integration system
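Wired together, the last three steps might look like this sketch. `call_service` is whatever fetch function you already have; the output is a plain structure a CI job can print or post:

```python
def run_suite(checks, call_service):
    """checks: list of {"url": ..., "expected": ...} dicts.
    call_service(url) fetches and parses one response."""
    results = []
    for check in checks:
        try:
            actual = call_service(check["url"])
            passed = actual == check["expected"]
        except Exception as exc:
            passed = False  # a dead endpoint is a failure, not a crash of the suite
        results.append({"url": check["url"], "passed": passed})
    return results

def summarize(results):
    """Collapse results into the pass/fail summary a CI system wants."""
    failed = [r["url"] for r in results if not r["passed"]]
    return {"total": len(results), "failed": failed}
```

The CI system only needs the summary; the per-URL results are what you rerun after a fix.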

That is a lot to build, but it allows the user interface testing to focus on the UI, not on validating the back-end business logic.

If you can do that, then testing a mobile build might take hours or minutes, not days or weeks. More than that, you can catch bugs fast and have the tools to isolate the problem to front or back end.

So mobile testing might be mad, but it doesn't have to be.
