Should you write automated UI tests? 4 questions to answer first

Rouan Wilsenach, Software Engineer and Senior Technical Leader, Independent Consultant

Testing (and test automation) at the user interface (UI) level is a long-standing point of controversy in the testing community. UI tests use your application as a user would. At their best, they are the single most comprehensive way of testing your application. At their worst, they cause developers to spend hours every week maintaining tests that never actually stop any bugs from reaching production.

Not sure whether your next feature needs a UI test? Here’s a simple set of questions to help you decide.

1. What happens when the feature breaks?

If the goal of testing is to stop you from breaking your application, then it follows that you should consider the consequences of your new feature breaking.

The best way I’ve found to gauge this is by asking yourself, “What would we lose if it stopped working?” In other words, “What is the impact on the bottom line?”

If you’re running an online shopping site, for example, the impact of the checkout or search functionality breaking would be massive. If no one can find or purchase products, you’re going to lose money. But while it might annoy certain users if the “Wishlist” feature breaks, it’s unlikely to have an immediate impact on your profits. And the page showing the physical address of an online shopping company could probably be broken for months without anyone noticing.

If the feature isn’t critical, there’s no need for a UI test. If it has a high impact on your goals, then you definitely need a way to find out whether it’s broken, but it doesn’t necessarily need to be a test.

2. How quickly would you need to fix it?

Let’s say the feature in question is the FAQ page, which new users visit to learn about your delivery options and charges. If it remained broken for a few weeks, the lack of information would likely lead to a gradual drop-off in conversions from new users. If, however, it were fixed within a day, it would be unlikely to have any impact on the bottom line.

Now ask yourself, “How else can we find out when this breaks?”

Option A: From your users

To some organizations, this option seems preposterous. A bank, for example, would suffer real damage to its reputation if it relied on its customers pointing out issues with its critical systems. But not every company is like a bank. At a conference a couple of years ago, I listened to Kent Beck talk about how Facebook relies on alpha users (including its own employees who use the site at work) to notice issues. If you build a good relationship with your users, this kind of canary release strategy can be a great way of getting feedback early—and not just on bugs.

The downside is that waiting to hear from people can take hours or days.

Option B: Production monitoring

If you need to know within a matter of minutes that something has broken, you need an automated solution. With the right combination of production QA practices, you can set up a system of automated alerts that will let you know the moment your feature breaks or something unexpected happens.

A team with good monitoring will be able to tell within minutes that a user has experienced an error. It can then investigate the error and, if it has a reliable deployment pipeline, release a fix or workaround into production within a few hours.
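
To make that concrete, here is a minimal synthetic-monitoring sketch in TypeScript (it assumes Node 18+ for the global fetch API). It exercises a feature the way a scheduled health check would and posts an alert when something looks wrong. The URLs and the chat webhook are hypothetical placeholders, not any particular product’s API:

```typescript
// Minimal synthetic check. Assumes Node 18+ for the global fetch API.
// CHECKOUT_URL and ALERT_WEBHOOK are hypothetical placeholders.
const CHECKOUT_URL = "https://shop.example.com/api/checkout/health";
const ALERT_WEBHOOK = "https://chat.example.com/hooks/on-call";

async function alertTeam(message: string): Promise<void> {
  // Post to a chat webhook so the team hears about the failure in minutes.
  await fetch(ALERT_WEBHOOK, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: message }),
  });
}

async function checkFeature(): Promise<void> {
  try {
    const response = await fetch(CHECKOUT_URL, {
      signal: AbortSignal.timeout(5_000), // treat a slow response as broken
    });
    if (!response.ok) {
      await alertTeam(`Checkout health check returned ${response.status}`);
    }
  } catch (error) {
    await alertTeam(`Checkout health check failed: ${error}`);
  }
}

// Run this on a schedule (cron, a CI job, or a monitoring service),
// not just once.
checkFeature();
```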

If you can’t afford to wait a few hours to fix an issue, you’re going to want to do everything you can to stop a bug from reaching production in the first place. (Of course, bugs do still happen, so I’d advise you to put some production monitoring in place all the same.)

If this is the case, you’ve discovered you’re going to need a test, but it might not need to be a UI test.

3. Are there more reliable ways to test this?

UI tests have a well-known shortcoming: they break easily. This happens in one of two ways. Either the structure of the UI changes and the test breaks, or the feature under test is fine but the test fails anyway for an unrelated reason, such as a timing issue or a browser quirk.

Some people will be tempted to replace UI tests with unit tests, but I’ve found this to be a bad idea in practice because, while unit tests can be helpful, they exist in a bubble. By definition, they don’t test that the parts of your application work together to produce a working feature.

The most reliable alternative I’ve found is testing at the API level (sometimes called “acceptance” or “system” tests). Instead of spinning up a browser or device emulator to use your feature, you write a test that will call your API directly. This way you get the advantage of testing through the majority of your technology stack but avoid having to deal with the quirks of browsers and devices.
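
As a rough sketch of what an API-level test can look like, here is a Jest-style test in TypeScript (describe, it, and expect are Jest globals; fetch assumes Node 18+). The /api/products endpoint, its query parameter, and the response shape are all assumptions for illustration:

```typescript
// Exercises the search feature through the API, with no browser involved.
// Assumes the application is already running at API_BASE.
const API_BASE = process.env.API_BASE ?? "http://localhost:3000";

describe("product search", () => {
  it("returns matching products for a search term", async () => {
    const response = await fetch(`${API_BASE}/api/products?q=socks`);
    expect(response.status).toBe(200);

    // Everything below the UI -- routing, business logic, the database --
    // has to work together for these assertions to pass.
    const body = await response.json();
    expect(body.products.length).toBeGreaterThan(0);
    expect(body.products[0].name.toLowerCase()).toContain("socks");
  });
});
```

When a test like this passes, you know the whole stack beneath the UI is wired together correctly, and it will keep passing through markup changes that would break a UI test.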

If you find yourself thinking that your feature isn’t testable at the API level, it’s probably a sign that your architecture is too complex. In my experience, writing acceptance tests is a helpful nudge in the direction of having simpler applications with better-designed APIs.

Here’s one example. Single-page applications (SPAs) are popular at the moment because libraries such as React and Angular make it possible to create rich experiences in the browser. The more code you have that works only in a browser, however, the more code you have that requires a browser for testing.

Instead of using client-side routing to navigate through a large SPA, use server-side routing to render different pages. Each of these pages can then contain a smaller SPA to provide the rich experience your users need. You can then use server-side rendering (available in both React and Angular) to ensure that the basic content is visible at the API level. You can test the result of a simple HTTP call and check to confirm that crucial content is returned as expected. This approach will not only lead to more reliable tests, but will also make your application more accessible and help you build progressive web apps.
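
Here is what such a test might look like, continuing the Jest-style sketch above; the route and the expected strings are hypothetical:

```typescript
// Verifies that crucial content is server-rendered into the raw HTML.
// No browser and no JavaScript execution are needed.
it("renders the crucial product content on the server", async () => {
  const response = await fetch("http://localhost:3000/products/42");
  expect(response.status).toBe(200);

  const html = await response.text();
  expect(html).toContain("Woollen socks");
  expect(html).toContain("Add to basket");
});
```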

If you find that there is no other way to ensure that your feature works, you’ll need to make the best of a bad situation.

4. Can you make UI testing less painful?

There are many techniques you can use to try to alleviate some of the discomfort associated with UI testing. Here are a few I’ve found helpful:

  • Most CI servers have plugins that can point out problematic tests, i.e., flaky tests that often fail even though the feature still works.
  • Make your tests run somewhere your team can see them. I’ve found it very helpful to put screens around the workplace that show the UI tests running, because people notice when something odd is happening.
  • Run only the tests that count, using Test Impact Analysis.
  • Try different tools to see which one is the most reliable. There are many automation frameworks, each with its own pros and cons. A good tool for object identification can help.
  • Treat your test code like it’s production code so it’s easier to work with (see the sketch after this list).
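
On that last point, one widely used way to keep UI test code maintainable is the page object pattern: wrap the structure of each page behind a small class so that selectors live in one place. Here is a sketch using Playwright; the page URL, labels, and messages are hypothetical:

```typescript
import { expect, type Page } from "@playwright/test";

// A page object hides the page's markup behind a meaningful API, so when
// the UI's structure changes, only this class needs updating, not every test.
export class CheckoutPage {
  constructor(private readonly page: Page) {}

  async goto(): Promise<void> {
    await this.page.goto("/checkout");
  }

  async payWithCard(cardNumber: string): Promise<void> {
    await this.page.getByLabel("Card number").fill(cardNumber);
    await this.page.getByRole("button", { name: "Pay now" }).click();
  }

  async expectOrderConfirmed(): Promise<void> {
    await expect(this.page.getByText("Thanks for your order")).toBeVisible();
  }
}
```

A test built on this class reads as a sequence of user intentions (goto, payWithCard, expectOrderConfirmed) rather than a list of selectors.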

Learn more

Hopefully, after asking yourself these four questions, you now have a good idea of whether you should write automated tests for any parts of your UI. There are many more viewpoints than mine out there, so it’s always a good idea to read other experts’ take on the issue as well.

If you need some general advice on which types of tests (not just UI) you should be automating, head over to TechBeacon Learn and read the units on which tests you should automate and which mobile tests you should automate.
