
Which performance tests are realistic to shift left?

World Quality Report 2018-19: The State of QA and Testing

Agile sprints leave little if any time for testing beyond core functional tests. For end users, however, the responsiveness of an app is part of its expected behavior. As application client sides grow richer in content and functionality, you should rethink performance testing as a combination of single-user performance testing and traditional load testing.

With that approach, it is possible—and easy—to measure key performance indicators such as responsiveness of the user interface, as well as metrics such as CPU, memory, battery, network traffic, and other device vitals, to optimize your app. This pragmatic approach is highly efficient, since it fits well with smoke tests and nightly regression tests.

Recent industry and technology trends have increased coverage requirements for performance testing while presenting new opportunities to provide deeper and earlier insights into application behavior. Those trends include:

  • Digital transformation: Users have fully adopted the digital channels brands offer; they are a streamlined, always-on, efficient, and fun way to complete tasks or enjoy leisure activities. Users move between screens and experiences, across desktops, tablets, smartphones, and IoT devices, and from the brand's own app or website to messaging, chatbots, and so on.
  • Client-side richness: The efficient computing and sensor platforms available in smartphones create opportunities for compelling user flows that are less dependent on back-end functionality. Similarly, on the web, HTML5 makes new kinds of services possible.
  • Agile: In an accelerated release cycle, any test or measured metric that can identify a defect earlier is a blessing. Achieving this requires test automation and test stability.

Given those trends, what's realistic to "shift left" into, for example, your nightly regression test? Full-scale load testing is challenging, since most test environments aren't provisioned with the resources to handle that much load.

Is this all that's on performance testers' minds? Not at all. Consider the single-user performance test. The thick client (whether HTML5, native, or hybrid) has a significant impact on the end-user experience, and when environmental conditions are poor, that impact is even larger.



Figure 1: Environment conditions that affect user experience

The nice thing about single-user performance tests is that you can apply them to existing test cases. They are, in effect, another dimension of the same test. These test cases require a clear definition of the user flows, environmental conditions, and metrics on which you need to report. Environmental conditions may include:

  • Network profiles: Poor 4G, average LTE, or specific packet-level settings (loss, corruption, and so on).
  • Applications running in the background: These can create resource conflicts, or resources such as CPU and memory may be overutilized.
  • Location and movement: Many applications present different content and services based on location and movement.
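To make the network-profile condition concrete, here is a minimal Python sketch assuming a Selenium-driven Chrome session, which supports network emulation through ChromeDriver's `set_network_conditions()`. The profile names and numbers are illustrative placeholders, not measured values; you would substitute figures from your own analytics.

```python
# Illustrative network profiles for single-user performance runs.
# The numbers are assumptions, not measurements: latency in ms,
# throughput in bytes per second.
NETWORK_PROFILES = {
    "4g_poor": {
        "latency": 300,
        "download_throughput": 500 * 1024,
        "upload_throughput": 256 * 1024,
    },
    "lte_average": {
        "latency": 70,
        "download_throughput": 12 * 1024 * 1024,
        "upload_throughput": 6 * 1024 * 1024,
    },
}

def chrome_network_conditions(profile_name: str) -> dict:
    """Map a named profile onto the keyword arguments that ChromeDriver's
    set_network_conditions() accepts (an offline flag plus the profile)."""
    return {"offline": False, **NETWORK_PROFILES[profile_name]}

# With a live Selenium Chrome session this would be applied as:
#   driver.set_network_conditions(**chrome_network_conditions("4g_poor"))
```

Keeping profiles in one named table means the same test can be re-run under each condition by changing only the profile name.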

Often, testers don't know what values to set for those parameters. Fortunately, application analytics can tell them. In addition, marketing can describe the personas using the app: the business traveler, the working mom, the student, and so on.
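As a sketch of how analytics data might drive those parameters, the snippet below picks the most common network type and location from raw session records. The record shape (`network`/`location` keys) is an assumption; real analytics exports will differ.

```python
from collections import Counter

def dominant_conditions(sessions: list) -> dict:
    """Pick the most common network type and location from analytics
    session records, so test conditions mirror real usage.
    Each record is assumed to look like {"network": "...", "location": "..."}."""
    networks = Counter(s["network"] for s in sessions)
    locations = Counter(s["location"] for s in sessions)
    return {
        "network": networks.most_common(1)[0][0],
        "location": locations.most_common(1)[0][0],
    }

sessions = [
    {"network": "lte", "location": "urban"},
    {"network": "lte", "location": "commuter-train"},
    {"network": "4g",  "location": "urban"},
]
print(dominant_conditions(sessions))  # {'network': 'lte', 'location': 'urban'}
```

The same idea extends to device models, OS versions, or time-of-day patterns: test under the conditions most of your users actually experience.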

From a reporting and metrics perspective, testers commonly set strict key performance indicators for the responsiveness of the app in terms of time required to log in, view account balances, check out, and so on. The right way to do this is to time the appearance of the relevant visual element in the user interface. Other methods are less likely to reflect the actual user experience.
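A simplified sketch of that measurement: poll until a visibility check succeeds and report the elapsed time. In practice the callable would wrap a Selenium or Appium element-displayed lookup, and dedicated device labs measure appearance from actual screen frames rather than a polling loop; the helper name here is hypothetical.

```python
import time

def time_to_visible(is_visible, timeout=10.0, poll_interval=0.1):
    """Poll until is_visible() returns True and return the elapsed seconds.
    is_visible is any zero-argument callable; typically it would wrap a
    UI check such as a Selenium/Appium element-displayed lookup."""
    start = time.monotonic()
    deadline = start + timeout
    while time.monotonic() < deadline:
        if is_visible():
            return time.monotonic() - start
        time.sleep(poll_interval)
    raise TimeoutError(f"element not visible within {timeout}s")

# Example: simulate an element that appears after roughly 0.3 seconds.
appear_at = time.monotonic() + 0.3
elapsed = time_to_visible(lambda: time.monotonic() >= appear_at)
```

The measured value can then be compared against the KPI budget for that flow (log in, check out, and so on).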


Figure 2: App responsiveness has a great impact on user adoption.

In addition, testers commonly want to add reporting on device vitals during script execution, a log from the device and app, and an unencrypted log of the network traffic between the device or app and the service APIs to which it connects.


Figure 3: A waterfall chart based on a recorded HTTP Archive format file
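A waterfall chart like this can be built directly from a recorded HAR file. The sketch below reduces HAR entries to the rows such a chart plots; it assumes the standard HAR 1.2 layout (`log.entries`, each with `startedDateTime` in ISO 8601 and `time` in milliseconds).

```python
from datetime import datetime

def waterfall_rows(har: dict) -> list:
    """Reduce HAR entries to (url, start_offset_ms, total_ms) rows: the
    data behind a waterfall chart. Offsets are relative to the earliest
    request in the recording."""
    entries = har["log"]["entries"]
    starts = [
        # HAR timestamps are ISO 8601; normalize a trailing "Z" so
        # fromisoformat() accepts it on older Python versions.
        datetime.fromisoformat(e["startedDateTime"].replace("Z", "+00:00"))
        for e in entries
    ]
    t0 = min(starts)
    return [
        (e["request"]["url"], (s - t0).total_seconds() * 1000.0, e["time"])
        for e, s in zip(entries, starts)
    ]
```

Sorting those rows by start offset and drawing a bar per request reproduces the waterfall view, which makes slow or serialized requests easy to spot in a nightly report.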

Shifting left

One key objective should be to embed these test cases into the cycle so that developers gain early insight into performance degradations introduced by recent code commits. Luckily, single-user performance test cases are easy to add to the test suite. One approach is to use one of the personas mentioned earlier (the student, the business traveler, and so on) to define the conditions and the responsiveness measurement point. The right lab should provide the relevant logs and data points so the developer can act to optimize app behavior. Again, you don't need new test cases; a simple one-line addition to your existing ones is enough.


Figure 4: A persona (“Georgia” in this case) can define the conditions for the single-user test.


Shift left and watch app adoption rates rise

The practice of performance testing has evolved, with new screens, new approaches, and broader use of open-source tools. Performance engineers must reinvent themselves to stay relevant and continue to deliver value. By delivering performance indicators to developers early in the cycle, you can have a big impact on end-user adoption of your apps.

Would you like to know more about shifting performance testing left? Join me at the Performance Guild online performance testing conference for my live presentation on the subject. Registered attendees can also view my presentation and those of other presenters anytime after the event.
