How to push testing into development with real user monitoring

Shane Evans, Senior Product Manager, HP LoadRunner, HP

In continuous integration (CI) environments, it makes sense to push testing into development. But how should these tests be configured? Two common approaches are to run the easiest and most obvious tests, or to predict and test against expected user flows.

If you go for the easy option, you are likely to miss important flaws. Sure, short design cycles mean you can fix issues quickly, but this isn't good enough. Create a bad user experience, even briefly, and you risk long-term damage to your reputation and revenue.

The alternative is to work with business analysts to predict user behavior, and test these flows. The problem is that business analysts are often wrong. (I can't blame them; users are hard to predict!)

I propose a third approach: Monitor user behavior in production, and use this data to automatically construct development testing. Here's how it works.

Beyond user behavior monitoring

IT operations teams have measured user behavior via real user monitoring (RUM) for at least a decade, but this process was kept separate from testing and development. RUM data was mainly used to create incidents and troubleshoot issues, following ITIL processes. Meanwhile, dev teams used their own tools and processes.

As DevOps breaks down the barriers between teams, it no longer makes sense for each team to use separate tools and processes. Much better to take user behavior data from production and feed it into the test environment. In principle, this can be done manually, but given the amount of work involved, this requires automated tooling to be effective.

By automatically piping real user behavior into an automated test environment, dev teams get a more consistent, repeatable, and scalable testing process. The potential benefits to application quality are significant. Once you know what users are doing (as opposed to making assumptions that often prove faulty), you can focus your testing efforts on the areas that matter most.
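As a minimal sketch of the idea, suppose the RUM tool can export each user session as an ordered list of pages visited (the session data and flow names here are hypothetical). Counting how often each end-to-end flow occurs tells you where to concentrate automated tests:

```python
from collections import Counter

# Hypothetical RUM export: one record per user session,
# each listing the pages visited in order.
sessions = [
    ["home", "search", "product", "checkout"],
    ["home", "search", "product"],
    ["home", "account"],
    ["home", "search", "product", "checkout"],
]

# Count how often each end-to-end flow occurs in production.
flow_counts = Counter(tuple(s) for s in sessions)

# Target automated testing at the most common real flows,
# rather than the flows an analyst guessed users would take.
top_flows = [flow for flow, _ in flow_counts.most_common(2)]
```

In this sample, the home-to-checkout flow surfaces as the top candidate for testing because it occurs most often in the recorded traffic, not because anyone predicted it would.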

Ideally, the tool should capture rich metadata about users: where they are coming from; what device, operating system, and browser they are using; their network conditions; and demographic data. Mining this metadata allows developers to build an automated test suite that accurately mimics users.
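The same mining approach applies to the metadata itself. A sketch, again with hypothetical session records: aggregate the device, OS, and network attributes into a test matrix weighted by how often each combination actually appears in production, so test configurations mirror real traffic rather than a guessed-at device list.

```python
from collections import Counter

# Hypothetical per-session metadata captured by a RUM tool.
sessions = [
    {"browser": "Chrome", "os": "Windows", "network": "wifi"},
    {"browser": "Chrome", "os": "Windows", "network": "wifi"},
    {"browser": "Safari", "os": "iOS", "network": "4g"},
    {"browser": "Chrome", "os": "Android", "network": "4g"},
]

# Count each real-world (browser, OS, network) combination.
combos = Counter(
    (s["browser"], s["os"], s["network"]) for s in sessions
)

# Express the matrix as traffic shares, so test effort can be
# allocated in proportion to what users actually run.
total = sum(combos.values())
test_matrix = {
    combo: count / total for combo, count in combos.items()
}
```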

These insights can also be transplanted to other contexts. For example, new products can leverage this data to get a head start on testing, rather than waiting for several iterations to gather this important feedback.

How to automate testing

When deciding what to automate, the key is to build an automation library that serves a greater purpose. In my view, continuous delivery is not the end goal. Instead, the focus should be on delivering the highest-quality user experience, with each and every release.

Performance should be at the forefront of your thinking about user experience. To validate that you are delivering sufficient performance, the tools need to record activity on the most affected user flows or business processes. This must be done in an automated way that facilitates easy reproduction of user behavior. In particular, the tools should have the ability to generate test scripts and a test plan based on the users' metadata.
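To make this concrete, here is one possible shape for an auto-generated test plan (the flow names and traffic shares are invented for illustration): allocate virtual users to each business process in proportion to the share of real traffic it receives, so the load profile reproduces observed behavior.

```python
# Hypothetical traffic shares mined from RUM data: the fraction
# of production sessions that exercised each business process.
traffic_share = {
    "search_and_buy": 0.6,
    "browse_only": 0.3,
    "account_update": 0.1,
}

def build_load_plan(shares, total_virtual_users):
    """Allocate virtual users to each flow in proportion to
    the share of real traffic that flow receives."""
    return {
        flow: round(share * total_virtual_users)
        for flow, share in shares.items()
    }

# A 100-user load test whose mix mirrors production traffic.
plan = build_load_plan(traffic_share, total_virtual_users=100)
```

A real tool would go further and emit runnable test scripts, but the core design choice is the same: the load model is derived from measured behavior, not assumed.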

Keep in mind that CI is an ongoing improvement process. When apps have new releases or there is more data from production, that data should be added to the test suite. The more scenarios you add to test automation, the more complete the picture.

I should note that there are cases where smaller teams (or maybe more stubborn teams) prefer to manually analyze the logs to understand what the users are doing. Some developers like that level of control. But as a company grows, that approach is not scalable. You need to let the data do the work for you.

Make it work

While there are many solutions for monitoring user behavior, few can turn that data into repeatable, automated tests. Here are a few tips:

  • Look for enterprise-grade solutions that can plug into a heterogeneous environment, since it is likely you are using more than one platform for app development and monitoring.
  • You may also be using a variety of technologies to support your application, so your testing platform needs to support a correspondingly broad range of them.
  • Lastly, look for a solution that incorporates all aspects of your production data into your automated test suite, including user behavior, demographics, and network conditions.

Without these capabilities, you aren't going to get a complete picture of performance. Tools are beginning to appear that meet these requirements, but it is early days. I'm excited to see how CI testing will evolve as these tools grow.