Software engineering has changed radically in the era of DevOps and agile, and so have QA testing practices. Here's how one QA expert tackles the challenges of testing in a world of rapid deployment and continuous software delivery pipelines.

Software testing for DevOps: How one QA team sets the pace

The way organizations develop software has changed, and so have the testing challenges. Software engineers now need to deliver in short, rapid cycles; support for mobile devices is exploding; and new automation tools have proliferated. Software testing for DevOps must keep up with business demands for fully automated software delivery pipelines. Today, when a developer checks in code, it can get to production within minutes. Does this mean there's no time to test the code? Not by a long shot.

As quality assurance (QA) teams adopt agile, DevOps, TestOps, and lean practices, they face new software testing challenges in keeping pace. Fortunately, as development methodologies have evolved, so have testing practices. Here are some of the challenges I encounter in my daily work as a QA team lead and how I overcome them.

Test what's available

In the old world order of software development, where cycles had distinct phases, you had a set time frame to get things done. First, you had development, then stabilization (where QA had time to ensure that product quality was good enough for production), then automation and documentation. In the new world of agile and DevOps, everything happens together within the framework of a single sprint before a user story can be classified as truly "done."

This means that software testing for DevOps must begin as early as possible in the cycle. QA doesn't have the luxury to wait until a feature is complete: you must test whatever code is available to you at any given time. That means your user stories must be well-encapsulated and self-sufficient, so no single story depends on another for testing. You develop features in phases, first working on basic functionality, then adding intermediate capabilities, making tweaks until you reach completion. In this way, QA can provide feedback at each phase, and software engineers can address it before moving on to the next phase. A classic example is the user interface (UI). You usually develop the UI after the API, but QA should test and approve the API for a feature before development on the UI begins.

Test what you can't see

Just because you don't yet have a UI doesn't mean you can't test the API. Even if you already have a UI, it might not be the best way to test the underlying API it uses. First, if you detect a defect, you need to establish that it's in the API and not in the UI. Second, the UI might not provide complete coverage of the API anyway. In many cases, an API can be the product itself, and a UI doesn't even enter the picture. This kind of scenario will rapidly grow in the coming years as the Internet of Things (IoT) takes off, and all that some products will do is emit a beacon to be detected by another product. APIs can and should be tested independently of any UI that may or may not run on top of them.

There are plenty of open source and commercial tools available to help you test and validate APIs before their UI is developed. Some are even convenient plugins that you can run directly from your browser. In some cases, APIs are only intended for internal use by the company's products, but that doesn't exempt you from testing them thoroughly. APIs are critical to the infrastructure of your products, and any defects will be propagated straight out to your customers. Moreover, while an API may start as an internal project, you may eventually want to expose it to partners or customers, or even turn it into a product in its own right.
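To make the idea concrete, here is a minimal sketch of testing an API contract with no UI involved. The endpoint shape, field names, and payloads are hypothetical, invented purely for illustration; a real test suite would run checks like this against your actual API responses:

```python
# Hypothetical contract check for a "/users" API response item.
# REQUIRED_FIELDS is an assumed schema, not from any real product.

REQUIRED_FIELDS = {"id": int, "email": str, "active": bool}

def validate_user_payload(payload):
    """Return a list of contract violations found in one response item."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(payload[field]).__name__}"
            )
    return problems

# A well-formed payload passes; the defects in the second payload might
# never surface through a UI that happens to coerce or ignore them.
good = {"id": 7, "email": "user@example.com", "active": True}
bad = {"id": "7", "email": "user@example.com"}

print(validate_user_payload(good))  # []
print(validate_user_payload(bad))
```

Because the check runs directly against the API's output, a failure here immediately tells you the defect is in the API itself, not in any UI layered on top of it.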

Conquer the mobile testing nightmare

The explosion of mobile devices has created a testing nightmare. According to the 2015 OpenSignal report, massive fragmentation has resulted in over 24,000 distinct Android devices that all run on different versions of the Android OS, plus the several variations of iOS devices currently in use. You can't test for all of them—but you don't need to. Whittle it down by determining the most common devices your customers use. Even then, however, the list can seem unmanageable.

Just as Carl Linnaeus pioneered classification of the immense diversity of organisms on the planet, you need to classify the relevant devices that you want to support into groups, and then test a representative member of each. By "representative," I mean one that covers the bases for that group, meaning if you're confident your code works well on that device, it will work on all the other devices in the group. How you classify the devices may change between products. Characteristics on which you focus might be screen size and resolution, OS version, or Bluetooth support. Choose based on the device features and characteristics that are critical to your application.
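The classification approach above can be sketched in a few lines. The device names and characteristics below are made up for illustration; in practice you would group on whichever traits are critical to your application:

```python
# Sketch: group devices by the characteristics that matter (here,
# screen-size class and OS major version), then test one representative
# per group. Device names and attributes are hypothetical.
from collections import defaultdict

devices = [
    {"name": "PhoneA",  "screen": "small", "os_major": 12},
    {"name": "PhoneB",  "screen": "small", "os_major": 12},
    {"name": "TabletC", "screen": "large", "os_major": 12},
    {"name": "PhoneD",  "screen": "small", "os_major": 13},
]

def pick_representatives(devices, keys=("screen", "os_major")):
    """Map each device group to a single representative to test."""
    groups = defaultdict(list)
    for device in devices:
        groups[tuple(device[k] for k in keys)].append(device)
    # If the code works on the representative, we assume it works on
    # every other member of that group.
    return {group: members[0]["name"] for group, members in groups.items()}

reps = pick_representatives(devices)
print(reps)  # three groups to test instead of four devices
```

Here four devices collapse into three test targets; with thousands of real-world devices, the savings are what make mobile testing tractable.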

Even after classifying device groups, it's hard to keep up with the rate at which new devices are introduced to the market. To overcome that software testing challenge, consider using a cloud-based mobile testing service. These services remove much of the burden of keeping up with the rate of change in the mobile industry, giving you a quick, relatively easy, and affordable solution for your mobile testing.


Use production data to improve quality

With production data and social media, the gold is there for the mining; you just need to know how to dig through the dirt to extract it. Analyze your production data for insights about how customers use your application. It's easy to find out which browsers and mobile devices customers use, but you want to add auditing capabilities to your code. With the right big data analysis tools, you can learn intricate details, such as which flows customers use most, which they abandon midway through, on which screens they spend the most time, what controls they use, and so on. This kind of data is helpful for planning future features and improvements. With old, on-premises software, getting this data ranged from difficult to impossible. In today's world of DevOps, software as a service (SaaS), and the cloud, data is glittering in front of you, ready to be analyzed.
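As a toy illustration of this kind of mining, the sketch below computes completion and abandonment rates per user flow from a handful of audit events. The event format, session IDs, and flow names are all invented; real data would come from whatever auditing your application emits:

```python
# Hypothetical audit events: (session_id, flow, step).
# A "started" without a matching "completed" counts as an abandonment.
from collections import Counter

events = [
    ("sess1", "checkout", "started"), ("sess1", "checkout", "completed"),
    ("sess2", "checkout", "started"),
    ("sess3", "signup",   "started"), ("sess3", "signup",   "completed"),
    ("sess4", "checkout", "started"),
]

started = Counter(flow for _, flow, step in events if step == "started")
completed = Counter(flow for _, flow, step in events if step == "completed")

for flow in started:
    rate = completed[flow] / started[flow]
    abandoned = started[flow] - completed[flow]
    print(f"{flow}: {rate:.0%} completed, {abandoned} abandoned")
```

A report like this quickly surfaces which flows lose customers midway through, which is exactly the kind of insight that feeds back into feature planning and test prioritization.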

Mind the tweets and posts

Don't forget about social media feedback. Your company no doubt has Facebook and Twitter accounts where it interacts directly with customers. You may also have dedicated support portals, where customers can open tickets, get help from support personnel, and offer direct feedback. These present a wealth of information you can use to improve the product. If you're not doing so, you're already behind: in the World Quality Report 2014-15, 71 percent of respondents said they use and value social media as a source of feedback from their users.

QA is about both finding bugs and improving software quality. Agile and DevOps have created new testing challenges for QA professionals who must meet the requirements for rapid delivery cycles and automated delivery pipelines. Fortunately, the tools and methodologies have evolved along with these disciplines. Agile puts QA and development in the same boat, with both paddling together toward a common goal: "done." Using cloud services to centralize application data for analysis and social media monitoring creates rapid feedback loops for software development. Agile and DevOps may have revved up the pace of development, but with these best practices, QA will not only keep the pace but also help to fuel it.