How to embed performance engineering into your pipeline

Anjeneya Dubey, Director of Performance Engineering, McGraw-Hill Education

As your organization adopts continuous integration (CI) and continuous delivery (CD) to speed up its software development process, are you also including performance evaluations of your applications in the pipeline? You should be.

You want to enable your development teams to move faster, release in smaller batches with confidence, and deliver at a rapid pace—without introducing performance issues.

McGraw-Hill Education has embarked on this journey. It wasn't as easy as it sounded, especially given the cultural inertia we faced around valuing performance engineering early in the software development lifecycle. To address this, we changed our Scrum processes, built tools that helped us shift the performance cycle left, and included performance in our CI/CD pipeline.

Here are the lessons my team learned from that process, and how you can benefit from that experience.

1. Make performance requirements part of your functional requirements

With user experience in mind, and working from the philosophy that poor performance is equivalent to a functional bug, we decided to treat performance requirements as part of our functional requirements. That meant evaluating every story for performance impact, and every story had to carry acceptance criteria with clear performance requirements.

For example, if you are building a new API, clearly state that the API must handle a load of x transactions per second, with a 95th percentile response time of 100 milliseconds. In this way, your teams have a clear definition of performance-ready features, which helps them start thinking about performance and scalability needs early on.
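
To make such criteria enforceable rather than aspirational, it helps to capture them in a machine-readable form that later pipeline stages can evaluate. Here's a minimal sketch in Python; the endpoint name and threshold values are hypothetical placeholders, not prescriptions.

```python
from dataclasses import dataclass

@dataclass
class PerfRequirement:
    endpoint: str               # hypothetical API under test
    min_throughput_tps: float   # sustained transactions per second
    p95_latency_ms: float       # 95th-percentile response-time ceiling

# Hypothetical acceptance criterion attached to the story for a new API
ORDERS_API = PerfRequirement(
    endpoint="/api/v1/orders",
    min_throughput_tps=500.0,
    p95_latency_ms=100.0,
)

def meets_requirement(req: PerfRequirement,
                      measured_tps: float,
                      measured_p95_ms: float) -> bool:
    """The story is performance-done only if the API sustains the target
    load while staying inside its latency budget."""
    return (measured_tps >= req.min_throughput_tps
            and measured_p95_ms <= req.p95_latency_ms)
```

A test stage can then check measured results against the same object the story's acceptance criteria reference, so the requirement and the gate never drift apart.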

2. Include performance in your definition of done

Most agile teams that follow the Scrum process don't include performance work in their sprints, citing project complexity and competing priorities. You can address this at the sprint level by including performance in the definition of done, creating a culture where epics don't close until they satisfy their performance criteria. The key is to make sure your teams reserve enough capacity in each sprint to create and execute the tests that validate those requirements.

3. Automate test analysis and pass/fail decision

If you want to include performance tests as part of your automated CI/CD pipeline, automate test analysis and pass/fail decision-making as well. Performance tests generate an enormous amount of data, from user metrics to software metrics to infrastructure utilization metrics, and traditionally a human skims through all of it, build by build, to check for degradation. Automating this means taking humans out of the analysis loop. That might sound scary, but it's not that difficult.

Your first step in automating is to know what data you're looking for in each component of your application and its underlying infrastructure. Then automate the collection of key performance metrics, keep a history of every metric in a central repository, derive insights by comparing results across builds, and decide whether the tests pass or fail based on both historical performance and your SLAs.
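
As a rough illustration, the pass/fail gate can be as simple as a function that checks a build's metric against both its SLA ceiling and the historical mean. This is a minimal sketch; the 10% regression tolerance and the shape of the history data are assumptions you would tune for your own pipeline.

```python
import statistics

def evaluate_build(metric: str, current: float, history: list[float],
                   sla_ceiling: float, tolerance: float = 0.10) -> bool:
    """Automated pass/fail for one metric (e.g., p95 latency in ms).

    Fail if the metric breaks its SLA ceiling, or if it regresses more
    than `tolerance` against the mean of previous builds' results.
    """
    if current > sla_ceiling:
        print(f"FAIL {metric}: {current} exceeds SLA ceiling {sla_ceiling}")
        return False
    if history:
        baseline = statistics.mean(history)
        if current > baseline * (1 + tolerance):
            print(f"FAIL {metric}: {current} regressed more than "
                  f"{tolerance:.0%} against baseline {baseline:.1f}")
            return False
    return True

# Example: p95 latency of 112 ms meets the 150 ms SLA, but fails the
# regression check against recent builds, so the pipeline stops.
print(evaluate_build("checkout_p95_ms", 112.0, [95.0, 98.0, 97.0], 150.0))
```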

4. Reduce time required to prepare and execute tests

In a typical performance testing cycle, testing teams execute a range of performance tests to ensure that they are not introducing performance or scalability defects. These tests are usually long, in both preparation and execution time. But with CI/CD you rely on short, crisp tests that quickly point out defects and speed up your overall feedback loop.

Confidence in such short tests comes from a solid benchmarking process, which generates a baseline you can use for anomaly detection. Prefer scaled-down load and spike tests; avoid long-running test types such as endurance and volume tests.
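
One simple way to turn a benchmark baseline into an anomaly detector is a control-chart-style threshold: flag any run that lands more than a few standard deviations above the baseline mean. A minimal sketch, assuming you keep a list of results from prior benchmark runs:

```python
import statistics

def is_anomalous(current: float, baseline_runs: list[float],
                 k: float = 3.0) -> bool:
    """Flag a result that lands more than k standard deviations above
    the baseline mean (requires at least two baseline runs)."""
    mean = statistics.mean(baseline_runs)
    stdev = statistics.stdev(baseline_runs)
    return current > mean + k * stdev

# Example: p95 latencies (ms) from benchmark runs, then a short CI test
baseline = [101.0, 99.5, 102.3, 100.8, 98.9]
print(is_anomalous(118.0, baseline))  # True: well above mean + 3 sigma
```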

5. Keep your test data and environment consistent

The quality of your tests determines the quality of the product. To maintain that quality, you must be able to reproduce the same test environment over and over, keeping everything constant except the one change you want to test.

Thankfully, with teams moving toward infrastructure as code in the cloud, you can use tools such as Puppet, Chef, and Terraform to build your test environment with the same configuration, faster and more efficiently. This goes a long way toward keeping the environment consistent across the software development lifecycle and between tests.
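
For illustration, a pipeline stage might rebuild the environment by invoking Terraform with the same versioned configuration and variable file on every run. This is just a sketch, assuming the Terraform CLI is on the path and your config lives in workdir:

```python
import subprocess

def build_test_env(workdir: str, var_file: str) -> None:
    """Provision the test environment from versioned Terraform config.
    The same .tf files and .tfvars inputs yield the same environment."""
    subprocess.run(["terraform", "init", "-input=false"],
                   cwd=workdir, check=True)
    subprocess.run(["terraform", "apply", "-auto-approve", "-input=false",
                    f"-var-file={var_file}"],
                   cwd=workdir, check=True)

def destroy_test_env(workdir: str, var_file: str) -> None:
    """Tear the environment down so the next run starts clean."""
    subprocess.run(["terraform", "destroy", "-auto-approve",
                    f"-var-file={var_file}"],
                   cwd=workdir, check=True)
```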

You can easily expand and contract these environments based on need. To keep our test data consistent, we build self-contained tests, creating and destroying data as part of the test wherever we can. The rest we create as part of the environment build-out, where we spin up a parallel database with pre-seeded test data.
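
Here's what a self-contained test can look like with pytest: a fixture creates the data the test needs and destroys it afterward. The helper functions below are hypothetical stand-ins for your application's real data-access layer.

```python
import uuid
import pytest

# Hypothetical stand-ins for your application's data-access helpers.
def create_account(name: str) -> dict:
    # In a real suite this would call your API or database.
    return {"id": str(uuid.uuid4()), "name": name}

def delete_account(account_id: str) -> None:
    # In a real suite this would remove whatever the test created.
    pass

@pytest.fixture
def perf_account():
    """Create the data this test needs, hand it over, then destroy it,
    so every run starts from the same state and leaves nothing behind."""
    account = create_account(name=f"perf-{uuid.uuid4()}")
    yield account
    delete_account(account["id"])

def test_checkout_under_load(perf_account):
    # Drive the scenario with perf_account and assert on response times.
    assert perf_account["name"].startswith("perf-")
```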

6. Scale the load test tool and test environment for a range of tests

Quickly generating load from thousands of virtual users is a big challenge, but you can address it by using commercial cloud testing tools. You can also use JMeter, an open-source alternative that you can containerize with Docker, to run distributed tests in the cloud with one master and any number of slaves generating a huge load.

You can make this Dockerfile part of the infrastructure as code in your CI/CD pipeline. Your JMeter farm gets provisioned as part of the test environment build-out, where your test executes. This saves on tool licensing costs while speeding up your test runs.
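
As one illustration, a provisioning script could use the Docker SDK for Python to stand up the farm: several worker containers plus one controller that runs the test plan against them. The image name, network, worker count, and test-plan path below are all placeholders; this assumes an image whose entrypoint is the jmeter binary.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()
IMAGE = "your-registry/jmeter:latest"  # placeholder; entrypoint must be jmeter
NETWORK = "jmeter-net"
NUM_WORKERS = 5

client.networks.create(NETWORK, driver="bridge")

# Start the worker containers (JMeter server mode) that generate the load.
workers = [
    client.containers.run(
        IMAGE,
        command=["-s", "-Jserver.rmi.ssl.disable=true"],
        name=f"jmeter-worker-{i}",
        network=NETWORK,
        detach=True,
    )
    for i in range(NUM_WORKERS)
]

# Start the controller, pointing it at the workers by container name.
controller = client.containers.run(
    IMAGE,
    command=[
        "-n", "-t", "/tests/plan.jmx",            # non-GUI run of the plan
        "-R", ",".join(w.name for w in workers),  # remote worker list
        "-Jserver.rmi.ssl.disable=true",
    ],
    name="jmeter-controller",
    network=NETWORK,
    detach=True,
    volumes={"/ci/tests": {"bind": "/tests", "mode": "ro"}},  # placeholder path
)
```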

Want to know more? During my STAREAST 2018 conference session, "Embedding Performance Engineering into the CI/CD Pipeline," I'll offer more tips on how to do each step listed above. The conference starts on April 29 in Orlando, FL, USA. TechBeacon readers can use the promo code SECM to save up to $200 on registration.
