

Top performance engineering trends: 5 things your team needs to know

Matthew Heusser, Managing Consultant, Excelon Development

Performance engineering is broadening and deepening in both scale and scope.

Emerging techniques in performance engineering promise more responsive systems, in less time, with less risk and impact. But there are some key issues to be aware of, say five experts who gathered for a recent roundtable on the state of performance engineering.

The panel, sponsored by Micro Focus, included moderator Richard Bishop, lead quality engineer at Lloyds Banking Group; Paul McLean, a performance consultant at RPM Solutions; Wilson Mar, a performance architect at McKinsey; Ryan Foulk, president and founder of Foulk Consulting; and Scott Moore, senior performance engineering consultant at Scott Moore Consulting.

Here are the key trends and issues these top experts see changing the game—and what your team needs to know about them.

Massive scalability changes things

Auto-scaling sounds like a wonderful feature; a cluster can simply add servers when demand reaches a certain predefined level. That changes the nature of performance engineering work, said RPM Solutions' McLean.

The question itself also changes. "Can the servers handle 500 transactions per second?" becomes "How do the servers handle a doubling in workload?" he said.

There's a "spin-up" period for new servers, said Lloyds Banking Group's Bishop. It might be 15 minutes between the moment a trip wire warns that the cluster needs a new web server and the moment that server actually comes online. That delay can cause a performance lag, or even an overload, that customers notice.
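One way to probe that spin-up window is a stepped load profile that doubles demand at fixed intervals and watches how long response times degrade after each step. Here is a minimal sketch using Locust; the endpoint, user counts, and step length are all assumptions, not recommendations:

# A minimal Locust sketch of a stepped load profile. The endpoint,
# user counts, and step length are hypothetical.
from locust import HttpUser, LoadTestShape, constant, task

class SiteUser(HttpUser):
    wait_time = constant(1)

    @task
    def browse(self):
        self.client.get("/catalog")  # hypothetical endpoint

class DoublingShape(LoadTestShape):
    """Start at 100 users and double every 5 minutes, up to 800.

    The gap between each step and the recovery of response times
    approximates the cluster's spin-up lag.
    """
    step_seconds = 300
    start_users = 100
    max_users = 800

    def tick(self):
        run_time = self.get_run_time()
        users = self.start_users * 2 ** int(run_time // self.step_seconds)
        if users > self.max_users:
            return None  # end the test
        return users, users  # (target user count, spawn rate per second)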

For that matter, human experts need to set the boundaries for auto-scaling. At what amount of CPU, memory, disk, or bandwidth does the cluster need to add capacity? Cloud compute charges are generally by the hour, so if those boundaries are too low, the company will wind up renting capacity it doesn't need.

If the indicators are set too high, that leads to the lag and overload problems Bishop identified. McLean also suggested that companies monitor for scale-down. That is, after a spike in traffic, the number of servers should come back down. If it does not, the organization ends up perpetually renting the largest number of servers it has ever needed, defeating the purpose of auto-scaling.
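A lightweight way to act on McLean's advice is a scheduled check that compares current capacity against an off-peak baseline. Here is a sketch that assumes an AWS Auto Scaling group queried through boto3; the group name and baseline count are hypothetical:

# A sketch of a scale-down check, assuming AWS Auto Scaling (boto3).
# The group name and baseline instance count are hypothetical.
import boto3

GROUP = "web-asg"   # hypothetical Auto Scaling group name
BASELINE = 4        # expected instance count outside peak hours

asg = boto3.client("autoscaling")
response = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP])
current = len(response["AutoScalingGroups"][0]["Instances"])

if current > BASELINE:
    print(f"WARNING: {GROUP} is still running {current} instances "
          f"(baseline {BASELINE}); scale-down may not be happening.")

Run an hour or two after known peaks, a check like this catches clusters that scale up but never scale back down.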

Globalization will rebalance the equation

McKinsey's Mar pointed out a different set of issues: the rise of a global workforce, and of computers that can reach farther afield thanks to advances in communications technology, both of which will change performance engineering as a discipline.

Due to the global pandemic, many companies allowed their employees to work from home, or, really, from anywhere with Internet service and power. Enough workers took this as an opportunity that going back to the office is becoming problematic. As a result, many companies are moving to remote-first hiring, meaning that more people will access corporate computing resources from farther away.

Today, Mar sees performance testing as something that happens mostly inside the data center. But with new satellite and other types of communications services, it will be possible to simulate true end-to-end loads from anywhere, to move workloads out of the enterprise with fog computing, to allow the Internet of things to proliferate, and to see more streaming video all over the world.

As bandwidth increases, the Jevons paradox predicts that people will use more of it. As a result, programmers will create more complex applications (because downloading a big website that calls a lot of APIs is suddenly less of a big deal), and customers will choose to do things that require more bandwidth.

Mar said that performance testers need to be prepared for these changes. Foulk Consulting's Foulk suggested that humans need to consider, and create, better nonfunctional requirements to anticipate these needs.

All of this changes performance engineering from a reactive role to a predictive one. Lloyds Banking Group's Bishop said that while the tools for predictive analytics are starting to appear, too often companies still just throw the software over the wall and hope for the best.

All the panelists agreed that the pace of software delivery is increasing, and that performance work has the chance to sit inside a tight improvement-feedback loop. One way to get there is through the continuous integration pipeline.

Add performance to the CI/CD pipeline

The panelists agreed on the potential value of including performance testing in your continuous integration/continuous delivery pipeline. Learning how things are going immediately after a change is introduced makes debugging and fixing problems much easier. 
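In practice, that can start as small as a smoke-level performance gate that runs on every merge and fails the build when a latency budget is blown. A minimal sketch, assuming a staging endpoint and a made-up p95 budget:

# A minimal CI performance gate: fire a short burst of requests and
# fail the build if p95 latency exceeds the budget. The URL, budget,
# and sample size are assumptions for illustration.
import statistics
import sys
import time

import requests

URL = "https://staging.example.com/health"  # hypothetical endpoint
BUDGET_MS = 250                             # assumed p95 budget
SAMPLES = 50

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.get(URL, timeout=5)
    latencies_ms.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(latencies_ms, n=20)[18]  # 95th percentile
print(f"p95 latency: {p95:.1f} ms (budget: {BUDGET_MS} ms)")
sys.exit(0 if p95 <= BUDGET_MS else 1)  # nonzero exit fails the build

Because the script exits nonzero on a miss, any CI system will treat a blown budget as a failed build, surfacing the regression on the change that introduced it.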

The speed of adoption of technologies is increasing across the board, said Scott Moore Consulting's Moore. As an example, virtualization technology took about a decade to become mainstream, while container adoption took half that long.

Extrapolating from that, people today are experimenting with AI and machine learning in performance engineering, and with putting performance testing into the CI/CD pipeline. Expect those technologies to become standard sooner rather than later.

While containers may be late-mainstream for development, they have not yet taken off for testing, especially performance testing, Moore said. All the panelists agreed that there are challenges to getting performance testing into the pipeline: building environments, preparing the data, ramping up demand, and doing meaningful analysis inside a tight CI/CD loop.

Moore speculated that a test environment running in containers, or on Kubernetes, might be easier to create and run. The real challenge might be getting a test run and meaningful results in five or 10 minutes.

Learn what you need to, now

"I keep hearing 'CI/CD' from every client, but it's just because they want to go faster," Moore said. We're getting close, he added, to being able to spin up the environment needed faster, have the test and scenarios prepared, kick it all off, spit out the results, have an algorithm read the results, and explain what to do about it.  

"The leading companies are doing this now. If you're not studying how to do that, this is the time to be learning," Moore added.

But RPM Solutions' McLean said that might be harder to do than it sounds, especially for companies with limited resources. Acquiring a large amount of data and a large enough test system, getting all the data preparation set up, and having every test run, ramp up, and tear down, all within a few minutes, is a massive change.

This is especially true when compared to the multi-day setup and test runs that many companies are using now.

Still, getting the right tests to auto-run on a tight enough timeline may be the next challenge. That could be daunting, and people sometimes choose to "punt" with short tests, often because they feel unsafe admitting reality.

Create a safe environment

All the data in the world won't make a difference unless someone grabs it, points to it, and explains what the data means and why the organization should do something with it.

McKinsey's Mar pointed to a recent study by his firm suggesting that the largest driver of performance in organizations is not agile, DevOps, CI, or CD, but psychological safety. Groups where people feel safe pointing out problems or failures and offering solutions are the only environments where new ideas have the potential to see the light of day.

Mar suggested that one new approach is to make performance testing the work of the team itself, not a checklist item handled by some external group. That makes it performance engineering rather than just testing, and it lets the metrics be things the team wants to improve, not a sort of external report card.

Final lessons

There are two opposing forces in performance engineering. The systems are becoming more complex and thus require increasingly intricate tools to drive and analyze performance issues—yet the human element is the main difference between success and failure.

Another trend is the value of feedback throughout the process, not just for reporting the performance of one particular release, but also for learning what customers are doing, which informs what to build next. Finally, there is a gulf between the current state of practice and what might be possible.

It's clear that organizations will need to leverage performance engineering to create a better experience for the customer. 
