

How to define the right workload model for your performance tests

Lior Avni, Customer Facing Engineer, Micro Focus

Defining a workload model for an application under test (AUT) might just be the most challenging task a performance engineer can face. To be successful, you've got to research both the application and the business model it is meant to serve.

Whether you're a new or aspiring performance engineer or just want to brush up on best practices, these tips, based on my experience, will help you create a viable workload model and understand the different stages of that process.

Getting started: Key performance challenges

Before running a performance test, you need to model your production workload accurately, set up the test environment and equipment, establish a benchmark baseline for your tests, and so on.

An inaccurate workload model can lead to misguided optimization efforts in your production system, delayed system deployment, outright failures, and an inability to meet the system's service-level agreements (SLAs). Having the right workload model is crucial for the reliable deployment of any system intended to support a large number of users in a production environment.

To achieve a viable model you need to:

  • Proactively monitor user and system activity and performance in your production environment.
  • Identify symptoms of failure, including longer-than-acceptable response times (as defined by a reasonable SLA for your system), application errors and unhandled exceptions, and system crashes (see the sketch after this list).
  • Create an accurate simulation of all use cases for your system.
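
To make that symptom check concrete, here is a minimal sketch in Python. It assumes you've exported per-request results to a CSV file; the file name, column names, and thresholds are all hypothetical placeholders for whatever your system's SLA actually dictates.

    import csv

    # Hypothetical thresholds -- substitute the SLA your system dictates.
    SLA_RESPONSE_TIME_S = 2.0   # longest acceptable response time
    SLA_ERROR_RATE = 0.01       # acceptable fraction of failed requests

    def check_sla(results_csv):
        """Scan exported results for the failure symptoms listed above."""
        slow = errors = total = 0
        with open(results_csv, newline="") as f:
            # Assumed columns: response_time_s, status
            for row in csv.DictReader(f):
                total += 1
                if float(row["response_time_s"]) > SLA_RESPONSE_TIME_S:
                    slow += 1
                if int(row["status"]) >= 500:
                    errors += 1
        print(f"{slow}/{total} requests exceeded the {SLA_RESPONSE_TIME_S}s SLA")
        if total and errors / total > SLA_ERROR_RATE:
            print(f"error rate {errors / total:.1%} is over the {SLA_ERROR_RATE:.0%} budget")

    check_sla("results.csv")  # hypothetical export from your test tool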

Once you've done that, you're ready to start the workload modeling process, which consists of three steps: workload characterization, validation, and calibration. Here's what you need to know about each.

Workload characterization

This is about defining your business process. You need to know what your AUT does and how it does it, which buttons to push, what outcome to expect, and what error messages should occur if you press the wrong button.

Define key transactions

Break the business process down into its key transactions, define the volume and concurrency of each, and map those transactions to the proper business processes (plural, since an AUT should support many different processes).
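
As an illustration, here is one lightweight way to capture that mapping in Python. The business processes, traffic shares, and per-session transaction counts are hypothetical; replace them with the numbers from your own research.

    # A minimal workload-characterization table. All figures are hypothetical.
    workload_model = {
        "search_and_buy": {                  # business process
            "share_of_traffic": 0.60,        # fraction of all user sessions
            "transactions": [
                {"name": "home_page",   "per_session": 1},
                {"name": "search",      "per_session": 3},
                {"name": "add_to_cart", "per_session": 1},
                {"name": "checkout",    "per_session": 1},
            ],
        },
        "browse_only": {
            "share_of_traffic": 0.40,
            "transactions": [
                {"name": "home_page", "per_session": 1},
                {"name": "search",    "per_session": 5},
            ],
        },
    }

    # Derive per-process user counts from a target peak concurrency.
    TARGET_USERS = 1000  # hypothetical
    for process, spec in workload_model.items():
        users = int(TARGET_USERS * spec["share_of_traffic"])
        print(f"{process}: {users} concurrent users")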

Define the distribution

Set an approximate distribution per the definitions supplied by the system architect and product manager, and define the proper ramp-up of users accordingly. If you are testing a multi-tier application (client/web server/app server/database), take that into account when creating your workload model, since each layer may support a different number of users and require a different set of key performance indicators to monitor (request type, concurrency, rate, CPU, memory, IOPS, and so on).
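
A simple linear ramp-up is often a good starting point. Here is a sketch that turns a target concurrency into a batch schedule; every figure is a hypothetical placeholder for your own numbers.

    # Ramp up users in even batches until peak concurrency is reached.
    TARGET_USERS = 1000      # peak concurrent users from the workload model
    RAMP_UP_MINUTES = 20     # time allowed to reach the peak
    BATCH_INTERVAL_S = 30    # how often a new batch of users starts

    batches = (RAMP_UP_MINUTES * 60) // BATCH_INTERVAL_S  # 40 batches
    users_per_batch = TARGET_USERS / batches              # 25 users each
    print(f"start {users_per_batch:.0f} users every {BATCH_INTERVAL_S}s "
          f"for {batches} batches to reach {TARGET_USERS} users")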

Remember these simple guidelines for your time frame:

  • Monitor always, but pay special attention during high load times.
  • Use a large variety of monitors, both from the application and third parties, while monitoring each part of the system.

Design your script

First things first: Understand your AUT. Sit down with the product manager, learn how the application is used, and start researching the web server log files to see the top hits and actions based on browsing statistics.
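
A short script can surface those top hits quickly. The sketch below assumes the common/combined access-log format and a hypothetical file name; adjust the pattern to match your server's format.

    import re
    from collections import Counter

    # Pull the request path out of each access-log line.
    REQUEST_RE = re.compile(r'"(?:GET|POST|PUT|DELETE) (\S+)')

    def top_actions(log_path, n=10):
        hits = Counter()
        with open(log_path) as f:
            for line in f:
                match = REQUEST_RE.search(line)
                if match:
                    hits[match.group(1)] += 1
        return hits.most_common(n)

    for url, count in top_actions("access.log"):  # hypothetical log file
        print(f"{count:8d}  {url}")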

Once you understand every aspect of the AUT, it is time to design the script (a sketch tying these steps together follows the list):

  • Define the number of actions per script. I recommend dividing the script into several actions for better post-script analysis and monitoring of different stages of the run.
  • Define data-driven parameters based on different data sources and the application log files you reviewed earlier in the research phase.
  • Define in advance the transactions you’d like to measure as part of your post-script analysis. This step, which will form the basis of the pass/fail criteria for your test, requires a deep understanding of the application and all of its usage possibilities.
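
Here is a plain-Python sketch of those three pieces working together: separate actions, data-driven parameters read from a CSV, and named transactions timed for later analysis. It is deliberately not tied to any particular load-testing tool, and the URL, file, and column names are hypothetical.

    import csv
    import time
    import requests

    BASE_URL = "https://aut.example.com"   # hypothetical AUT address
    timings = {}                           # transaction name -> durations

    def timed(name, func, *args, **kwargs):
        """Run one measured transaction, defined in advance."""
        start = time.monotonic()
        result = func(*args, **kwargs)
        timings.setdefault(name, []).append(time.monotonic() - start)
        return result

    def login_action(user):
        """Action 1: data-driven login."""
        timed("login", requests.post, f"{BASE_URL}/login",
              data={"user": user["name"], "password": user["password"]})

    def search_action(user):
        """Action 2: search, using terms found during the log research."""
        timed("search", requests.get, f"{BASE_URL}/search",
              params={"q": user["search_term"]})

    # Data-driven parameters. Assumed columns: name, password, search_term.
    with open("users.csv", newline="") as f:
        for user in csv.DictReader(f):
            login_action(user)
            search_action(user)

    for name, samples in timings.items():
        avg = sum(samples) / len(samples)
        print(f"{name}: avg {avg:.3f}s over {len(samples)} runs")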

Understand the business process

A business process is a set of actions (in this case, transactions) that, when put together, compose a specific use of the AUT. When performance engineers create a load test, they must first define, in writing, the recorded business process that will form the foundation of the load test definition and analysis.

Once you've done that, you can start sizing your test. First, decide the following (a quick sanity check follows the list):

  • What is the peak number of concurrent connections your AUT must support?
  • What is the desired peak request rate from the AUT?
  • What is the peak number of concurrent sessions your AUT must support?
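
These three numbers are tied together by Little's Law: the number of concurrent sessions equals the session arrival rate multiplied by the average session duration. Here is a quick sanity check with hypothetical figures:

    # Little's Law: concurrency = arrival rate * average residence time.
    arrivals_per_second = 5.0        # new sessions per second at peak
    avg_session_duration_s = 300.0   # average session length
    requests_per_session = 40        # requests a single session issues

    concurrent_sessions = arrivals_per_second * avg_session_duration_s   # 1500
    peak_request_rate = arrivals_per_second * requests_per_session       # 200/s

    print(f"expect ~{concurrent_sessions:.0f} concurrent sessions")
    print(f"expect ~{peak_request_rate:.0f} requests per second at peak")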

Now you're ready to start creating your script and the test that will execute that script.

This is your last step before validation. Create your test scenario by adding the needed scripts, and defining the ramp-up, duration, and teardown rate of your users. Then start your engines.

Workload validation

The performance engineer’s job does not end with scripting and planning, but continues into the test validation phase.

The levels of validation you need include:

  • End-user activity: Are you seeing the needed distribution of requests? Is the system experiencing the needed volume of traffic? You make this decision based on the number of transactions per second and hits per second.
  • System resource utilization: Are you seeing the same resource utilization for CPU, memory, number of processes, and so on for each test iteration?
  • Bottlenecks: Is there a bottleneck somewhere? What's happening with the database server, application server, web server, and so on?

Validate all of the above using your online and offline monitors, both in the controller (during runtime) and the analysis (offline analysis) phases.
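
As one example of the end-user activity check, the sketch below compares the observed transaction mix from a run against the mix your workload model calls for. The transaction names, counts, and tolerance are hypothetical.

    # Compare observed transaction mix against the workload model's mix.
    expected_mix = {"login": 0.10, "search": 0.60, "checkout": 0.30}
    observed_counts = {"login": 950, "search": 6400, "checkout": 2650}

    total = sum(observed_counts.values())
    TOLERANCE = 0.05  # acceptable absolute drift per transaction

    for name, expected in expected_mix.items():
        observed = observed_counts.get(name, 0) / total
        verdict = "OK" if abs(observed - expected) <= TOLERANCE else "DRIFT"
        print(f"{name:10s} expected {expected:.0%}  observed {observed:.1%}  {verdict}")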

Workload calibration

Perform this last step after you've made a few test runs. You'll need it if you can't validate your workload at multiple layers, or if the test's behavior differs from what you see in the production environment.

This is where the fun begins. Analyze your log files, identify missing requests or transactions, adjust the volume of your users and the distribution, and try again.

Find the baseline and keep it as your benchmark. Always compare with your next runs, and keep track of the areas that need tweaking.
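
A small comparison script keeps that tracking honest. This sketch flags any metric that drifts too far from the baseline benchmark; the metrics and numbers are hypothetical placeholders for your own run results.

    # Flag metrics that drift more than 10% from the baseline benchmark.
    baseline = {"avg_response_s": 1.2, "requests_per_s": 180.0, "error_rate": 0.004}
    current  = {"avg_response_s": 1.5, "requests_per_s": 160.0, "error_rate": 0.004}

    TOLERANCE = 0.10

    for metric, base in baseline.items():
        drift = (current[metric] - base) / base
        verdict = "tweak" if abs(drift) > TOLERANCE else "ok"
        print(f"{metric:16s} baseline={base:<7} current={current[metric]:<7} "
              f"drift={drift:+.0%}  {verdict}")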

Now go forth and model

Performance engineers live to break systems, re-animate them, and break them again and again until they know those systems' thresholds.


But before you can break a system you need to learn its anatomy. Understand it, consult with your mentor if need be (don't even think of starting this without proper mentorship), and learn the technology if you don't know it yet. Then, when all of the details are clear, you'll be ready to start the process.


Are you ready to get started? I look forward to your questions and comments. 

