
10 totally avoidable performance testing mistakes

Benny Friedman, Director, Flipkart

Applications are evolving at a blistering pace, and today's users expect equally fast performance. Nearly half (46%) of users say they will abandon an app if it doesn't load in three seconds or less. If your apps have the right look and feel but don't live up to expectations as they scale, users will simply dump them and switch to a competitor.

The only way to avoid this fate is through performance engineering, which introduces performance testing into the development process at the earliest opportunity. In agile organizations, software developers no longer perform a full analysis of an application before coding starts, but that doesn't mean you can leave performance considerations and metrics until the end of the process. You can't simply bolt on a performance layer at the end if the application doesn't perform well enough.

The 10 most common performance engineering mistakes

Performance testing is an important component of performance engineering. Unfortunately, not everyone executes performance testing processes correctly. Here are 10 of the most common mistakes organizations make—and how you can avoid them.

1. Failing to test for performance

You're delivering software as a service (SaaS). You've chosen the best new platforms and technologies, and you have a world-class development team. Why would you bother thinking about performance engineering? Surely your application will perform. You don't need to design or test for performance. Right?

Yes, your first version will probably perform well, and yes, it'll work best if you're connected to your corporate LAN. But what if...?

You can do some testing on your production system, and if you have an infrastructure like Facebook or Google, you can expose your changes to a subset of users and validate their user experience. You can always roll back a change, or roll it out to the rest of the user population if you're happy with it. But if you don't have sophisticated DevOps procedures and tools, your users may experience application slowness, or even downtime, on the day you launch a new release. Performance testing in pre-production is a must.

2. Having no methodology

If you write down a list of groceries to buy, you'll finish your shopping a lot faster. Why would you be any less prepared in performance engineering? Define the list of performance-related activities you want to accomplish before you launch the first release to your users, and pay attention to the desired outcome. Define owners for each activity, and make sure any problems you find get fixed promptly.

3. Neglecting to define KPIs

Key performance indicators (KPIs) in performance engineering define thresholds for metrics you don't want to cross. If the requirements state that a certain page must render within two seconds and it takes three, users will notice and think the application is slow.

It's all about managing expectations. Some pages can be slow by design. If users are searching for a flight, for example, they expect the process to take 10 to 20 seconds. But the expectation for sending a text message might be five seconds, and for opening a news article less than two seconds. That's why it's important to define KPIs before you start testing: if you wait until data is already available, that data may bias your view of what counts as a well-performing application. A user experience survey should also be part of your process for defining KPIs.
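
To make this concrete, KPIs can be captured as explicit thresholds that your test code checks against. Here is a minimal sketch in Python; the endpoints, thresholds, and staging URL are hypothetical:

```python
import time

import requests  # assumes the `requests` package is installed

# Hypothetical KPI thresholds (seconds) agreed on before testing starts.
KPIS = {
    "/search/flights": 20.0,   # users tolerate a longer flight search
    "/messages/send": 5.0,
    "/news/article": 2.0,
}

BASE_URL = "https://staging.example.com"  # placeholder test environment

def check_kpis():
    """Return a list of (path, measured, threshold) tuples that violate a KPI."""
    violations = []
    for path, threshold in KPIS.items():
        start = time.perf_counter()
        requests.get(BASE_URL + path, timeout=threshold + 5)
        elapsed = time.perf_counter() - start
        if elapsed > threshold:
            violations.append((path, elapsed, threshold))
    return violations

if __name__ == "__main__":
    for path, elapsed, threshold in check_kpis():
        print(f"KPI violation: {path} took {elapsed:.1f}s (limit {threshold:.1f}s)")
```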

4. Failing to choose a tool

You don't have to pay for tools: some organizations use open-source options such as JMeter. Others use commercial automation tools that provide high-scale load testing for SaaS or on-premises applications, along with insights into client-side performance and server-side troubleshooting. Whichever way you go, choose a tool. It's possible to test performance entirely by hand, but that approach is far more expensive and offers little insight into how the application will perform in production or under specific network conditions and workloads.
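
To illustrate the gap between manual checks and automated load, here is a minimal Python sketch that runs a handful of concurrent virtual users against an endpoint and reports response-time percentiles. It is not a substitute for JMeter or a commercial load tester, and the URL and user counts are placeholders:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # assumes the `requests` package is installed

TARGET = "https://staging.example.com/"   # placeholder endpoint
VIRTUAL_USERS = 50                        # hypothetical concurrency level
REQUESTS_PER_USER = 20

def virtual_user(_):
    """One virtual user issuing a series of requests and timing each one."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(TARGET, timeout=30)
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        all_timings = [t for user in pool.map(virtual_user, range(VIRTUAL_USERS))
                       for t in user]
    print(f"median: {statistics.median(all_timings):.2f}s  "
          f"95th percentile: {statistics.quantiles(all_timings, n=20)[-1]:.2f}s")
```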

A load-testing tool is not the only tool you need. If your application has legacy components, you may want to virtualize them during testing; your IT department won't be too happy with you if you stress the SAP system or the mainframe.

Finally, if your users are mobile, you may need to test on both real devices and virtual ones. In most cases, load testing for mobile apps combines the two: testers load the server with virtualized mobile users while checking the user experience on real devices. Tools that run many virtual users can simulate mobile users and emulate their network traffic, and network emulation tools can reproduce the conditions mobile users actually face. Together they give a better simulation of real conditions, because mobile users on different network links can trigger different server behaviors due to high latency, packet loss, and low bandwidth.

5. Testing for performance at the end of the development cycle

Some people schedule performance testing at the end of the life cycle, assuming they can't test before the complete application is available. That is so wrong. If your continuous integration (CI) system already has some automated tests in it, you can start automated performance testing. Running performance tests should be part of your functional testing cycle.

When testing each build, you should also test for performance, and reuse the functional tests for performance testing if possible. You might want to have a few test configurations, just as with functional user interface (UI) tests. Have a short one to run for every build, a more thorough one to run nightly, and a longer, more comprehensive one to run at the end of each sprint. If your application relies on services that are expensive to use during testing, or if they're not available at the time you need to test your application, use a service emulation tool to simulate them. This lets you test the performance of your core software as if it were consuming the actual services.
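
One way to keep a short performance check in every build is to attach a timing assertion to an existing functional test. Below is a minimal pytest-style sketch, with a hypothetical endpoint and response-time budget:

```python
import time

import pytest
import requests  # assumes pytest and requests are available in the CI image

BASE_URL = "https://ci-env.example.com"   # placeholder CI test environment
RESPONSE_BUDGET_S = 2.0                   # hypothetical per-build budget

@pytest.mark.performance   # custom marker, registered in the project's pytest config
def test_login_page_within_budget():
    start = time.perf_counter()
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    elapsed = time.perf_counter() - start

    # The functional check and the performance check live in the same test.
    assert response.status_code == 200
    assert elapsed <= RESPONSE_BUDGET_S, (
        f"/login took {elapsed:.2f}s, budget is {RESPONSE_BUDGET_S}s"
    )
```

A marker like this lets CI run the quick performance subset on every build and reserve the nightly and end-of-sprint configurations for the longer runs.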

6. Testing only over the LAN

If people will use your application only over the corporate LAN, testing only over the LAN is no problem. But if people are running your app on a 3G network, your testing will have been for naught. Let's say your 3G users experience 30 milliseconds of latency. What will be the response-time delay for a given transaction on a 3G network compared with the LAN?

If you answered 30 milliseconds, you've never conducted tests like this. A mere 30 milliseconds of latency at medium load on a web server can result in a jaw-dropping 10-20 second delay in response time. This is the result of a vicious cycle where the server opens even more ports to serve more users, which eventually chokes the server. Network emulation tools let you define the network condition profiles that you attach to different virtual user groups representing your user base distribution.
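
A back-of-the-envelope model helps explain the amplification: each in-flight request is held open longer when latency rises, so at the same arrival rate the server needs more concurrent workers, and once the pool is exhausted, requests queue and delays compound. The numbers in this sketch are illustrative, not measurements:

```python
# Rough illustration of latency amplification using Little's Law:
# concurrent requests ≈ arrival rate × time each request is held open.
# All numbers here are hypothetical.

ARRIVAL_RATE = 500        # requests per second at medium load
SERVICE_TIME = 0.05       # seconds of real work per request
WORKER_POOL = 40          # server workers/ports available

for extra_latency in (0.0, 0.030):           # LAN vs. +30 ms of network latency
    held_open = SERVICE_TIME + extra_latency
    concurrent = ARRIVAL_RATE * held_open     # Little's Law: L = λ × W
    print(f"latency +{extra_latency * 1000:.0f} ms -> "
          f"~{concurrent:.0f} concurrent requests (pool size {WORKER_POOL})")

# Once concurrency reaches the pool limit, new requests wait in a queue, and
# the queue itself adds delay, which is how a few milliseconds of extra
# latency can turn into multi-second response times.
```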

7. Looking only for server crashes

Everyone wants to avoid system downtime, so you need to stress your system to see how far it can scale, adding more hardware as necessary. But if you're after the best user experience, there are things to check beyond possible server crashes. The client-side user experience may differ from what you measure on the server side, or even in the network layer from the client side. JavaScript code that runs on the client may produce a very different user experience than what traditional testing tools report. That's why you need to mix high-scale virtual users, to load the server, with a few real users, to check the client-side experience.

Another thing to consider is running functional tests while the system is under load. The server may be stressed but still capable of handling the load, yet from the client side it looks like the application is throwing a bunch of errors.

Longevity tests are also important. You'll only find out whether your application is eating client-side or server-side memory when you carry out long-running tests. While you run long tests, it's important to monitor system resource usage so you can spot issues even if nothing crashes.
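
During a longevity test, sampling resource usage on a schedule makes a slow memory leak visible even though nothing ever crashes. Here is a minimal sketch using the psutil package; the sampling interval and output file are assumptions:

```python
import csv
import time

import psutil  # assumes the `psutil` package is installed

SAMPLE_INTERVAL_S = 60                  # hypothetical sampling interval
OUTPUT_FILE = "longevity_samples.csv"   # placeholder output path

def sample_forever():
    """Append CPU and memory samples to a CSV while the long test runs."""
    with open(OUTPUT_FILE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "memory_used_mb"])
        while True:
            mem = psutil.virtual_memory()
            writer.writerow([
                time.strftime("%Y-%m-%d %H:%M:%S"),
                psutil.cpu_percent(interval=1),
                (mem.total - mem.available) / (1024 * 1024),
            ])
            f.flush()
            time.sleep(SAMPLE_INTERVAL_S)

if __name__ == "__main__":
    sample_forever()   # run alongside the load test; plot the CSV afterward
```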

8. Analyzing results at the end of long tests

Most short tests are unattended, running as part of a CI cycle. If they fail because of environmental issues or other false positives, the cost isn't significant, especially if you run them on premises. But with long tests, particularly those that run in the cloud, a failure can waste a great deal of time and money. It's a best practice to watch the live monitoring while a long test runs and look for anomalies that may indicate something is wrong with the environment or the scripts. As soon as you find an issue, you can stop the test and fix it. Some tools also offer rich monitoring capabilities that let you correlate different charts of live data, so you can adjust the test while it's running and make it more accurate.
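
Even with someone watching, a simple guard that aborts a runaway test can save cloud hours. Here is a minimal sketch; how the error rate is obtained, and the threshold itself, are assumptions:

```python
import subprocess
import time

ERROR_RATE_LIMIT = 0.10     # hypothetical: abort if more than 10% of requests fail
CHECK_INTERVAL_S = 300      # hypothetical: check every five minutes

def current_error_rate() -> float:
    # Placeholder: in a real setup, read this from your load tool's live
    # results API or by tailing its results file.
    return 0.0

def watch(test_process: subprocess.Popen) -> None:
    """Poll a running load test and stop it early if the error rate explodes."""
    while test_process.poll() is None:          # while the load test is still running
        if current_error_rate() > ERROR_RATE_LIMIT:
            print("Error rate exceeded the limit; stopping the test early.")
            test_process.terminate()            # better than burning cloud hours
            break
        time.sleep(CHECK_INTERVAL_S)
```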

9. Analyzing each test by itself

Analyze each test's results for anomalies and errors, and compare the metrics to the KPIs. However, if you ignore the results of previous test runs, you won't be able to identify a trend in certain behaviors of the application.

You may have a measurement that is below the KPI threshold but is growing steadily from build to build. This indicates that someone is doing development work somewhere in this area, causing performance degradation. If you catch this early, you'll be able to avoid a KPI violation in the future.
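
A simple way to catch this is to keep each build's measurement and flag a steady upward trend even while the value is still under the KPI. The build numbers and measurements below are hypothetical:

```python
# Hypothetical per-build measurements of the same transaction (seconds).
# In practice these would come from your test results store.
history = {
    "build_101": 1.10,
    "build_102": 1.22,
    "build_103": 1.38,
    "build_104": 1.55,
}
KPI_THRESHOLD_S = 2.0

values = list(history.values())
still_under_kpi = all(v <= KPI_THRESHOLD_S for v in values)
steadily_growing = all(b > a for a, b in zip(values, values[1:]))

if still_under_kpi and steadily_growing:
    growth = values[-1] - values[0]
    print(f"Warning: response time grew {growth:.2f}s over {len(values)} builds "
          f"and is trending toward the {KPI_THRESHOLD_S}s KPI.")
```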

10. Ignoring production data

If you already have a production system running, why not use data from production to make your tests track closer to real-world scenarios? You can discover plenty of useful information from production data, such as the distribution of the user population by geography, device, browser, network condition, and so on. You can also learn which sets of users run which business transactions. Some tools even let you create tests and test data from production log files.
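
As a rough sketch of mining production data, a short script can summarize which browsers actually hit the system, and that distribution can then shape your virtual user groups. The script assumes a standard combined-format access log; the file path and the crude classification are placeholders:

```python
from collections import Counter

LOG_FILE = "access.log"   # placeholder path to a combined-format access log

def browser_family(user_agent: str) -> str:
    # Very rough classification, for illustration only.
    for name in ("Chrome", "Firefox", "Safari", "Edge"):
        if name in user_agent:
            return name
    return "Other"

counts = Counter()
with open(LOG_FILE) as f:
    for line in f:
        # In combined log format the user agent is the last quoted field.
        try:
            user_agent = line.rsplit('"', 2)[-2]
        except IndexError:
            continue   # skip malformed lines
        counts[browser_family(user_agent)] += 1

total = sum(counts.values()) or 1
for browser, count in counts.most_common():
    print(f"{browser}: {100 * count / total:.1f}% of production traffic")
```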

Avoiding these mistakes won't guarantee that users have an awesome experience every time they use your app, but it will leave you well positioned to deliver one.
