

3 best practices for container performance testing

David Linthicum Chief Cloud Strategy Officer, Deloitte Consulting
 

As we build more container-based applications, the notion of testing begins to come up as well. As you know, there are many dimensions to testing, including stability, usability, and security.

Performance testing for containers, however, seems to be a bigger nut to crack. Here are the key reasons:

  • Container-enabled applications are typically complex, distributed systems, thus testing becomes complex and distributed as well.
  • In many instances, we port traditional applications to containers without refactoring them. This means existing performance issues may be carried over into the container-enabled version, and they must be discovered and corrected before placing the container-enabled application into production.
  • In many cases, container cluster managers (such as Kubernetes) are leveraged to provide better performance, but the results can vary a great deal depending upon design and implementation.
  • Security and governance are often embedded in container-enabled applications, and thus can affect the performance of those applications, depending upon how security and governance are implemented.

The concept of container performance testing is now front and center, considering how many organizations are moving to containers. The value of containers lies in portability and the ability to scale. However, we’re finding that, just like so many past development approaches and enabling technologies, there are many tradeoffs to consider.

Here are three best practices that will help you work around those tradeoffs as you pursue container performance testing.  

1. Take a service-level approach to performance testing 

Just a quick review:

  • Containers rely on Linux kernel features for resource isolation (CPU, memory, I/O, network, etc.) and do not require starting any virtual machines. Docker originally extended a common container format called Linux Containers (LXC) with a high-level API that provides a lightweight virtualization solution that runs processes in isolation; a minimal sketch of this resource isolation follows this list.
  • While workloads can certainly be placed in virtual machines, containers are a better approach and stand a better chance of success as cloud computing moves from simple to complex, distributed architectures.
  • The ability to provide lightweight platform abstraction within the container, without using virtualization, is much more efficient for creating workload bundles that are transportable from cloud to cloud. 
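
To make the resource-isolation point concrete, here is a minimal sketch using the Docker SDK for Python; the image name and the CPU/memory limits are illustrative assumptions, not recommendations.

```python
# A minimal sketch of kernel-level resource isolation using the Docker SDK
# for Python (pip install docker). The image and limits below are assumptions.
import docker

client = docker.from_env()

# Run a throwaway container with explicit CPU and memory caps; the kernel
# enforces these limits directly, with no virtual machine involved.
output = client.containers.run(
    "python:3.12-slim",                    # assumed base image
    ["python", "-c", "print('isolated workload ran')"],
    mem_limit="256m",                      # hard memory cap
    nano_cpus=500_000_000,                 # roughly half a CPU
    network_disabled=True,                 # this workload needs no network
    remove=True,                           # clean up the container on exit
)
print(output.decode().strip())
```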

Within the world of containers, services, or microservices (we’ll just call them services for the purpose of this article), are the building blocks. Containers typically expose these services/APIs as the way in which you access either the data or the behavior of the application running within a container, or a group of containers.

We recommend that you take a service-oriented approach so you can boil performance testing down to a standard process. Using services as the standard access mechanism to test container behavior, network/communication performance, and data production and consumption means you don’t have to learn your way into every containerized subsystem. Instead, you invoke a single interface that is (hopefully) consistent from container to container and that provides a nice abstraction from the complexities within and outside the containers.
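
To make this concrete, here is a minimal sketch of service-level performance sampling against a single exposed endpoint; the URL and sample count are assumptions for illustration.

```python
# A minimal sketch of service-level latency sampling against one endpoint
# exposed by a container. The URL and sample count are assumed values.
import statistics
import time

import requests

SERVICE_URL = "http://localhost:8080/api/orders"  # hypothetical container-exposed service
SAMPLES = 200

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    response = requests.get(SERVICE_URL, timeout=5)
    response.raise_for_status()
    latencies_ms.append((time.perf_counter() - start) * 1000)

# Percentile cut points: index 49 is the median, index 94 is the 95th percentile.
cuts = statistics.quantiles(latencies_ms, n=100)
print(f"median: {cuts[49]:.1f} ms, p95: {cuts[94]:.1f} ms")
```

Because the only thing the test touches is the service interface, the same script works no matter what runs inside the container.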

However, this isn’t going to be right for every situation. The way in which you design and deploy containerized applications will largely determine which approach you take to container performance testing. That said, the service-based approach to container testing should work for most containerized applications.

Also, keep in mind that services are an architecture as well as a mechanism: an architectural pattern in which complex applications are composed of small, independent processes that communicate with one another using language-agnostic APIs. This is service-oriented computing at its essence: decomposing a legacy application down to its functional primitives and building it as sets of services that can be leveraged by other applications, or by the application itself.

2. Test services independently

Services are not complete applications or systems, and shouldn’t be performance-tested as if they were. They are small parts of an application or containerized application; they aren’t even subsystems, but small parts of subsystems. Thus, you need to test them with a high degree of independence, meaning that the services can both function properly by themselves and function as part of a cohesive system. If a service tests poorly for performance, it’s likely to slow down every system to which that service is attached. One service within a container can be called by many other containers or applications, which means a poorly performing service can slow down anything that leverages it.

It’s easier to fix the performance issues of a single service exposed by a container than it is to fix the entire application. This fine-grained approach means you test more, and smaller, components (services), which lets you isolate and fix problems. It should also save time, because fixing a performance problem in a single service also fixes it for every other service, container, and application that calls that service.

Services should be performance-tested with a high degree of autonomy. They should execute without dependencies, if at all possible, and be tested as independent units of code, even though they will be composed into systems that use many different design patterns. While a service can’t be all things to all containers, it’s important to spend time understanding its foreseeable uses and make sure those are built into the performance test cases.
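
As a sketch of what that independence looks like in practice, the snippet below exercises a single service under concurrent load and fails if it misses an assumed latency budget; the endpoint, concurrency level, and budget are all illustrative.

```python
# A minimal sketch of performance-testing one service in isolation under
# concurrent load. The endpoint, concurrency, and budget are assumed values.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

SERVICE_URL = "http://localhost:8080/api/customers/validate"  # hypothetical service
CONCURRENCY = 20
REQUESTS_PER_WORKER = 25
P95_BUDGET_MS = 250  # assumed latency budget for this single service

def worker() -> list[float]:
    samples = []
    for _ in range(REQUESTS_PER_WORKER):
        start = time.perf_counter()
        requests.post(SERVICE_URL, json={"customerId": "42"}, timeout=5)
        samples.append((time.perf_counter() - start) * 1000)
    return samples

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    futures = [pool.submit(worker) for _ in range(CONCURRENCY)]
    latencies = [ms for future in futures for ms in future.result()]

p95 = statistics.quantiles(latencies, n=100)[94]
print(f"p95 under load: {p95:.1f} ms (budget {P95_BUDGET_MS} ms)")
assert p95 <= P95_BUDGET_MS, "service misses its latency budget in isolation"
```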

3. Right-size your container performance testing

Don’t make services too fine-grained or too coarse-grained. Focus on the correct granularity for the purpose and use within the container, because the issues related to testing here are mostly about performance. Services that are too fine-grained tend to bog down due to the communication overhead of dealing with so many services. Services that are too coarse-grained don’t provide enough autonomy to support their reuse. You need to work with the service designer on this one.
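
One way to see the granularity trade-off is to compare many fine-grained calls against one coarser aggregate call, as in the sketch below; both endpoints are hypothetical.

```python
# A minimal sketch of measuring the cost of chattiness: 50 fine-grained calls
# versus one coarse-grained aggregate call. Both endpoints are hypothetical.
import time

import requests

FINE_URL = "http://localhost:8080/api/line-items/{item_id}"   # assumed fine-grained service
COARSE_URL = "http://localhost:8080/api/orders/1001/summary"  # assumed coarse-grained service

start = time.perf_counter()
for item_id in range(1, 51):
    requests.get(FINE_URL.format(item_id=item_id), timeout=5)
fine_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
requests.get(COARSE_URL, timeout=5)
coarse_ms = (time.perf_counter() - start) * 1000

print(f"50 fine-grained calls: {fine_ms:.0f} ms, one aggregate call: {coarse_ms:.0f} ms")
print(f"communication overhead of the chatty design: {fine_ms - coarse_ms:.0f} ms")
```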

As we noted at the start of this discussion, abstraction allows access to services from multiple, simultaneous consumers while hiding technology details from the service developer. Abstraction is required to get around the many protocols, data-access layers, and even security mechanisms that may be in place, hiding these very different technologies behind a single layer. Abstraction is effectively tested by doing, meaning implementing instances and then testing the results. Regression and integration testing, from the highest to the lowest layers of abstraction, are the best approach.

When we build or design services, we need to test for aggregation. Many services will become parts of other services, and thus of composite services leveraged by an application, and you must consider that in their design. For instance, a customer validation service may be part of a customer processing service, which in turn is part of the inventory control system. Aggregations are clusters of services bound together to create a solution, and they should be tested holistically through integration testing procedures, perhaps in container clusters.
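
An integration-style timing pass over such an aggregation might look like the sketch below, which mirrors the customer-validation example; all of the endpoints are assumptions.

```python
# A minimal sketch of end-to-end timing across an aggregation of services,
# mirroring the customer-validation example above. All endpoints are assumed.
import time

import requests

COMPOSITE_STEPS = [
    ("customer validation", "http://localhost:8080/api/customers/42/validate"),
    ("customer processing", "http://localhost:8081/api/customers/42/process"),
    ("inventory control",   "http://localhost:8082/api/inventory/reserve"),
]

total_ms = 0.0
for name, url in COMPOSITE_STEPS:
    start = time.perf_counter()
    requests.post(url, json={"orderId": "1001"}, timeout=5)
    step_ms = (time.perf_counter() - start) * 1000
    total_ms += step_ms
    print(f"{name}: {step_ms:.1f} ms")  # per-hop breakdown

print(f"end-to-end composite latency: {total_ms:.1f} ms")
```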

So, where to from here?

When considering performance testing services within containers, we have to come to the realization that:

  • It’s almost always black-box testing. We have to approach remote services as assets that we can’t change or view beyond their core functionality. Thus, we’re not looking into the containers or the code, for the most part. We’re just running benchmarks and observing performance.
  • We often share the container-based service with others, typically other organizations and even other enterprises. We have to consider that fact when thinking about performance testing, security testing, and network latency.
  • We have to consider service-level agreements (SLAs) within the context of the testing we will do. How does the service live up to these SLAs, what are the issues, if any, and how do we return results that make it easiest to resolve the issues? (A minimal black-box SLA check follows this list.)
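
Here is a minimal black-box sketch along those lines: it benchmarks a remote service without looking inside it and reports whether the results meet assumed SLA thresholds; the URL and the SLA numbers are illustrative.

```python
# A minimal black-box sketch: benchmark a remote, shared service and check the
# results against assumed SLA thresholds (the URL and numbers are illustrative).
import statistics
import sys
import time

import requests

SERVICE_URL = "https://partner.example.com/api/quote"  # hypothetical shared service
SAMPLES = 100
SLA_P95_MS = 500        # assumed SLA: 95% of calls complete under 500 ms
SLA_ERROR_RATE = 0.01   # assumed SLA: fewer than 1% of calls fail

latencies, errors = [], 0
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        if requests.get(SERVICE_URL, timeout=5).status_code >= 500:
            errors += 1
    except requests.RequestException:
        errors += 1
    latencies.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(latencies, n=100)[94]
error_rate = errors / SAMPLES
print(f"p95: {p95:.0f} ms (SLA {SLA_P95_MS} ms), errors: {error_rate:.1%} (SLA {SLA_ERROR_RATE:.0%})")
sys.exit(0 if p95 <= SLA_P95_MS and error_rate <= SLA_ERROR_RATE else 1)
```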

The best way to test container performance is to take a service-based approach. You can leverage both new and more traditional service-oriented testing tools; there are hundreds on the market. Finally, you’re leveraging services to insulate you from the underlying complexities of testing each component (data access, security, monitoring, etc.).

Whether you leverage an existing service-based performance testing tool or take a more DIY approach, you’ll do performance testing through benchmarking. Benchmarking lets you test the performance of the services, and thus the containers, by seeing how long it takes the service or containers to complete a certain task. The task could be transactional, such as updating a database over and over again, or it could be long-running, such as creating a weekly report that may take hours to run.
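
A transactional benchmark of that kind can be as simple as the sketch below, which repeats one small update for a fixed period and reports throughput; the endpoint and duration are assumptions.

```python
# A minimal sketch of a transactional benchmark: repeat one small update for a
# fixed duration and report throughput. The endpoint and duration are assumed.
import time

import requests

UPDATE_URL = "http://localhost:8080/api/inventory/items/42"  # hypothetical update endpoint
DURATION_SECONDS = 30

completed = 0
deadline = time.perf_counter() + DURATION_SECONDS
while time.perf_counter() < deadline:
    requests.put(UPDATE_URL, json={"quantity": completed % 100}, timeout=5)
    completed += 1

print(f"{completed} transactions in {DURATION_SECONDS}s "
      f"({completed / DURATION_SECONDS:.1f} tx/sec)")
```

A long-running benchmark works the same way, except that you time one large task (the weekly report, for example) end to end rather than counting repetitions.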

When doing service-based benchmarking, we’ll work at the service, container, and composite levels to understand how the services perform by themselves, or how they perform as groups of services or groups of containers. The use of container cluster managers means that we typically need to benchmark the services as exposed by the containers, as well as the container cluster. We can call this "performance testing containers in the narrow" (testing a single service) or "performance testing containers in the wide" (testing clusters of containers).

As time goes on, the number of containers, and of services that run inside them, will increase dramatically. We’ll also see new kinds of domains, with thousands upon thousands of container-based services. The world of containers is here. We need to prepare for this complexity with new approaches to architecture, security, and testing; loosely coupled architecture; and better approaches to distributed performance testing. Our containers journey has just begun.
