3 reasons why you should always run microservices apps in containers

Bernard Golden, CEO, Navica
 

Microservices are the emerging application platform: this is the architecture that will serve as the basis for many applications over the next 10 years. There's good reason for this: the advantages associated with microservices, such as support for agile development and independently deployable artifacts, plus an architecture that lets businesses develop and roll out new digital offerings faster, make them the obvious choice.

Moving to a new application architecture means making some changes. You'll need to change existing practices, as well as many of the surrounding capabilities needed to operate a microservices-based application, such as monitoring, moving state off of the execution environment, and so on. But the biggest unanswered question is this: What execution environment should your microservices applications use? That is, in what kind of environment should they run? 

Runtime options

Fifteen years ago, your only option would have been to install and run microservices on a physical server running an operating system. But that approach would be incredibly wasteful today, given the enormous processing power servers now offer. To get around this, you might consider running multiple services on a single operating system instance, but that runs the risk of having conflicting library versions and application components, never mind the fact that one microservice failure could affect the availability of others. Running your microservice on bare metal is not an attractive option.

The next obvious choice is to divide up a physical server into many virtual servers (a.k.a. virtual machines, or VMs), allowing multiple execution environments to reside on a single server. Virtualization is a mature, well-established technology, most enterprises have already invested in virtual infrastructure, and most cloud providers use VMs as the basis for their infrastructure-as-a-service (IaaS) offerings. But it has serious limitations when it comes to running microservices.

The best choice for running a microservices application architecture is application containers. Containers encapsulate a lightweight runtime environment for your application, presenting a consistent software environment that can follow the application from the developer's desktop to testing to final production deployment, and you can run containers on physical or virtual machines. 

Here's what you get when you move to containers as your foundation: 

Finer-grained execution environments

While VMs make it easy to partition execution environments, using individual VMs for each microservice exacts a heavy cost, because each VM requires its own operating system. Under this model, every application component that needs isolation must be placed in its own VM, each carrying that operating system overhead.

While it is technically possible to run multiple application components within a single VM, this introduces the risk that components might conflict with one another, leading to application problems. Loading multiple services in a single VM raises the same problem IT might experience when running multiple apps on a single physical server. Avoiding conflicting library or application components and failure cascades is the reason organizations adopted server virtualization in the first place.

Using VMs also imposes a large performance penalty. Every virtual machine, which must run its own execution environment and copy of the operating system, uses up server processing cycles that you otherwise could use to run the applications.

Containers, by contrast, perform execution isolation at the operating system level. Here, a single operating system instance can support multiple containers, each running within its own, separate execution environment. By running multiple components on a single operating system you reduce overhead, freeing up processing power for your application components.

Just from an efficiency perspective, containers are a far better choice for a microservices architecture than are VMs.

Better isolation allows for component cohabitation

Because containers enable multiple execution environments to exist on a single operating system instance, multiple application components can coexist in a single VM. In addition, with Linux you can use control groups (cgroups) to isolate the complete execution environment for a particular application code set, ensuring that each has a private environment and cannot affect the operation of other applications.
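
To make this concrete, here is a minimal sketch of the kind of bookkeeping a container runtime performs with cgroups. It assumes a Linux host with cgroups v2 mounted at /sys/fs/cgroup and root privileges; the group name and the limits are invented for illustration.

```python
import os

# Minimal cgroups v2 sketch of what a container runtime does under the
# hood. Assumes a Linux host with cgroup2 mounted at /sys/fs/cgroup and
# root privileges; the group name and limits are invented for illustration.
CGROUP = "/sys/fs/cgroup/svc-payments"

os.makedirs(CGROUP, exist_ok=True)

# Hard memory ceiling: the kernel enforces 256 MiB on everything in the group.
with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write(str(256 * 1024 * 1024))

# CPU quota: 50,000us of CPU time per 100,000us period, i.e., half a core.
with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
    f.write("50000 100000")

# Enroll this process; children it spawns inherit the same limits,
# so a runaway service cannot starve its neighbors.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))
```

Container runtimes wrap this bookkeeping (plus namespaces for filesystem, network, and process isolation) behind a single command, which is what gives each container its private environment.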

This ability to isolate frees developers from the need to segregate application code into separate VMs, retrieves the processing power previously devoted to those VMs, and offers it to the application code.

The net result: You get more application processing from a given piece of hardware. The implications of this can be subtle, because application characteristics vary. For example, some require lots of processing power, while others generate lots of network traffic. By being clever with workload placement, container users can maximize utilization levels for all of a server's resources, rather than just loading it up with several processor-hogging applications that leave some network capacity unused. 

Google engineers did exactly this and have described how the company's Borg container scheduler places workloads to extract maximum use from its servers.
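
The packing idea is easy to see in miniature. Below is a toy sketch, nothing like Borg's real algorithm and with invented numbers, of first-fit placement across two resource dimensions, pairing CPU-heavy services with network-heavy ones so neither resource sits idle.

```python
# Toy multi-resource placement, loosely in the spirit of what schedulers
# like Borg or Kubernetes do. All numbers are invented; real schedulers
# weigh many more dimensions (memory, disk, affinity, priority, preemption).

servers = [{"cpu": 8.0, "net": 1000.0} for _ in range(3)]  # free capacity

services = [
    {"name": "transcoder",  "cpu": 6.0, "net": 50.0},   # CPU-hungry
    {"name": "api-gateway", "cpu": 1.0, "net": 800.0},  # network-hungry
    {"name": "batch-job",   "cpu": 5.0, "net": 20.0},
    {"name": "media-proxy", "cpu": 0.5, "net": 700.0},
]

def place(svc):
    """First-fit: put svc on the first server with room in both dimensions."""
    for i, srv in enumerate(servers):
        if srv["cpu"] >= svc["cpu"] and srv["net"] >= svc["net"]:
            srv["cpu"] -= svc["cpu"]
            srv["net"] -= svc["net"]
            return i
    raise RuntimeError(f"no server can fit {svc['name']}")

for svc in services:
    print(svc["name"], "-> server", place(svc))
```

Pairing a CPU-hungry service with a network-hungry one on the same box fills both dimensions, where a naive one-service-per-server layout would leave large amounts of each unused.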

With this type of isolation, it's now possible to place multiple microservices on a single server. The cgroup functionality ensures that no service can interfere with another, while container efficiency allows for higher server utilization rates.

There is, however, one caveat: You must run microservices in a redundant configuration to increase resiliency, and make sure they do not end up in side-by-side containers on the same physical server, because that defeats the purpose of redundancy. While it’s possible to manage container placement by hand to prevent colocation, it's much better to use a container management system such as Kubernetes, which lets you use policies to dictate container placement.
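
With Kubernetes, for example, that anti-colocation rule is expressed declaratively as pod anti-affinity. Here is a sketch using the official Kubernetes Python client; the service name, image, and labels are invented for illustration.

```python
from kubernetes import client

# Sketch of a pod anti-affinity rule: replicas labeled app=payments must
# never share a node. The name, image, and labels are invented; in practice
# this goes inside a Deployment's pod template. Requires the official
# kubernetes Python client (pip install kubernetes).
anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "payments"}
                ),
                # At most one matching replica per distinct value of this
                # node label, i.e., per physical or virtual host.
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)

pod_spec = client.V1PodSpec(
    affinity=anti_affinity,
    containers=[
        client.V1Container(name="payments", image="example/payments:1.0")
    ],
)
```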

Faster initialization and execution

Containers enable finer-grained execution environments and permit application isolation. Both are great enablers for microservices applications, but what really makes containers a natural fit is their lightweight nature.

While virtualization provides clear benefits, there’s no denying that VMs, at 4GB or more in size, are large. I’ve already discussed the penalty this exacts on utilization, but it also means that VMs take a long time to get up and running. The time to bring all those bits off of disk and format them into an execution environment can be measured in minutes.

By their very nature, microservices-based applications tend to experience highly erratic workloads, and a virtualization-based microservices application can take 10 minutes or more to react to a traffic spike. During that time, users may find the application slow or completely unavailable. That's definitely not a desirable situation.

You can address this issue by pre-initializing VMs and having them standing at the ready. While not a bad strategy from a performance standpoint, it does waste resources as your standby VMs sit idly, using computing resources but not doing any useful work. You'll also need to have good insight into likely traffic patterns so that the right number of standby VMs are available. Unfortunately, this approach fails in the face of unexpected heavy volumes that can occur in today’s Internet world.

Containers, by contrast, are much smaller — perhaps one tenth or one hundredth the size of a virtual machine. And, because they do not require the operating system spin-up time associated with a virtual machine, containers are more efficient at initialization. Overall, containers start in seconds, or even milliseconds in some cases. That's much faster than VMs.
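
If you have Docker installed, you can get a feel for this yourself. A rough timing sketch (the alpine image is an arbitrary choice, and the first run will also include image-pull time):

```python
import subprocess
import time

# Rough cold-start timing for a container. Assumes Docker is installed;
# the alpine image is an arbitrary choice, and the first run will also
# include image-pull time.
start = time.perf_counter()
subprocess.run(
    ["docker", "run", "--rm", "alpine", "true"],
    check=True,
    capture_output=True,
)
print(f"container start-to-exit: {time.perf_counter() - start:.2f}s")
```

Even including the CLI's own overhead, the whole create-run-destroy cycle typically finishes in about a second, versus minutes to boot a VM.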

That's why, from a performance perspective, containers are a much better execution foundation for microservices architectures. Their quick instantiation maps much better to the erratic workload characteristics associated with microservices. It also makes them a better match for emerging policy-based microservices operations environments, since application topology decisions driven by policy (e.g., always have three instances of each microservice) can be more easily implemented via quick-starting, container-based application components.

Best choice overall

A microservices architecture does not dictate the use of containers. Netflix, for example, runs its entire microservices-based offering on Amazon Web Services, using AWS instances. But most organizations that move to microservices architectures will find containers a more congenial way to implement their applications.

Containers' finer-grained execution environments and ability to accommodate colocated application components in the same operating system instance will help you achieve better server utilization rates. And if your organization is running microservices applications in cloud environments, these characteristics will reduce your bill.

Finally, container-based microservices applications in production environments can better respond to erratic workloads. As companies move more of their business to digital offerings, shorter container initialization times can help increase user satisfaction and improve the financial performance of revenue-generating applications.

On your microservices journey, have you considered going all-in on containers as the foundation of your application execution strategy?
