

7 container design patterns you need to know

Christian Meléndez, Cloud/DevOps Engineer, AutoWeb

Containers are popular right now because they help move applications forward in a consistent, repeatable, and predictable manner, reducing labor and making app management simpler. 

But how do you know if you're using containers properly? That's where container design patterns come into play. Here's what you need to know about container design patterns and why you need them, along with seven common design patterns to consider and how to choose the right one for your needs.

Design patterns exist to help you solve common problems with containers. They also provide a common language when communicating about the architecture of the applications. That way, everyone can understand what's going on.

Design patterns ultimately help make containers reusable; users of those containers give each one its own purpose. There are times when I don't need a complex configuration to test locally, but at the same time I don't want to change the architecture so much that I lose consistency when testing. That's why having a baseline is helpful: it lets you reuse containers and makes things simpler to test.

What's great about these patterns is that you can combine them to make applications more reliable and fault-tolerant. Here are seven your team should consider.

1. The single-container design pattern

Employing the single-container pattern means just putting your application into a container. It's how you usually start your container journey. But it's important to keep in mind that this pattern is all about simplicity, meaning that the container must have only one responsibility. That means it's an anti-pattern to have a web server and a log processor in the same container.

Containers are commonly used for web apps, where you expose an HTTP endpoint. But they can be used for many different things.

In Docker, you have the ability to change the behavior of a container at runtime, thanks to the CMD and ENTRYPOINT instructions. So I'm not limited to using containers for HTTP services. I can also use them for any bash script that accepts some parameters at runtime.

By letting containers change behavior at runtime, you can create a base container that can be reused in different contexts. So you'd use the single-container pattern to expose an HTTP service or to reuse a script for which you don't want to worry about its dependencies. And it would be a good choice, as long as you keep in mind that containers should solve only one problem.
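As a small illustration, here's a minimal Docker Compose sketch of that idea (the image name, port, script path, and arguments are all hypothetical): the same single-responsibility image runs once as an HTTP service and once as a one-off script, simply by overriding the image's ENTRYPOINT and CMD at runtime.

```yaml
# docker-compose.yml -- one image, two runtime behaviors
# (image name, port, script path, and arguments are hypothetical)
services:
  web:
    image: myorg/mytool:latest        # the image's default CMD starts the HTTP server
    ports:
      - "8080:8080"

  one-off-task:
    image: myorg/mytool:latest        # same image, reused as a script runner
    entrypoint: ["/app/run-task.sh"]  # overrides the image's ENTRYPOINT
    command: ["--dry-run", "--verbose"]  # parameters passed at runtime
```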

2. The sidecar design pattern

So containers should have only one responsibility. But what about the use case I mentioned before, where you have a web server with a log processor? That's exactly the kind of problem the sidecar pattern aims to solve.

Using the sidecar pattern means extending the behavior of a container. In our example of the log processor for the web server, the log processor could be a different container reading logs from the web server.

The web server will need to write those logs to a volume. In Docker, volumes can be shared with other containers. It's preferable to have this separation because it makes packaging, deployment, resiliency, and reuse easy—and also because not all containers will need or use the same resources.
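Here's a minimal Docker Compose sketch of that separation (the log-processor image and the paths are placeholders): the web server writes its logs to a shared volume, and the sidecar container reads from the same volume.

```yaml
# docker-compose.yml -- sidecar sketch: a log processor extends a web server
# (the log-processor image and the paths are placeholders)
services:
  web:
    image: nginx:alpine
    volumes:
      - web-logs:/var/log/nginx          # the web server writes its logs here

  log-processor:
    image: myorg/log-processor:latest    # sidecar with its own single responsibility
    volumes:
      - web-logs:/logs:ro                # read-only view of the same volume

volumes:
  web-logs:
```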

With this pattern, you're decoupling your system in different parts. Each part has its own responsibilities, and each solves a different problem. You're eating the elephant in small chunks.

3. The ambassador design pattern

If you're using the ambassador pattern, it means you have a proxy for other parts of the system. It offloads responsibilities such as load balancing, retries, and monitoring to something else. A container should have one responsibility and be as simple as possible. For a container, communication with the outside world is simply an endpoint; it won't know (or care) whether what's out there is a set of servers or just one server.

This is the pattern you'd use when you want microservices to interact with one another. They don't know exactly where other microservices are; they just know they can find them by name. And for that, they need service discovery. This discovery could happen at the DNS level, or at the application level, where microservices register themselves. Service discovery is in charge of keeping only healthy services in the pool.

In Docker, this is possible because containers can live on the same virtual network. When you link containers with Docker Compose, Docker handles name resolution for you (historically by modifying the container's hosts file, and more recently through its embedded DNS server), so you call a service by name, not by IP address. Docker also supports environment variables to inject values, such as the subdomain of a proxy server, that you can change depending on the environment.
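As a rough Docker Compose sketch (the images and the environment variable names are assumptions), the application only ever talks to the ambassador by name, and the ambassador decides where the traffic actually goes.

```yaml
# docker-compose.yml -- ambassador sketch: the app only knows the name "db-ambassador"
# (images and environment variable names are hypothetical)
services:
  app:
    image: myorg/app:latest
    environment:
      - DATABASE_HOST=db-ambassador      # a name on the Compose network, not a real address

  db-ambassador:
    image: myorg/tcp-proxy:latest        # handles routing, retries, and failover
    environment:
      - BACKEND_HOSTS=db-1.internal,db-2.internal   # can differ per environment
```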

4. The adapter design pattern

Using the adapter pattern means keeping communication between containers consistent. Having a standard way of communicating via a set of contracts helps you to always make requests in the same way, and lets you expect the same response format. It also helps you easily replace an existing container without the consumer or client noticing because the contract won't change—just the implementation changes. You can also reuse this container somewhere else without having to worry about managing other application logs.

Analyzing logs from different sources can be a pain if you don't have a standard format. When you have a container that works as an adapter, it will receive raw logs. It will standardize and store data in a centralized place. The next time you need to consume the logs, you'll have a consistent format, and so it will be easier to understand, correlate, and analyze logs.
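A rough Docker Compose sketch of that log example could look like the following (the images, paths, and the OUTPUT_ENDPOINT variable are placeholders): the application writes logs in its own format, and the adapter container normalizes them and ships them to a central store.

```yaml
# docker-compose.yml -- adapter sketch: raw logs in, one standard format out
# (images, paths, and OUTPUT_ENDPOINT are placeholders)
services:
  legacy-app:
    image: myorg/legacy-app:latest
    volumes:
      - raw-logs:/var/log/app            # the app logs in its own format

  log-adapter:
    image: myorg/log-adapter:latest      # reads raw logs, emits the agreed-upon format
    volumes:
      - raw-logs:/input:ro
    environment:
      - OUTPUT_ENDPOINT=http://log-store:9000   # hypothetical central log store

volumes:
  raw-logs:
```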

The main premise here is that the adapter pattern lets containers reuse a single solution to a problem that's common across the system.

5. The leader election design pattern

If you're using the leader election pattern, it means you're providing redundancy for consumers of containers that need highly available systems. You can see this pattern in tools such as Elasticsearch, the distributed search and analytics engine. Elasticsearch's architecture consists of more than one node, and each node holds chunks of data (shards) for replication and redundancy purposes.

When the cluster starts, one node is elected as the leader. If that node goes down, the remaining nodes elect a new leader based on certain criteria, keeping the cluster healthy.

So how is this related to containers?

Well, you can spin up a bunch of containers that communicate with one another without needing extra service discovery. If the leader fails, the remaining Elasticsearch containers elect a new one, and you can spin up a replacement node in just seconds, either manually or automatically, by using an orchestrator such as Kubernetes. Doing the same thing with virtual machines or physical servers could take minutes or even hours.
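As an illustration, here's a stripped-down Docker Compose sketch of a three-node Elasticsearch cluster that elects its own leader. It shows only the discovery-related settings; a real deployment needs more configuration (memory limits, persistent volumes, and so on).

```yaml
# docker-compose.yml -- leader election sketch: three Elasticsearch nodes
# elect a master among themselves (minimal, illustrative settings only)
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - node.name=es01
      - cluster.name=demo-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - node.name=es02
      - cluster.name=demo-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - node.name=es03
      - cluster.name=demo-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
```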

6. The work queue design pattern

The work queue pattern dictates that you split a big task into smaller tasks to reduce running time. You can think of this as the producer-consumer problem. Say a user asks you to transform 1 million records; that will take a long time. To speed up the process, you'd employ the work queue pattern and split the data into smaller chunks of, say, 100 records each. The code that does the processing can be packed into a container, and then you can spin up 10 of those containers at the same time.
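One way to sketch that setup with Docker Compose (the queue technology and the producer/worker images are assumptions): a producer fills a queue with 100-record chunks, and an identical worker image is scaled out to consume them in parallel.

```yaml
# docker-compose.yml -- work queue sketch: one producer, many identical workers
# (images other than redis, and the QUEUE_HOST variable, are hypothetical)
services:
  queue:
    image: redis:7-alpine                 # the shared work queue

  producer:
    image: myorg/record-splitter:latest   # splits the 1 million records into chunks
    environment:
      - QUEUE_HOST=queue

  worker:
    image: myorg/record-worker:latest     # processes one chunk at a time
    environment:
      - QUEUE_HOST=queue
    deploy:
      replicas: 10                        # run 10 workers in parallel
```

You can also scale the workers at launch time with `docker compose up --scale worker=10`.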

Containers are really useful for batch processes. You might need to worry about resources being able to support concurrency, but if you don't, there are tools or services such as AWS Batch that help you manage resources. You just need to provide a container and launch a set of execution jobs.

Containers will help you make the code reusable and portable. But coordination is a problem better solved by container orchestrators.

7. The scatter/gather design pattern

The scatter/gather pattern is quite similar to the work queue pattern in the sense that it splits a big task into smaller ones. But there's one difference: the user expects a single response back right away. So instead of launching a bunch of tasks and forgetting about the response for a moment, with this pattern you need to combine all the small responses into just one. A really good example of this pattern is the MapReduce algorithm.

To implement this pattern, you need two containers. The first does the partial computation and returns the small chunks it produces (the map step), usually in no particular order. That container then sends a request to the second one, which is in charge of merging all the parts and returning data that makes sense to the user.
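Here's a minimal Docker Compose sketch of those two roles (all image names, the port, and the MERGER_URL variable are hypothetical): several identical map containers compute partial results and send them to a single merger container, which assembles the final response.

```yaml
# docker-compose.yml -- scatter/gather sketch: mappers produce partial results,
# a merger combines them into one response (all names are hypothetical)
services:
  mapper:
    image: myorg/map-service:latest      # computes a partial result for its chunk
    environment:
      - MERGER_URL=http://merger:9000    # where to send the partial result
    deploy:
      replicas: 3                        # scatter the work across several mappers

  merger:
    image: myorg/merge-service:latest    # combines partial results into one answer
    ports:
      - "9000:9000"
```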

With this pattern, you can focus on developing each part independently, and you can spin up and use as many containers as needed.

Which design pattern to choose?

Which pattern you should pick out of these seven depends on several factors. There's no silver bullet. Each design pattern has its own purpose and solves a different type of problem. Actually, you might want to apply more than one at the same time in the same system.

These container design patterns help you develop the mindset you need to understand distributed systems. They give you the ability to reuse code and to build fault-tolerant, highly available architectures with optimized resources.

I've just scratched the surface of each pattern. Hopefully, you've learned enough to know which ones may be right for your application. I encourage you to further explore the patterns you think will best target the problems you face.
