
The state of container lifecycle management: Time for reinvention

David Linthicum, Chief Cloud Strategy Officer, Deloitte Consulting

The core question for enterprises that are building applications using containers is: How should container lifecycles be managed? There are no easy answers. Few best practices, or even tools that push specific methods and processes, exist.

A basic mistake that enterprises make is to manage containers as if they were virtual machines, which they are not. Another is to manage them as traditional application workloads, which they also are not.

Containers are different from traditional or modern application workloads. They are self-contained and portable. While they can run by themselves, they can also run in clusters by using special cluster managers such as Kubernetes.

They can be distributed, with specific sets of containers taking on specific roles in an application. Thus, there can be dependencies between them that must be tracked and considered during testing, deployment, and IT operations.

What does this all mean? Containers are different enough that they deserve special consideration in how they are managed.

Reinventing the lifecycle

First, you need to understand the key phases of the container lifecycle and how to address them. Many of these phases have yet to be well defined. Containers break new ground, and that groundbreaking nature should be considered when setting up your container lifecycle processes, selecting tools, and automating what you can automate.

Container development

Containers are rarely built from scratch. Most container developers acquire capabilities from existing container image repositories, whether public (such as Docker Hub) or private.

An important consideration is that containers use layered images. You can derive an application image from a base image, which may itself be derived from another base image. This means you can take the functions of one application delivered as an image and build new functions on top of it. Thus, you need to understand how to maintain the dependencies on the base images you derive from, as well as the new functionality you add.
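
To make the layering concrete, here is a minimal sketch that compares a base image's layers with those of an image derived from it, using the Docker SDK for Python (the `docker` package) against a local Docker daemon. The `mycompany/orders-api` image name is a hypothetical stand-in for your own derived image.

```python
# Minimal sketch: inspect how a derived image's layers relate to its base,
# using the Docker SDK for Python (pip install docker). Image names are illustrative.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Pull the public base image; look up a hypothetical locally built derived image.
base = client.images.pull("python", tag="3.12-slim")
derived = client.images.get("mycompany/orders-api:1.4.2")  # hypothetical derived image

# Every layer of the base should appear in the derived image's history; those
# shared layers are the dependency you must track across rebuilds of the base.
base_layers = {e["Id"] for e in base.history() if e["Id"] != "<missing>"}
derived_layers = {e["Id"] for e in derived.history() if e["Id"] != "<missing>"}

print("Layers shared with the base:", len(base_layers & derived_layers))
print("Layers added on top of the base:", len(derived_layers - base_layers))
```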

The assumption should be that you're rarely starting an application from scratch; those charged with container development are looking more and more at using OPC (other people's code), which is much more productive. However, it does bring up challenges with the lifecycle of the container, considering that you're now tracking many dependencies.

This is a clear problem that must be solved. If there is a step where developers find and leverage prebuilt container images, you need to decide where you will store them and from where you will access them, to simplify the job of tracking them.

You can leverage a private repository to store a copy of the base image you’re leveraging, as well as your extensions to the image. The advantage is that you retain an original copy of the image on which your customized version of that image is based. Moreover, you store the customized image back to the same private repository.
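
The following is a minimal sketch of that mirroring step with the Docker SDK for Python: pull the public base image, retag it into a private registry, and push the copy. The registry host and repository names are assumptions for illustration only.

```python
# Minimal sketch: retain a private copy of the public base image you depend on.
import docker

client = docker.from_env()

# 1. Pull the public base image you are deriving from.
base = client.images.pull("nginx", tag="1.27")

# 2. Retag it into your private registry so you keep the exact bits you built on.
private_repo = "registry.internal.example.com/base/nginx"  # hypothetical registry
base.tag(private_repo, tag="1.27")

# 3. Push the copy; your customized images can later be pushed to the same registry.
for line in client.images.push(private_repo, tag="1.27", stream=True, decode=True):
    print(line)
```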

However, while this protects you from changes to the public image that you now depend upon—changes that could break your container-based application—you won’t enjoy the benefits of upgrades and bug fixes to the original image. 

The answer is to use a container-aware configuration management system. While it is possible to adapt traditional tools to container image management, there are limitations. For now, the best practice is to work with your current configuration management provider to understand how existing toolsets can be best used to manage container-based applications. But in the next year or so, be prepared to look outside of your existing DevOps tools vendor pool for better tooling. 

Container testing

There are many approaches to testing containers; you can find a complete description of these approaches in this post by Terra Nullius. It’s important to remember, though, that automation of container testing is key, no matter what approach you leverage.

With containers, you can place test automation inside or outside the container. You'll want repeatable test environments, where you can run the same test using the same automated testing tools. This means that testing is part of the continuous integration, staging, and deployment tooling. In this way, testing is systemic, from the point a developer is ready to test to the time the application goes into production.

With the inside-the-container approach to testing containers, the testing tools are a part of the image. Thus, they are automatically configured.

With the outside-the-container approach, you can create special integration test containers. These will contain only testing tools and test artifacts such as test scripts, test data, and test environment configuration. While these are containers themselves, they don’t become part of the container image going into production, and thus they don’t affect the image size or performance. However, they are not auto-configured and need to be set up based on the testing tasks at hand.
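
Here is a minimal sketch of the outside-the-container approach using the Docker SDK for Python: the application runs in one container, and a throwaway test container runs integration tests against it over a shared network. The image names and the TARGET_URL environment variable are hypothetical.

```python
# Minimal sketch: run integration tests from a separate, throwaway test container.
import docker

client = docker.from_env()
network = client.networks.create("integration-tests", driver="bridge")

app = client.containers.run(
    "mycompany/orders-api:1.4.2",   # hypothetical application image
    name="orders-api",
    network="integration-tests",
    detach=True,
)

try:
    # The test image holds only test tools, scripts, and test data; it never
    # ships to production, so it adds nothing to the production image.
    output = client.containers.run(
        "mycompany/orders-api-tests:1.4.2",   # hypothetical test image
        network="integration-tests",
        environment={"TARGET_URL": "http://orders-api:8080"},
        remove=True,                          # throwaway by design
    )
    print(output.decode())                    # test runner output
finally:
    app.remove(force=True)
    network.remove()
```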

The approach and tooling that you select for your container development need to be adapted to the objectives of the applications themselves. Keep in mind that you’re holistically testing services and microservices as well as containers.

Container security enablement

The basic idea of container security is that you first make sure you can trust the container; you reduce the attack surface and implement general management of vulnerabilities. The core way you do this is to integrate security checks within the image and within your testing tools and processes, as previously described.

Vulnerabilities are managed by tracking all images from time of creation or use, through the layers created, as well as ongoing modifications in production and operations. You have to monitor the images in public image repositories, private image repositories, and in flight through DevOps or other lifecycle processes.

While there are many schools of thought as to how best to test for vulnerabilities within containers, what seems to be emerging as a best practice is to test images with external security testing tools. These tools can work down to the microservices level within the container images and walk through the layers as well.

In essence, you must assume that the images have vulnerabilities, either at their base or in the derived layers. You can scan each layer and image for vulnerabilities and fix those you find. This process is best described as systemic, and it's required at each stage in the DevOps or lifecycle processes, including development, testing, staging, and operations.
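
As a rough illustration of that systemic scanning, the sketch below invokes an external scanner CLI against every image used in the pipeline. It assumes a scanner such as Trivy is installed and that its `image --format json` invocation is available; adjust the command and JSON handling for the scanner and version you actually use. Image names are illustrative.

```python
# Minimal sketch: scan every image in the pipeline with an external scanner CLI.
import json
import subprocess

IMAGES_IN_PIPELINE = [
    "python:3.12-slim",                  # public base image
    "mycompany/orders-api:1.4.2",        # hypothetical derived application image
    "mycompany/orders-api-tests:1.4.2",  # hypothetical test image
]

def scan(image: str) -> int:
    """Run the scanner against one image and return the number of findings."""
    result = subprocess.run(
        ["trivy", "image", "--format", "json", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return sum(len(r.get("Vulnerabilities") or []) for r in report.get("Results", []))

# Run the same check at development, testing, staging, and operations stages;
# fail the stage if any image carries known vulnerabilities.
for image in IMAGES_IN_PIPELINE:
    print(f"{image}: {scan(image)} known vulnerabilities")
```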

Keep in mind that compliance and governance should be checked at the same time that you do the security scans. Compliance is about rules that you set and customize for your own organization, whereas security vulnerabilities tend to follow patterns that are common across container deployments.

Container ops

Most providers of ops-related toolsets, including those for performance, security, and governance monitoring; for taking automated corrective action; and for handling failover and recovery, now support containers. Chances are that you can use your monitoring and ops tools of choice to operate containers in production.

But there are core differences between operating containers, whether on premises or in the cloud, and monitoring traditional application workloads in production. For example:

  • Containers interact in production in complex and distributed ways, and you need the ability to monitor each container instance, whether or not it is replicated, as well as any external resources that it accesses (see the monitoring sketch after this list).
  • Because you’re dealing with container images based upon container images, you need to understand the coupling there as well.
  • Security monitoring needs to be ongoing; you can’t scan-and-go as with traditional applications.
  • You need to understand how to monitor microservices as well, which typically takes a more fine-grained approach.
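
As a minimal sketch of instance-level monitoring, the snippet below polls basic runtime stats for every running container via the Docker SDK for Python. In practice you would feed these numbers into whatever monitoring stack you already operate.

```python
# Minimal sketch: take a one-shot stats snapshot of every running container instance.
import docker

client = docker.from_env()

for container in client.containers.list():      # all running instances
    stats = container.stats(stream=False)        # one-shot stats snapshot
    mem = stats["memory_stats"].get("usage", 0)
    cpu_total = stats["cpu_stats"]["cpu_usage"]["total_usage"]
    print(f"{container.name} ({container.image.tags}): "
          f"memory={mem} bytes, cumulative CPU={cpu_total} ns")
```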

You must consider resiliency, which encompasses business continuity and disaster recovery ops. This typically involves creation of an active/active approach, where a replica of the container-based system in production is standing by and ready to go. Given the portability advantage of containers, using a different public cloud provider is a possibility as well.

In leveraging best practices, processing, and tooling around DevOps, you need to consider patches and upgrades to the production environment as they flow from the developers to ops. You will need tools that provide continuous integration, continuous testing, and continuous deployment, while remembering that tools are still emerging for the special needs of containers. 

The power of orchestration

When considering container ops, you need to explore container orchestration. Here, it is important to note that a container orchestration engine such as Kubernetes can pool multiple containers, which may reside on separate hosts, and manage them as a single logical entity. This helps you coordinate the containers as a single application solution, which makes them much easier to manage.

You should consider container orchestration tools as a path to define the deployment of containers, as well as a means for their ongoing management. That will cover ops issues such as availability, performance, and scaling, as well as networking and even production updates to the containers. Moreover, these tools can handle the application of rules and policies for host placement, provisioning, configuration, and scheduling.
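
For a flavor of what that looks like in practice, here is a minimal sketch using the official Kubernetes Python client to list what the orchestrator is managing and to scale one deployment. The deployment name and namespace are assumptions for illustration; Kubernetes itself handles placement, scheduling, and rolling the change out.

```python
# Minimal sketch: treat replicated containers as one logical entity (a Deployment)
# and scale it through the Kubernetes API.
from kubernetes import client, config

config.load_kube_config()            # reads your local kubeconfig
apps = client.AppsV1Api()

# List what the orchestrator is currently managing in the namespace.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.spec.replicas)

# Scale a hypothetical deployment to five replicas.
apps.patch_namespaced_deployment_scale(
    name="orders-api",               # hypothetical deployment
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```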

Unexplored terrain

When you’re on the bleeding edge, count on bleeding. You may have a vast amount of experience with DevOps and traditional lifecycles, but container lifecycle management means rethinking most of what you did in the past. 

It’s time to take some risks and get ahead of these issues. You’ll have to solve these problems sooner or later if containers are a key enabling technology within your enterprise. 
