

How containers can lift your DevOps game

Terry Brennan, Managing Director, Orasi Software

The shift toward a DevOps approach is a challenging task that can be short-circuited by issues as varied as static environments, slow processes, inconsistent deployments, and cultural hurdles. DevOps requires changes in many areas, and as organizations demand that it deliver faster, more predictable, agile releases, many have turned to containers, which can bring a much-needed power boost to a DevOps transformation.

Containers can dramatically improve software delivery speed, platform independence, resource utilization, and process reliability, and they can provide the flexibility and pipeline velocity required to meet new business expectations. But while they can be a great path forward for organizations in today’s accelerating world, containers require a shift in thinking—and a commitment to learning.

Here's what your team needs to know to boost your DevOps transformation through containerization.

The container revolution

Containers offer many benefits. For example, they are immutable, extremely fast, and very efficient. But they also change the way we work.

In the past, a physical server had to be procured and set up with all the necessary components (hardware and software) to run an application. This was a manual, slow process and an inefficient use of resources.

Virtualization allowed us to abstract the tie to a physical server. A physical server was still underneath, but a hypervisor sat on top to manage the virtual servers, each running its own host OS. This approach enabled faster provisioning and improved resource utilization.

Containers take virtualization a step further; they are an abstraction at the app layer, packaging code and dependencies together. There is still physical hardware underneath with a host OS, but a container runtime sits on top and manages the containers sharing the OS kernel, each running as an isolated process. This allows the dynamic characteristics needed in DevOps and an even more efficient utilization of resources.

Image source: Orasi.
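One quick way to see how this differs from virtualization is to compare kernels. In this minimal sketch (assuming Docker is installed on a Linux host), the kernel reported from inside a throwaway container matches the host's, because a container is an isolated process sharing the host OS kernel rather than a separate virtual machine booting its own guest OS:

    # Kernel version reported by the host
    uname -r

    # Kernel version reported from inside a disposable Alpine container
    docker run --rm alpine uname -r

    # The two values match: the container shares the host kernel
    # instead of running a guest OS of its own.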

The DevOps power boost

Containers by themselves are intriguing, with the promise of benefits such as:

  • Improved development pipeline
  • Improved production deployment
  • Improved resource utilization
  • Greater platform independence
  • Improved scalability—up and down
  • Greater modularity and security

When containers are teamed with DevOps practices, they provide power boosts that allow DevOps to move toward the ideal of continuous flow, including:

  • Ephemeral environments
  • Build once, deploy many
  • Immutable environments
  • As code

Ephemeral

Dynamic, on-demand ephemeral environments are created on the fly on an “as-code” or artifact basis, leveraged for a specific activity, and recycled upon completion.

Static environments tend to be hoarded while teams wait for a chance to use them. The CI/CD pipeline, by contrast, instantiates and destroys environments as needed for activities such as continuous testing. These instances can be created at any time and can run in parallel to scale. This process needs to be repeatable, automated, consistent, and fast.

Image source: Orasi


Power Boost One: Because containers are lightweight, deploy very quickly, and require fewer resources, they are an ideal solution for ephemeral environments. Whether from a Dockerfile or image, containers can be spun up, used to execute an activity (such as testing), and recycled once the activity is completed. You can do this again each time you need a new environment.
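As a minimal sketch of what that looks like in a pipeline (the image name, build variable, and test script here are hypothetical, assuming Docker and a test image already published to your registry), a CI job can spin up an ephemeral environment, run its activity, and recycle it automatically:

    # Start a disposable container for a single test run;
    # --rm removes it as soon as the run completes.
    docker run --rm \
      --name test-run-$BUILD_ID \
      registry.example.com/myapp-tests:latest \
      ./run-tests.sh

    # Nothing is left to clean up: the environment existed only for
    # the duration of the activity and was recycled on exit.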

Build once, deploy many

Ideally in DevOps, the continuous integration (CI) process creates a known-good artifact from code and stores it in an artifact repository (for containers, a container registry), which manages binary components and supports deploying the same artifact to many environments. This is the “build once, deploy many” concept.

Image source: Orasi


For DevOps, the goal is to improve speed and reduce failures. If we deploy the same way in every environment leveraging the same artifact (build once, deploy many), the risk of a deployment issue in production is greatly reduced. We are also able to build efficient automation that can be reused at each level of environment (dev, QA, production, etc.) and improve deployment speed.

Power Boost Two: Containers support the “build once, deploy many” concept through images. An image is an immutable artifact that is leveraged by a container engine to create a new container. Images can be created and stored in an artifact repository and then used to easily and quickly create many container instances that can be deployed to different environments. With containers, this process is very fast and efficient.
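A minimal sketch of that flow (the registry name and tag are hypothetical): CI builds and pushes the image once, and every environment then pulls and runs that same immutable artifact, with only environment-specific configuration changing:

    # CI stage: build the artifact once and publish it to the registry
    docker build -t registry.example.com/myapp:1.4.2 .
    docker push registry.example.com/myapp:1.4.2

    # Deploy stages: each environment runs the SAME image;
    # only the configuration passed to it differs.
    docker run -d --env-file dev.env  registry.example.com/myapp:1.4.2
    docker run -d --env-file qa.env   registry.example.com/myapp:1.4.2
    docker run -d --env-file prod.env registry.example.com/myapp:1.4.2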

Immutable

With an immutable infrastructure approach to managing the deployment of IT resources, components are replaced rather than changed—an environment is redeployed each time a change is needed instead of making changes to a long-running instance.

For DevOps, immutable environments help enforce consistency and eliminate the worry that changes haven't been applied to all environments. Rather than making changes to an environment, a new one is created fresh from version-controlled code or a pre-built image that is consistent each time. An immutable approach also helps by:

  • Forcing you to deploy many times, thus testing the deployment process over and over prior to production deployment
  • Supporting quick deployments and easier rollback that can reduce downtime

Power Boost Three: Containers naturally support the concept of immutable infrastructure. Images are immutable artifacts that a container engine uses to create new container instances. Because every instance is created from the same image, each container is identical, can be deployed quickly, and can be recovered more easily. The image also allows the container to be deployed in the same manner regardless of the level of environment (dev, QA, production, etc.).
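As a sketch of replace-rather-than-change (the image tags and container name are hypothetical), a change ships as a new image, and the running container is replaced instead of being patched in place:

    # A change is delivered as a new immutable image...
    docker pull registry.example.com/myapp:1.4.3

    # ...and the running instance is replaced, not modified.
    docker stop myapp && docker rm myapp
    docker run -d --name myapp registry.example.com/myapp:1.4.3

    # Rolling back is simply the same deployment with the previous tag.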

As code

The concept that everything should be represented “as code” and version-controlled is already applied in areas such as infrastructure and integration tools, and it is foundational to driving the evolution of DevOps. The same approach is now being extended to automation, configurations, databases, and data.

Power Boost Four: Containers can and should leverage the “as code” concept. In the container ecosystem, there are different levels to which “as code” can be applied. At a container level, a Dockerfile can be used to define the container in code.

A Dockerfile is a text file that defines a Docker image. The Dockerfile contains the command line calls that would be used to create an image. The Dockerfile can be stored in a source control management (SCM) system and version-controlled the same as other code types. This Dockerfile can be used to automatically create a Docker image, which can in turn be used to create the container instances.
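A minimal example of such a Dockerfile (the base image, port, and application files here are hypothetical) that can live in the same SCM repository as the application it packages:

    # Start from a pinned, known-good base image
    FROM node:18-alpine
    WORKDIR /app

    # Install dependencies first so this layer is cached between builds
    COPY package*.json ./
    RUN npm ci --omit=dev

    # Add the application code and define how the container starts
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]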

Another area where the container ecosystem supports “as code” is Kubernetes. Kubernetes configurations are defined in YAML, a human-readable data-serialization language that is also a superset of JSON, so cluster and workload definitions can be stored and version-controlled as plain text. Using YAML for Kubernetes supports better automation, more consistent environments, and greater velocity.
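For example, a minimal Kubernetes Deployment manifest (the names, image, and port are hypothetical) can be stored in SCM alongside the Dockerfile and applied to the cluster with kubectl apply -f:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3                # desired number of identical container instances
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              # the immutable image built once in CI
              image: registry.example.com/myapp:1.4.2
              ports:
                - containerPort: 3000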

When used in combination, containers and clusters can be defined “as code” and thereby support a key DevOps concept that allows for better automation, version control, repeatable outcomes, and a faster pipeline.

Is your team ready for containers?

Containers can provide a boost for DevOps, but moving to containers requires a different way of work and a willingness to learn. If you’re trying to figure out how containers fit into your DevOps world, there are many aspects to consider.

Image source: Orasi


A continuous containers approach helps clear the fog as organizations try to meet demands for faster cycles and move to a container footing, enabling them to deliver with speed and agility throughout the lifecycle in an integrated, automated fashion.

For any organization seeking DevOps maturity, containers may help them get there. Yet configuring them correctly, ensuring they are secure, and deploying them successfully are complex operations. Consider: 

  • Deploying and managing containers requires a company-specific strategy and plan.
  • It involves changes in processes, people, and technology that are often entrenched.
  • Support activities, such as infrastructure automation, are also needed.
  • Firms that enjoy lasting success with containers also mature their DevOps practices.

Nevertheless, the benefits in speed, agility, efficiency, and resource reduction make containers absolutely worth the effort and expense.
