

Optimize DevOps with self-contained containers

Erez Yaary, Fellow and Cloud Chief Technology Officer, Micro Focus
 

Keeping a software application up and running is a challenging task. The operations engineers responsible for maintaining back-end infrastructure have a difficult job, and it's even more challenging when each change the developers want to roll out requires changes to the infrastructure and its configuration. Recently, though, many organizations have begun to adopt container technology as a consistent, pre-configured delivery mechanism that makes deployment simpler and less error-prone.

Containers have emerged as a new way to deliver application and service workloads (that is, executable units of code that perform work, in this context packaged as applications) across the continuous integration/continuous delivery (CI/CD) development pipeline and well into production environments. This new paradigm changes how applications are decomposed into microservices with distinct functionality, effectively enabling the network to bind those capabilities together into a complete, robust software service.

The Open Container Initiative is tasked with creating an industry-standard container format, which will eventually be ubiquitous. The format will allow for complete workload portability, freeing developers to focus less on process and tools selection, since all types of compliant container technologies will support the same container capabilities, such as orchestration and scaling.

But as the industry’s attention has turned to the container format definition, an important aspect has been neglected: the workload’s test and monitoring instructions. Today there are still gaps when it comes to streamlining test and monitoring instructions between dev and ops. Fortunately, there are practical methods you can use to optimize containers for a robust DevOps pipeline.

Multiple management aspects

During workload development and operation, many aspects must be managed, such as product quality, security, and system health. These aspects are handled by tools that keep their configuration in proprietary, private file formats and repositories, often separate from the workload code and binaries.

Once a container reaches a given stage in the pipeline, the testing or monitoring instructions you use should match the workload code. However, the separation between the workload and its testing or monitoring instructions often leads to errors and cumbersome integration processes, such as when specific automated tests must be matched to a specific workload version.

As the workload hits production environments, it is handed off to the operations team for monitoring and management. This is often the first time the ops team learns of the workload. Thus, much of the valuable information and knowledge that resides in the developers' minds is lost along the way, leading to an inefficient delivery process and a less-than-optimal workload monitoring solution.

The microservice complexity issue

As the pace of innovation accelerates and more and more workloads are developed in parallel using a microservices architecture, IT professionals are packaging them in containers and quickly pushing them down the development pipeline and into production systems. This increased pace places a heavy burden on the entire pipeline and the many tools that are part of it.

Bloated, traditional application architectures are being decomposed into many microservices, which amplifies these broken-link and alignment issues, increasing the problem by an order of magnitude.

The container, however, can hold more than the workload's executable code: it can also carry any additional instructions and metadata required along the DevOps pipeline, such as testing or operations instructions.

Put your workload alongside its test and monitoring instructions

Products that test the user interface (UI) or API functionality have to store testing instructions in their internal data repositories, which are often governed by configuration systems that differ from the code they test.

This has the potential to cause frequent misalignment between the source code, as it evolves into executable binaries, and the testing instructions that support it. Current testing products are not based on a descriptive testing model that can be efficiently managed as code, much less stored and used within the container.

Embed tests as code in the container

The file format for container images is based on a file system that stores workload binaries and any required dependencies and configuration information. To leverage that, you can create a new folder within the container image filesystem to host the testing instructions. The developer who created the feature/capability should initially create these instructions.
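
Here is a minimal sketch of what that could look like at build time. It assumes the Docker SDK for Python, an /opt/tests folder convention, and a hypothetical "payments-service" workload; none of these is part of any container standard, they are simply one way to illustrate the idea:

    # Build an image whose filesystem carries the test instructions alongside
    # the workload. The /opt/app and /opt/tests locations are illustrative.
    from pathlib import Path
    import docker  # Docker SDK for Python (pip install docker)

    dockerfile = """\
    FROM python:3.12-slim
    # The workload binaries and dependencies
    COPY app/ /opt/app/
    # Developer-authored test instructions travel inside the same image
    COPY tests/ /opt/tests/
    CMD ["python", "/opt/app/main.py"]
    """
    Path("Dockerfile.with-tests").write_text(dockerfile)

    client = docker.from_env()
    image, _ = client.images.build(
        path=".",                          # build context containing app/ and tests/
        dockerfile="Dockerfile.with-tests",
        tag="payments-service:1.4.2",      # hypothetical workload name and version
    )
    print("built", image.tags)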

Once a CI system deploys the set of containers on a target staging or production environment, it hands the deployed container locations to the testing tool and requests that testing commence. The testing tool inspects each container for embedded testing instructions, reads the instruction set, and then executes them on the target container under test.
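
A testing tool built around this convention could be as simple as the following sketch, which again assumes the Docker SDK for Python, the /opt/tests folder from the build example above, and pytest as the test runner available inside the container:

    # Inspect a deployed container for embedded tests and run them in place.
    import docker

    client = docker.from_env()

    def run_embedded_tests(container_name: str) -> bool:
        container = client.containers.get(container_name)
        # Does this container carry test instructions at the agreed-upon location?
        probe = container.exec_run(["sh", "-c", "test -d /opt/tests"])
        if probe.exit_code != 0:
            print(f"{container_name}: no embedded tests found, skipping")
            return True
        # Execute the instruction set on the container under test.
        result = container.exec_run(["python", "-m", "pytest", "/opt/tests", "-q"])
        print(result.output.decode())
        return result.exit_code == 0

    if __name__ == "__main__":
        print("PASSED" if run_embedded_tests("payments-service") else "FAILED")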

Once a test cycle ends, the testing tool can store test results back inside the container, making sure the container stores any required status results to denote code quality for manual or automated checks farther down the pipeline.
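
Continuing the same sketch, one way to do that is to write a small results file back into the running container; the results.json name and its fields are assumptions of mine, not a standard:

    # Persist a test verdict inside the container so later pipeline stages can read it.
    import io
    import json
    import tarfile
    import time
    import docker

    client = docker.from_env()

    def store_test_results(container_name: str, passed: bool) -> None:
        container = client.containers.get(container_name)
        report = json.dumps({"phase": "test", "passed": passed,
                             "timestamp": time.time()}).encode()
        # put_archive expects a tar stream, so wrap the JSON report in one.
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w") as tar:
            info = tarfile.TarInfo(name="results.json")
            info.size = len(report)
            tar.addfile(info, io.BytesIO(report))
        container.put_archive("/opt/tests", buf.getvalue())

    store_test_results("payments-service", passed=True)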

Monitoring operations as code

Embedding monitoring requirements and specifications alongside the workload within the container also keeps development and operations in sync. As the workload's features and functions evolve, the monitoring instructions (which the developer should specify in much the same way as the testing instructions I described above) are maintained in sync with it. This codifies the operation of the new functionality as it progresses down the development pipeline.

As you deploy the container in the target environment, the installed operations tools are notified of the deployment and inspect the container's contents for operations instructions. These tools then act upon those instructions, automatically activating the relevant operational activities, such as monitoring, security, and hardening of the container and underlying operating system. This hardening ensures that the operating system conforms to policies for how passwords are treated, which ports may be open or must be closed, and so on. The idea is to build the relevant hardening instructions into the container, so that when a host is asked to run the container, it (or a supporting service) makes sure the operating system is hardened for that specific workload.
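
As a rough illustration, an operations hook triggered on deployment might look like the sketch below. The /opt/ops/instructions.json layout, its health_checks and blocked_ports fields, and the print statements standing in for real monitoring and firewall integrations are all assumptions:

    # Read embedded operations instructions from a newly deployed container and
    # hand them to the monitoring and hardening tooling installed on the host.
    import json
    import docker

    client = docker.from_env()

    def apply_ops_instructions(container_name: str) -> None:
        container = client.containers.get(container_name)
        result = container.exec_run(["cat", "/opt/ops/instructions.json"])
        if result.exit_code != 0:
            print(f"{container_name}: no embedded ops instructions found")
            return
        spec = json.loads(result.output)
        # Register the workload's endpoints with whatever monitoring system is present.
        for check in spec.get("health_checks", []):
            print(f"monitor {check['path']} every {check['interval_seconds']}s")
        # Pass hardening requirements to the host-side hardening service.
        for port in spec.get("blocked_ports", []):
            print(f"ensure the host firewall keeps port {port} closed for this workload")

    apply_ops_instructions("payments-service")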

Having operations specifications embedded within the container helps break down the traditional boundary of knowledge and expertise between development and operations. All the information required to monitor the workload in production is embedded within the container, and once it hits production, you only need the production systems, such as monitoring tools, to inspect the container and automatically activate whatever monitoring instructions it contains.

This also eliminates the go-live latency so often inherent in current solutions. Because everything required to monitor the workload is embedded within the container, the monitoring systems can be invoked automatically whenever a new container is deployed, with no human intervention, enabling seamless operations. This in effect creates a lights-out operations environment.

Modernize your CI/CD pipeline

In a container-optimized pipeline, the container itself could be harnessed to serve more of a purpose than just holding the workload.

By embedding pipeline phase instructions for testing, monitoring, and other tasks, as well as storing phase results within the container, you are effectively embedding the pipeline state machine within the container itself. At any given pipeline checkpoint, you can inspect the container to validate which phases it has undergone and what the results were, triggering additional CI and delivery pipeline phases based on what passed or failed. This simplifies application lifecycle management (ALM) systems, since they need only bind themselves to the pipeline artifact, the container, in order to report status back to stakeholders.
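
A pipeline checkpoint can then be reduced to a small inspection step. This sketch reuses the results.json file and /opt/tests convention from the testing examples above, and the phase names are placeholders:

    # Decide the next pipeline phase from the state recorded inside the container.
    import json
    import docker

    client = docker.from_env()

    def next_phase(container_name: str) -> str:
        container = client.containers.get(container_name)
        result = container.exec_run(["cat", "/opt/tests/results.json"])
        if result.exit_code != 0:
            return "test"  # no test phase has been recorded yet
        report = json.loads(result.output)
        return "promote" if report.get("passed") else "notify-developers"

    print(next_phase("payments-service"))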

Development pipeline steps could also be encoded as Docker instructions, such as "Test" or "Monitor," extending the Docker command-line interface (CLI). Imagine the power of calling a Docker "Test" command to enable an automated self-testing container, or composition of containers, or calling a Docker "Monitor" command to commence monitoring activities.
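
No such Docker commands exist today, but a standalone wrapper hints at what those verbs might feel like. The verbs and the embedded-instruction locations below are the same illustrative assumptions used in the earlier sketches:

    # A hypothetical "test"/"monitor" verb for containers, implemented as a wrapper.
    import sys
    import docker

    def main() -> int:
        if len(sys.argv) != 3 or sys.argv[1] not in ("test", "monitor"):
            print("usage: container_verbs.py {test|monitor} <container>")
            return 2
        verb, name = sys.argv[1], sys.argv[2]
        container = docker.from_env().containers.get(name)
        cmd = {
            "test": ["python", "-m", "pytest", "/opt/tests", "-q"],
            "monitor": ["cat", "/opt/ops/instructions.json"],
        }[verb]
        result = container.exec_run(cmd)
        print(result.output.decode())
        return result.exit_code

    if __name__ == "__main__":
        sys.exit(main())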

Encoding pipeline steps this way would yield huge optimization benefits, because it would simplify container-based pipelines while consolidating pipeline instructions, results, and state machines within the container itself.

Interim steps

While containers are an easier and less error-prone way to deliver applications, the Open Container Initiative must not let test and monitoring instructions get left behind on the journey from dev to ops. Until container testing and monitoring are standardized, though, the techniques discussed above will help you on your way to creating a highly efficient DevOps pipeline.

