How to manage full-spectrum DevOps that spans multiple clouds

James Kobielus, Lead Analyst, SiliconANGLE Wikibon

DevOps is evolving to ensure seamless release pipelines for applications and infrastructure across cloud computing environments.

This "full-spectrum" DevOps shift goes up the cloud stack, addressing continuous integration and continuous deployment (CI/CD) for infrastructure-as-a-service, platform-as-a-service, and software-as-a-service components.

Here's how DevOps professionals can implement and manage full-spectrum CI/CD workflows that span the breadth and depth of their multi-cloud deployments.

Assess the multi-cloud DevOps challenge

It can be difficult for enterprise application development or IT operations professionals to get their arms around multi-cloud DevOps.

That's because a single multi-cloud deployment might depend on hundreds of remote application and infrastructure components. These, in turn, might use myriad interfaces and orchestrations, and execute in dozens of clouds with varying degrees of federated interoperability.

Multi-clouds have already arrived in enterprises. According to a recent survey of 1,106 business and technology executives, published by the IBM Institute for Business Value, some 85% of companies are already operating multiple clouds. Nearly all others said they will be using multi-cloud within three years.

However, only 39% of respondents have implemented DevOps processes and tool chains across these deployments.

To ensure the CI/CD of applications and infrastructure across increasingly heterogeneous clouds, enterprises will need DevOps tools that integrate with those platforms. The tools should allow for flexible movement, monitoring, scaling, and transparent management of infrastructure and application components, data, workflows, metadata, and business logic.

The need to manage hybrid clouds

Hybrid public/private clouds may remain enterprises' principal multi-cloud deployment pattern for some time. This is because some workloads, data, apps, and other assets will likely need to remain on premises, such as those that are latency-, security-, privacy-, and compliance-sensitive.

Consequently, running public cloud infrastructure and services on premises may be the preferred approach going forward.

Availability of hybrid-cloud DevOps tooling should be a key criterion when evaluating hybrid cloud solutions. The latest batch of tools includes the recently announced AWS Outposts, which should be available in the second half of 2019.

With Outposts, AWS will provide fully managed and configurable compute and storage racks for deployment in customers' traditional data centers, running such services as Amazon Elastic Compute Cloud and Amazon Elastic Block Store. It will allow AWS users to run compute and storage on-premises, while seamlessly connecting to the rest of AWS’s broad array of services in the cloud.

Vendors have indicated that Outposts will provide users with a consistent infrastructure-as-a-service experience, whether on premises or in AWS data centers. Customers who want to use the same VMware management console they've been using to run their hybrid-cloud infrastructure and applications will be able to run VMware Cloud on AWS locally on AWS Outposts. 

Customers who prefer to use AWS's management console, but on premises with Outposts, will be able to do so. However, as of this writing, there is no associated DevOps workflow tooling to manage CI/CD of infrastructure and application components across AWS and VMware clouds that span a common Outposts deployment. Nevertheless, there are third-party multi-cloud DevOps tools that integrate with AWS and are worth exploring.

Employ infrastructure-as-code tools 

Infrastructure as code is an emerging best practice for managing the functional platform components of cloud operating environments in the same way one manages application components such as code builds, machine images, containers, and serverless functions.

As an organizing framework for DevOps in cloud management, this approach eliminates the need for IT professionals to touch physical IT platforms, access cloud providers' management consoles, log into infrastructure components, make manual configuration changes, or use one-off scripts to make adjustments.

As an alternative to traditional IT change-and-configuration management, infrastructure as code involves writing templates—a.k.a. "code"—that declaratively describe the desired state of a new infrastructure component. This component can be a server instance, virtual machine, container, orchestrated cluster, or serverless functional app.

Within IT management tooling that leverages underlying DevOps source control, the template drives the creation of a graph of what the deployed cloud infrastructure should look like. The tooling then detects discrepancies between the declared and actual states and remediates them by redeploying the code, so that the deployed infrastructure converges on the desired state.

In this way, infrastructure as code supports automated, repeatable tasks in the cloud DevOps pipeline. These tasks include provisioning, configuration, testing, and deployment of virtual machines, containers, or serverless functions at scale.
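The declare-compare-converge cycle described above can be sketched in a few lines of Python. Everything here is illustrative: the resource names are hypothetical, and the dictionaries stand in for the cloud provider APIs that a real tool such as Terraform or CloudFormation would query.

```python
# Minimal sketch of the declarative reconciliation loop behind
# infrastructure-as-code tools. All names are illustrative; real
# tools talk to cloud provider APIs instead of dictionaries.

def reconcile(desired: dict, deployed: dict) -> dict:
    """Plan the actions that converge deployed state onto declared state."""
    actions = {"create": [], "replace": [], "delete": []}
    for name, spec in desired.items():
        if name not in deployed:
            actions["create"].append(name)
        elif deployed[name] != spec:
            actions["replace"].append(name)  # immutable style: replace, don't patch
    for name in deployed:
        if name not in desired:
            actions["delete"].append(name)
    return actions

# A "template" declaring the desired state of two components:
desired = {
    "web": {"image": "ami-base-v2", "size": "m5.large"},
    "db":  {"image": "ami-base-v2", "size": "r5.xlarge"},
}
# What is actually running right now:
deployed = {
    "web": {"image": "ami-base-v1", "size": "m5.large"},  # drifted config
    "old-cache": {"image": "ami-base-v1", "size": "t3.small"},
}

print(reconcile(desired, deployed))
# {'create': ['db'], 'replace': ['web'], 'delete': ['old-cache']}
```

Running the plan repeatedly is idempotent: once the deployed state matches the template, every action list comes back empty, which is exactly the convergence property the tooling relies on.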

Typically, the underlying cloud infrastructure being managed as code is immutable, meaning that component instances are never modified or updated after they are deployed. Instead, when a functional infrastructure component needs to be fixed or updated, a replacement is built from the same base machine image with the necessary modifications and provisioned, while the old component is deprovisioned in the same infrastructure-as-code DevOps workflow.

This practice eliminates the need for patching and in-place server upgrades, and it ensures full consistency across all deployed component footprints. It also greatly reduces the infrastructure's potential attack surface, while ensuring that there are no one-offs or drift in deployed component configurations.
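That replace-rather-than-patch cycle can be sketched as follows. The image names and helper functions are hypothetical stand-ins for a provider's image-baking and provisioning APIs; a real workflow would also shift traffic and run health checks before retiring the predecessor.

```python
# Sketch of immutable-infrastructure replacement: a component is never
# patched in place; a successor is baked from the same base image and
# the old instance is retired. All names and helpers are illustrative.

BASE_IMAGE = "ami-base-v1"

def build_image(base: str, changes: list[str]) -> str:
    """Bake a new image from the shared base plus the required changes."""
    return base + "+" + "+".join(changes)

def replace_component(fleet: dict, name: str, changes: list[str]) -> dict:
    """Provision a replacement instance, then deprovision the old one."""
    new_fleet = dict(fleet)
    # Provision the successor from the same base image:
    new_fleet[name] = build_image(BASE_IMAGE, changes)
    # In a real workflow, traffic shifting and health checks happen here
    # before the predecessor is deprovisioned.
    return new_fleet

fleet = {"web-1": BASE_IMAGE}
fleet = replace_component(fleet, "web-1", ["openssl-3.0.13"])
print(fleet)
# {'web-1': 'ami-base-v1+openssl-3.0.13'}
```

Because every instance descends from the same base image plus declared changes, there is nothing to patch and nothing to drift.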

If you've already begun to do DevOps in a hybrid or other multi-cloud deployment, perhaps through two or more infrastructure-as-code templates, you may have to consolidate these tools as you expand further into multi-cloud.

The infrastructure-as-code tools that you currently use may be provided by one or more public cloud vendors (e.g., AWS CloudFormation, Azure Resource Manager, Google Cloud Deployment Manager), by your established DevOps vendors (e.g., Chef, Puppet, Ansible), or by any of the growing number of third-party DevOps vendors whose solutions address diverse public, private, hybrid, and multi-cloud deployments (e.g., Terraform, SaltStack, Juju, Docker, Vagrant, Pallet, CFEngine, NixOS).

Use DevOps for hybridized virtualization, containerization, and serverless 

Hybridization of public and private cloud infrastructure management workflows isn't the only challenge facing enterprise DevOps professionals. Infrastructure-as-code tools should be implemented in the same DevOps workflows that apply to virtual machine images, containerized microservices, and serverless functions.

Increasingly, multi-cloud DevOps must also span hybridization of application patterns at any or all of the following levels:

Hybrid virtualization

At the infrastructure-as-a-service level, virtual machine images are the coin of the realm. Enterprises may have already deployed a wide range of hypervisor solutions associated with various on-premises and public-cloud computing environments, including those from VMware (vSphere), Amazon Web Services (AMI, KVM, and Xen), Microsoft (Hyper-V), Google (KVM), IBM (PowerVM), Oracle (VM Server), and Red Hat (KVM). 

This creates a never-ending flow of machine images that must be managed within the end-to-end multi-cloud DevOps workflow. Vendors of each hypervisor usually provide tools for managing machine images native to their own environments, with varying degrees of support for images sourced from other virtualization platforms.

Hybrid containerization

At the platform-as-a-service level, a growing number of enterprise multi-clouds involve hybrid container platforms. Typically, this involves federating container orchestration across Kubernetes, Docker Swarm, Mesosphere DC/OS, Amazon Elastic Container Service, HashiCorp Nomad, or other orchestration backbones.

Within the vast Kubernetes ecosystem alone, there are several dozen certified vendor distributions and hosted platforms on the market. Although they all implement the core open source code, they are not necessarily interoperable out of the box.

Typically, container orchestration technologies run on clusters within distinct hardware and application platforms in the multi-cloud. Each containerization environment usually includes its own native DevOps capabilities, but with varying support for containers and orchestrations native to other environments.

From an enterprise standpoint, implementing unified DevOps workflows that span these container hybridizations may require multi-cloud DevOps tools.
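One common pattern behind such unified workflows is to fan a single deployment manifest out to every cluster in the federation. A minimal sketch, assuming one `kubectl` context per cluster (the context and manifest names are hypothetical):

```python
# Sketch: fan one deployment manifest out across federated clusters by
# building a `kubectl apply` command per cluster context. The context
# names below are hypothetical examples.
import shlex

def rollout_commands(manifest: str, contexts: list[str]) -> list[str]:
    """Build one `kubectl apply` command per cluster context."""
    return [
        f"kubectl --context {shlex.quote(ctx)} apply -f {shlex.quote(manifest)}"
        for ctx in contexts
    ]

contexts = ["aws-prod", "gke-prod", "on-prem"]
for cmd in rollout_commands("app.yaml", contexts):
    print(cmd)
# kubectl --context aws-prod apply -f app.yaml
# kubectl --context gke-prod apply -f app.yaml
# kubectl --context on-prem apply -f app.yaml
```

In practice the fan-out is only the easy half; the multi-cloud DevOps tools mentioned above earn their keep by reconciling the per-cluster differences (ingress, storage classes, registries) that a naive loop like this ignores.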

Hybrid serverless

At the function-as-a-service level, one might build multi-cloud applications that call the APIs of two or more public serverless offerings, such as AWS Lambda, Azure Functions, Google Cloud Functions, or IBM Cloud Functions.

Likewise, it's possible to have more complex hybrids that encompass both public and on-premises-based serverless environments, such as Oracle Fn and Red Hat OpenShift Cloud Functions. One can build hybridized serverless apps on top of Kubernetes by leveraging the Virtual Kubelet specification. This abstracts the core Kubernetes kubelet function so it can connect orchestrated, containerized microservices to other serverless APIs.

Alternatively, developers who want to drive DevOps across heterogeneous serverless clouds can use infrastructure-as-code tools such as HashiCorp Terraform or Gloo. For managing secure DevOps workflows across heterogeneous serverless clouds, you might use a tool such as Protego, which supports AWS, Google Cloud Platform, and Azure, and functions written in Node.js, Python, and Java runtimes.
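A hybrid serverless application typically hides the per-provider invocation APIs behind one common interface. Here is a minimal sketch of that pattern; the backends are stubs, where a real version would call, for example, AWS's SDK for Lambda or an Azure Functions HTTP endpoint, and the function names are invented for illustration.

```python
# Sketch: route function invocations to heterogeneous serverless clouds
# through a single interface. The backends below are stubs standing in
# for real provider SDK calls; all names are illustrative.
from typing import Callable

class MultiCloudInvoker:
    def __init__(self) -> None:
        self._backends: dict[str, Callable[[str, dict], dict]] = {}

    def register(self, cloud: str, backend: Callable[[str, dict], dict]) -> None:
        """Plug in one invocation backend per serverless cloud."""
        self._backends[cloud] = backend

    def invoke(self, cloud: str, function: str, payload: dict) -> dict:
        if cloud not in self._backends:
            raise KeyError(f"no backend registered for {cloud!r}")
        return self._backends[cloud](function, payload)

# Stub backends standing in for provider SDK calls:
def fake_lambda(function: str, payload: dict) -> dict:
    return {"provider": "aws", "function": function, "echo": payload}

def fake_azure(function: str, payload: dict) -> dict:
    return {"provider": "azure", "function": function, "echo": payload}

invoker = MultiCloudInvoker()
invoker.register("aws", fake_lambda)
invoker.register("azure", fake_azure)

print(invoker.invoke("aws", "resize-image", {"id": 42})["provider"])
# aws
```

Keeping the provider-specific code behind a registry like this is what lets a CI/CD pipeline test the application logic locally and swap serverless targets per environment.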

Start now—don't wait

Managing complex multi-cloud environments requires full-spectrum DevOps to drive CI/CD across both public and private clouds as well as across hybrid virtualization, containerization, and serverless fabrics. Enterprise application development and operations professionals who wish to implement continuously automated software development, testing, and release pipelines across their multi-clouds should heed the following advice:

  • Assess the extent to which your enterprise requires end-to-end DevOps that spans two or more public or private clouds, as well as diverse virtualization, containerization, and serverless application environments.
  • Identify the extent to which your enterprise's existing infrastructure-as-code and other DevOps tools support CI/CD workflows for infrastructure and application components—such as code builds, machine images, containers, and serverless functions—across your hybrid multi-clouds.
  • Evaluate the growing range of commercial DevOps tools to determine whether they integrate into your various clouds' native DevOps tools, can build automated CI/CD workflows that span them seamlessly, and can manage the deepening stack of code, machine images, containers, serverless functions, machine-learning models, and other assets at the heart of your multi-cloud application environment.

What's next: full-spectrum multi-cloud DevOps workflows

Industry frameworks for building full-spectrum multi-cloud DevOps workflows that span heterogeneous public and private cloud platforms are beginning to emerge. Initiatives such as Knative and DECIDE show promise, but they are embryonic and not yet implemented in commercial solutions. Organizations such as the MultiClouds Alliance may also deliver products or services in this area.

It may take a year or two, or longer, for these to come to fruition and to be incorporated into commercial multi-cloud DevOps offerings. But it would be pointless for enterprises with multi-cloud DevOps requirements to put their tooling plans on hold until the requisite standards-based offerings are available and mature. You don't need to wait.
