
Container interoperability: Do standards really matter?

David Linthicum, Chief Cloud Strategy Officer, Deloitte Consulting
Containers on a ship

"We didn't standardize our tech, just a very narrow piece where it made sense."

After Docker founder and CEO Solomon Hykes made the statement above earlier this month, the community of IT professionals who currently use containers may have experienced a growing sense of unease. 

Given Hykes' sentiment, one might wonder: if you leverage current container standards, what will become of those standards going forward? Will they be proprietary to one vendor or provider? How much interoperability is possible? And how much of the standards will the community control?

At issue is the poor track record of many standards.  While some, such as the standards surrounding Linux, have worked somewhat well, hundreds of others have fallen onto the standards trash pile, having either been absorbed into a proprietary technology or disappeared altogether. 

That said, the container pitch is all about standards, including the rise of the core de facto standard: Docker. While it seems as though Docker is committed to at least some support for standards, it is, at the end of the day, a business. As such, its management will operate in its own self-interest, and you should expect nothing less. And that's not always a bad thing.

So what does all this mean for standards and how you choose what technologies to use?

Where the container competition stands

While the Docker name has become synonymous with containers, it's not the only container game in town. Other vendors, such as CoreOS, Google, Microsoft, and Amazon, view containers as a huge business opportunity as well. All are working in the Linux Foundation coalition, which also includes Apcera, Cisco, EMC, Fujitsu Limited, Goldman Sachs, HP, Huawei, IBM, Intel, Joyent, Mesosphere, Pivotal, Rancher Labs, Red Hat, and VMware. That's pretty much every organization with a stake in the still-evolving container ecosystem.

A group of container loyalists believe that Docker should not become the way that the industry defines containers.  CoreOS has Rocket, a competing container runtime, as well as its own container format.  In addition, Google, Red Hat and VMware have aligned with CoreOS. 

While Docker and CoreOS looked like they were going to battle it out in the market, that hasn't happened. Both have decided to cooperate, at least for now, and both are stakeholders in the Linux Foundation's Open Container Project (OCP).

The Docker format and runtime form the foundation of the evolving OCP standard, and Docker, to its credit, will provide both the draft specifications and the code around its image format and runtime engine. This has jump-started the project. Now the container community is waiting to see what will come from it.

While Docker is giving up some of its technology to form the standard, it can’t give it all up and still have a viable technology business. Discussions about these kinds of tradeoffs happen all of the time in technology companies, not just with Docker.  The container community must consider the fact that, if standards are driven by companies, then those companies must be viable long term. 

Container standards: Going beyond the core

The core container format and runtime are of most interest to IT practitioners looking for standards. But let's move out from the core to the cluster managers that most containers will use. Here things are a bit more exclusive (proprietary and less interoperable) than the core containers are. Right now, the core format and runtime seem to be stewarded jointly by CoreOS and Docker.

Google Kubernetes

Google Kubernetes is an open source container cluster manager that handles how containers scale and become more resilient. It can schedule containers, allocate them across hosts, and manage disk space and storage.
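
To make the scaling piece concrete, here's a minimal sketch using the official Kubernetes Python client (the kubernetes package on PyPI); the deployment name "web" and the "default" namespace are hypothetical, and it assumes you already have a running cluster and a local kubeconfig:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (~/.kube/config).
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Fetch a deployment and scale it out; name and namespace are hypothetical.
    deployment = apps.read_namespaced_deployment(name="web", namespace="default")
    deployment.spec.replicas = 5
    apps.patch_namespaced_deployment(name="web", namespace="default", body=deployment)

Note that the caller only declares the desired replica count; the cluster manager does the actual work of scheduling the extra replicas onto nodes and replacing them if they fail.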

Kubernetes pretty much set the standard for what a container cluster manager should do. As such, it has the largest market share, based on what I'm seeing out there. But some surveys show Google coming in second to Docker, with Google establishing a larger presence in larger enterprises. These sorts of contradictory data points make picking the standard-bearers even harder. Most companies weigh a standard's level of adoption, as well as its relevance to their needs.

Docker Swarm

The Docker Swarm cluster manager offers pretty much what Kubernetes offers, including clustering, scheduling, and integration capabilities that let developers build and ship multi-container/multi-host distributed applications.  It includes all of the necessary scaling and management for container-based systems. 
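
As a rough illustration of that scheduling model, here's a sketch using the Docker SDK for Python (the docker package); the image and service name are hypothetical, and it assumes a local Docker daemon that hasn't yet joined a swarm:

    import docker

    client = docker.from_env()  # connect to the local Docker daemon
    client.swarm.init()         # one-time: turn this node into a swarm manager

    # Declare a replicated service; Swarm schedules the replicas across nodes.
    service = client.services.create(
        "nginx:latest",
        name="web",
        mode=docker.types.ServiceMode("replicated", replicas=3),
    )
    print(service.name)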

But Docker Swarm is more of a product than it is a standard, which is perhaps what Docker intends.  While Docker may be the keeper of the core, the value-added technology that you need to get containers into production will likely remain within the domain and control of the vendor who invented them. 

CoreOS Tectonic

CoreOS has a play here as well. Its Tectonic cluster manager, essentially Kubernetes as a service, is available on Amazon Web Services or as an on-premises product. Tectonic is compatible with both Docker and CoreOS Rocket containers, as well as with the container cluster managers listed above.

Apache Mesos

Finally, the open source Apache Mesos cluster manager is known for stability, and you can use it with Docker to provide scheduling and fault tolerance. Mesos provides a web user interface as its cluster management dashboard and is commonly used in larger container installations, where scalability can't be compromised.
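
For a sense of what that management surface looks like, here's a sketch that reads a Mesos master's state endpoint over HTTP with Python's requests library; the master hostname is hypothetical, and it assumes a reachable master on the default port 5050:

    import requests

    # The Mesos master publishes cluster state as JSON; the hostname is made up.
    state = requests.get("http://mesos-master.example.com:5050/state.json").json()

    print("Mesos version:", state["version"])
    for framework in state["frameworks"]:
        print(framework["name"], "is running", len(framework["tasks"]), "tasks")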

Some container technologies will remain proprietary

Going forward, it's likely that the container management, security, and storage markets will be as big as the container space. However, these products are unlikely to be built on open standards. Yes, there will be a few open source products. But I suspect they won't get the same level of support as the proprietary, for-profit products, which will focus more on the deeper problems around deploying, securing, and managing containers.

I am not implying that open standards and related products won't have any role to play.  Some will exist at the core of Docker and containers. But the real solutions will be proprietary, and that’s actually not a bad thing. 

Why standards won't save you

To assess the value of container standards, consider when being open matters, and when it's OK not to be. Frankly, standards won't save you as you move applications to containers, and I'm not sure they ever did in most areas of software.

So, what should your container standards strategy be? First, understand that container standards will make the most difference at the core. The format and the runtime should both be standardized, and it appears that both Docker and CoreOS are heading in that direction with common container standards. That's the good news.

Moving out from the core containers, you'll have other problems to solve. Container cluster managers, for instance, may include some open source options. When selecting the right container cluster manager for your needs, however, you may find that proprietary technology is a better fit. For most enterprises, the same can be said for security, storage, governance, and other services that you'll need when deploying applications based on clusters.

Organizations that look to open standards as a kind of policy, even a culture, may need to make a few compromises when moving to containers. At the end of the day, you need to consider not just the overall cost of the technology, but your ability in the long term to support the technology, and to get updates as needed. 

Open source or standards-based offerings may not provide the value that you need to deploy and operate containers.  In some cases it would be penny-wise and pound-foolish to follow this path if your standards-based container technology lacks important features, creating additional costs around missing capabilities, support, and performance. 

Think about needs first, not standards

Some enterprises are so in love with standards that they allow them to drive most of their technological decisions. While this leads to a smaller software bill, it can also lead to inefficiencies that cost the company millions. So, while you can debate whether or not to use standards-compliant products today, your best bet is to understand your own requirements, and then match the right technology to them.

In other words, your best path is the selfish path.  Just as container technology providers are looking out for their own bottom lines first, you need to look out for yours.  Standards are good things to have around, and some do work.  But container standards today are a hit-or-miss proposition. 

Don’t get me wrong: Standards are a good thing. They provide businesses with the ability to get on the same page, in terms of doing things in the same way. We figured out common electrical outlets, network connections, even information interchange for financial data. In the nascent world of containers, however, some things are still just a bridge too far.

Image credit: Flickr
