
The state of containers: 3 build and deploy trends to watch

David Linthicum, Chief Cloud Strategy Officer, Deloitte Consulting

Containers revolutionized the way enterprises do software development inside and outside of the cloud. And enterprises responded in a big way.

Most went all in with containers last year, according to the Portworx Annual Container Adoption survey. About 32% of companies spent over $500,000 on license and usage fees for container technology in 2017, up 5 percentage points from 2016. And 451 Research says containers are beginning to replace virtual machines. It expects the $1.5 billion market to grow to nearly $3 billion in 2020.

Now several trends are poised to take containers to the next level—if the industry can overcome some challenges along the way. Here's what to expect.

Container ecosystem continues to build

The appeal of container technology lies in lightweight virtualization and portability and, for development and operations teams, in the ability to abstract the application and its platform into a single, simple-to-manage container.

Enterprise adoption to date has been mostly around new implementations. While IT operations has shown interest in containerizing existing workloads, that's been difficult to justify. A significant amount of refactoring is required before applications can take advantage of container architecture features, such as microservices.

But it’s not all bad news. New applications that leverage container technology are much easier to manage with the emerging and continuously improving class of container orchestration tools such as Kubernetes. Moreover, a growing ecosystem of third-party tools and technology supports containers, including in the areas of data persistence, monitoring and management, security, and integration with existing cloud platform features, such as serverless computing.

Kubernetes leads—for now

Google’s Kubernetes has become the cloud container orchestration platform that everyone wants to use, but Docker's Swarm and Apache Mesos are close behind. And those who contribute to the Kubernetes open-source project aren't resting on their laurels. This year, less than three months after the release of Kubernetes 1.10, Kubernetes 1.11 is nearly ready.

This latest release brings greater stability and enhancements to Custom Resource Definitions (CRDs). You can also leverage CoreDNS as the DNS plugin for the cluster. (CoreDNS is a domain name system module that will eventually replace KubeDNS as the de facto DNS plugin for Kubernetes.) Kubernetes 1.11 also adds support for raw block volumes to the Container Storage Interface (CSI).
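
To make the raw block volume support concrete, here's a minimal sketch, assuming the official Python kubernetes client and a hypothetical CSI-backed storage class, of a PersistentVolumeClaim that asks for a raw block device rather than a filesystem:

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
core_v1 = client.CoreV1Api()

# Claim a raw block device (volume_mode="Block") instead of the default filesystem.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="raw-block-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        volume_mode="Block",                # raw block volume support added via CSI
        storage_class_name="csi-example",   # hypothetical CSI-backed StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```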

Here's what's on my radar for cloud container orchestration in 2019.

1. Orchestration tools evolve, from centralized to distributed

Right now, most companies approach container orchestration by centralizing containers within a single orchestration tool instance. Moving forward, however, container orchestration tools will increasingly be used in a distributed manner. 

That means you'll have clusters that work seamlessly with other clusters, intra- or inter-cloud. While you can hack your way to distribution today, you need a set of open approaches that standardize how container orchestration distribution works, as well as how it’s managed, monitored, and secured.
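
To see why open approaches matter, consider what hacking your way to distribution looks like in practice today: client-side plumbing that walks a list of clusters one at a time. Here's a rough sketch, assuming the Python kubernetes client and hypothetical kubeconfig context names; everything beyond this, such as coordinated scheduling, policy, and failover across clusters, is what a standard would need to define:

```python
from kubernetes import client, config

# Hypothetical kubeconfig context names, one per cluster (intra- or inter-cloud).
CONTEXTS = ["prod-us-east", "prod-eu-west", "on-prem"]

for ctx in CONTEXTS:
    # Build a separate API client per cluster from the same kubeconfig file.
    api_client = config.new_client_from_config(context=ctx)
    core_v1 = client.CoreV1Api(api_client=api_client)

    nodes = core_v1.list_node().items
    pods = core_v1.list_pod_for_all_namespaces().items
    print(f"{ctx}: {len(nodes)} nodes, {len(pods)} pods")
```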

DevOps has bolstered application development, and it needs tighter hooks into container orchestration as well. You can find tools that manage the containers making up clusters within a container orchestration tool, but, as with orchestration distribution, tighter integration is needed, and the effort should be led by open standards rather than third-party tools. At the top of the list: shared, standard testing scripts, deployment configurations, and resource management.

2. Container performance and ops management: Standards will evolve—slowly

Performance management is the ability to monitor the performance of container orchestration systems, as well as clusters and containers, down to the microservices level. While third-party tools play in this space, standard approaches are needed that provide consistent interfaces for all performance management tools, including analytics and proactive performance management. 

Whether you deploy container-based systems as single containers or into clusters that hold hundreds of containers, you face performance issues that can't be solved just by placing those systems in a container orchestration platform. Resources are consumed both within and between containers, and, in many cases, containers reach resources outside the container or cluster entirely, such as a legacy API or command-line access to a database. 
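
What you can observe out of the box today stops at roughly per-container CPU and memory. The sketch below, assuming the Python kubernetes client and a cluster running the metrics-server add-on, pulls those numbers from the metrics.k8s.io API; note that it says nothing about I/O or about the resources a container reaches outside its cluster, which is exactly the gap:

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# The metrics.k8s.io API is served by the metrics-server add-on (or an equivalent).
pod_metrics = custom.list_cluster_custom_object(
    group="metrics.k8s.io", version="v1beta1", plural="pods"
)

for pod in pod_metrics["items"]:
    pod_name = pod["metadata"]["name"]
    for container in pod["containers"]:
        usage = container["usage"]
        # CPU comes back as a Kubernetes quantity such as "12345678n" (nanocores),
        # memory as a quantity such as "52428Ki".
        print(f'{pod_name}/{container["name"]}: cpu={usage["cpu"]} memory={usage["memory"]}')
```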

Expect the complexity here to only increase. The Rube Goldberg-style architectures that teams cobble together today become the root cause of performance issues that are almost impossible to diagnose. 

The industry arrived here for a few reasons: First, those who designed open container and container orchestration systems had no idea how the systems would be used and abused. Second, while third-party tools are helpful, they don’t cover the same performance-affecting domains, such as I/O, and they take very different approaches to container-based performance management. 

We need a common, open API approach, with native interfaces that provide the deep level of access you need to resolve container performance problems, whether inside or outside the container or cluster. This will be a hard problem to solve, but it's on the critical path if containers are to succeed going forward.

3. Container security and governance: New approaches, technology

The Kubernetes API server acts as the front door to a cluster, which means it's exposed in every deployment, since the cluster is managed through that API. That creates an open door that needs to be protected. Authentication controls access to this endpoint, but security can fall short because it's possible to inadvertently expose the API in a way that doesn't require authentication. 

Yikes. 

Luckily, most Kubernetes deployments require authentication for this endpoint. Unfortunately, Tesla still exposed its cluster inadvertently when it opened the dashboard that forms part of its main Kubernetes API service to the Internet without authentication.

In the Tesla case, attackers hid the malware behind an IP address and configured mining software to use a non-standard port. The attackers lowered the amount of CPU resources used, to help make the illicit mining harder to detect.
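
A crude but useful check is to probe, from outside the cluster, the endpoints you believe are locked down. This is only a hypothetical sketch (the address is a placeholder, and it uses the third-party requests library); a properly secured API server or dashboard should answer an anonymous request with 401 or 403, never with data:

```python
import requests

# Placeholder address; substitute the API server or dashboard URL you expect
# to be locked down. TLS verification is disabled only because ad hoc probes
# like this often hit self-signed certificates.
API_SERVER = "https://203.0.113.10:6443"

resp = requests.get(f"{API_SERVER}/api/v1/namespaces", verify=False, timeout=5)
print(f"HTTP {resp.status_code}")
if resp.status_code == 200:
    print("WARNING: cluster data was served to an unauthenticated request")
```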

Missing from the open-source container orchestration code base is the ability to tightly integrate these tools with identity and access management (IAM) systems, which would allow the security of each cluster or container instance to be managed more effectively. You can count on better security in forthcoming releases of all orchestration technology, including tighter integration with IAM. 
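
What Kubernetes does give you natively is role-based access control (RBAC), the layer an IAM integration would ultimately drive. As a rough sketch, assuming the Python kubernetes client, a hypothetical namespace, and a user identity that an external IAM or OIDC provider would authenticate, you can scope a read-only role to one namespace and bind it to that user (the manifests are expressed as plain dicts, which the client accepts as request bodies):

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()
ns = "team-a"  # hypothetical namespace

# Read-only Role scoped to a single namespace.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": ns},
    "rules": [{"apiGroups": [""], "resources": ["pods"],
               "verbs": ["get", "list", "watch"]}],
}

# Bind the Role to a user that an external IAM/OIDC provider would authenticate
# (the username is hypothetical).
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": ns},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": "pod-reader"},
    "subjects": [{"kind": "User", "name": "alice@example.com",
                  "apiGroup": "rbac.authorization.k8s.io"}],
}

rbac.create_namespaced_role(namespace=ns, body=role)
rbac.create_namespaced_role_binding(namespace=ns, body=binding)
```

The gap described above is everything around this: provisioning such bindings from the IAM system of record, keeping them in sync, and auditing them across every cluster.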

The evolution of container security will be a huge focus through 2020, due to the steady stream of security issues that have been found within container and container orchestration technologies. The industry needs to go beyond IAM and IAM integration to focus on encryption services mixed with persistent storage, database integration that supports in-flight and at-rest data for most major databases, and container- or non-container-based storage. 

But here's the challenge: In order for the open-source container players to provide better security, they must provide better interfaces so that third-party container security providers can build better security software to address the issues. On one side sit the open-source projects and their users; on the other sit the for-profit vendors. These two interdependent groups must work together to build less-complex security solutions. So far, that's been a slow process. 

You won't get it all anytime soon, so plan accordingly

As containers have grown in popularity, the list of things enterprise development shops need has grown quite long. The features available in container development tools won't catch up with the demand for new ones anytime soon, so container developers will need to be creative about finding solutions in the meantime. They could wait in the hope that common solutions show up, but even if those solutions eventually arrive, waiting will certainly take longer than working around the gaps now. 

So what should you do now? The trick is to plan out your use of containers for the next few years. Consider what you need—and what you're likely to get. The good news is that container standards, such as Kubernetes, are moving new releases forward quickly. The bad news is that those releases may not address your needs. In the meantime, you'll have to either wait for the code to show up or be clever in how you fill the gaps to build and deploy your containers. 

The fundamentals, such as security, data, storage, and networking, will be the focus over the next few years. Eventually, the focus will shift to advanced distributed architectures, but in the meantime some things can only be awkwardly cobbled together.

Things will get better, but not right away. Take that into account when formulating your container plans.
