

The top 5 container adoption and management challenges for IT Ops

Jennifer Zaino, freelance writer/editor, Independent

Containers aren’t new, but they are making a bigger impact in IT than ever before. Among IT pros with knowledge of their company’s financial investments, close to 70% say their company is investing in containers, according to a 2017 survey conducted by Portworx, a data storage company for containers. Nearly one-third of companies are spending $500,000 or more a year on license and usage fees for container technologies, up from a reported 5% the year before. That trend has also presented several new challenges for IT operations.

Developers, of course, have eagerly embraced containers over virtual machines, drawn by Docker's introduction of an image format that makes it easy to build and distribute application code and dependencies. That helps them work more rapidly and better meet the always-pressing demands of business units and customers, says Chris Ciborowski, CEO and principal consultant at enterprise DevOps consulting firm Nebulaworks.
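
To make that workflow concrete, here is a minimal sketch using the Docker SDK for Python. The tag and registry name are placeholders, and the script assumes a Dockerfile sits in the current directory.

```python
# Build and distribute an application image via the Docker SDK for Python.
# "registry.example.com/myapp" is a placeholder repository, not a real endpoint.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build from ./Dockerfile, packaging application code and dependencies together
image, build_logs = client.images.build(path=".", tag="registry.example.com/myapp:1.0")

# Push the same artifact to a registry so any environment can pull and run it
for line in client.api.push("registry.example.com/myapp", tag="1.0", stream=True, decode=True):
    print(line)
```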

But the move into the next phase of adoption—running enterprise containers in production at scale—leaves IT operations management and staff with many questions and concerns. As a group, they’re held responsible if a website goes down or the ability to transact business is otherwise harmed. “Developers are incentivized by the speed of features and fix delivery, and they like the freedom containers give them to do their work in an agile way,” Ciborowski says. “Conversely, IT Ops is risk-averse and needs time to feel totally comfortable with using containers in prime time.”

Let the challenges begin

What are some of the container adoption and management challenges that IT Ops personnel are struggling to get their arms around? These rank among the top issues:

Adapting processes to support containers

While developers have been taking an agile approach for a long time, IT Ops staff largely haven’t been thinking in the same way. “Due to the nature of how container images are created (highly automated build, test, and release of apps and dependencies) and the velocity of container creation and scheduling (deployment), existing IT operations processes are unequipped to support container delivery beyond simple use cases,” Ciborowski says.  

Updating processes for production-scale container adoption means reexamining teams, responsibilities, and toolchains to remove constraints and increase IT operations’ flexibility and agility. One objective, for example, should be to dispense with manual oversight when putting containers into production and to fully adopt automation in its place.

“Having to go through 7 to 15 people on a decision review board doesn’t support the velocity executives are expecting when they think about containers,” Ciborowski says.
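
In heavily simplified form, replacing that review board with automation can look like a pipeline gate that promotes an image only when codified policy checks pass. The scan_results() helper and the thresholds below are hypothetical stand-ins for whatever scanning and policy tooling a team actually runs.

```python
# Sketch of an automated release gate: policy as code instead of a review board.
# scan_results() is a hypothetical stand-in for a real image-scanning tool.
import sys

MAX_CRITICAL = 0  # example policy: no critical vulnerabilities allowed
MAX_HIGH = 3      # example policy: tolerate at most three high-severity findings

def scan_results(image: str) -> dict:
    """Hypothetical: query your scanner of choice for findings on an image."""
    return {"critical": 0, "high": 1}  # stubbed result for illustration

def gate(image: str) -> bool:
    findings = scan_results(image)
    return findings["critical"] <= MAX_CRITICAL and findings["high"] <= MAX_HIGH

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "myapp:1.0"
    # The pipeline promotes or blocks in seconds, with the policy under version control
    sys.exit(0 if gate(image) else 1)
```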

Teams must also devise processes that let them master the highly distributed nature of container platforms and that capture the monitoring and logging feedback developers need in order to stabilize and enhance their applications, adds Lucas Vogel, founder of Endpoint Systems, a system integrator and developer of endpoint software.
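
One concrete piece of that feedback loop is emitting structured logs that a cluster-wide collector can aggregate across many short-lived containers. Here is a minimal sketch using only Python's standard library; the service name is an example.

```python
# Emit JSON-structured logs to stdout, the usual collection point for containers,
# so a cluster-wide log pipeline can index fields rather than parse free text.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "service": "orders",  # example service name
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order accepted")  # -> {"level": "INFO", "service": "orders", "message": "order accepted"}
```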

Choosing the right container technologies

This is a fast-paced and rapidly evolving market, with startups and seasoned tech vendors keeping the momentum going on everything from container platforms to repositories to orchestration tools. “The rapid evolution of container platform components such as orchestration, storage, networking, and systems services like load balancing is making the entire stack a moving target,” says Tom Smith, research analyst at DZone.com.

IT Ops teams are legitimately concerned that this volatility makes it difficult to keep a stable application or service running on top of containers, Smith says, and worried about “risky decision making with regards to a well-informed container implementation strategy.”

On the other hand, waiting for the industry to work its way through standardization, consolidation, and simplification can be a recipe for paralysis, leaving an enterprise’s IT operations playing catch-up because it took too long to decide on solutions and approaches.

As the innovator behind container images, Docker is likely to have a seat at the table for the foreseeable future as IT Ops starts thinking seriously about full-scale container deployment, says Ciborowski. He doesn’t think there will be just one winner across the container spectrum. Still, there’s enough churn in the market to make IT operations management and staff anxious about making smart, safe choices.

“There is up-skilling required to put a knowledge base in place, so that IT Ops understands what these container technologies do, how they are networked, what are their storage requirements, and the short- and long-term impact of their decisions,” he says. They need to be as prepared for potential future technical debt as are developers who are already familiar with the concept—that is, they must understand that their container choices can lock their organizations into a particular way forward that may require significant time and effort to unravel in order to start anew.

“IT Ops is now more exposed to technical debt creation, too,” he says. “They can make a decision on one technology today. And what happens in two years if it doesn’t have all the characteristics they need for the apps in their environment, and they end up requiring a ‘forklift upgrade’? It’s incredibly expensive, so it goes back to getting as educated as you can from the start” to make a more informed decision and minimize technical-debt exposure as much as possible.

Renat Zubairov, CEO at hybrid integration platform vendor elastic.io, advises investing in people “who know what is good and what is not so good when using container technology,” who can choose supporting technologies on a reasoned basis, and who are open to revisiting those decisions as needed to keep moving forward with containers.

Maintaining container security

Security has reared its head as a major challenge in the container world, just as it has almost everywhere else. “There is lots of innovation so far in containers and container orchestration systems, but security controls are lagging behind,” says Brian Kelly, head of Conjur Engineering at security technology company CyberArk.

A security breach of a container is almost identical to an operating system-level breach of a virtual machine in terms of potential application and system vulnerability, according to Matt Baxter, director of development at indoor mapping intelligence vendor Jibestream. “It’s critical for any DevOps team working on container and orchestration architecture and management to fully understand the potential vulnerabilities of the platforms they are using,” he says.

Orchestration platforms present “a substantial attack surface,” confirms Smith, who recommends restricting access and applying appropriate controls to help thwart problems. For instance, etcd, the primary datastore of the open-source container orchestration tool Kubernetes, currently stores secret data (objects meant to hold sensitive information) as unencrypted plain text on disk. Kubernetes’ own documentation advises administrators to limit access to etcd to admin users, or to wipe or shred disks once etcd no longer uses them.
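
That point is easy to verify with the official Kubernetes Python client: a secret read back from the API is merely base64-encoded, not encrypted, so anyone with access to the object, or to etcd itself, can recover the plain text. The secret name and namespace below are examples.

```python
# Demonstrate that Kubernetes secrets are base64-encoded, not encrypted.
# Assumes a working kubeconfig; "db-credentials" in "default" is an example secret.
import base64

from kubernetes import client, config

config.load_kube_config()  # use local kubeconfig credentials
v1 = client.CoreV1Api()

secret = v1.read_namespaced_secret("db-credentials", "default")
for key, value in (secret.data or {}).items():
    # One base64 decode away from plain text: this is why access to etcd
    # and to the secrets API must be tightly restricted.
    print(key, base64.b64decode(value).decode())
```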

Kelly also notes that security can suffer as a result of the smaller footprint of containers and the highly dynamic nature of container orchestration systems, which leads to a server landscape that is no longer predictable. “The first step to solving this is to at least ensure that every computing resource (containers included) has its own machine identity,” he says, and that requires applying familiar privileged access management concepts to those entities.

“Without that basic knowledge, it will be very difficult to control which applications have access to which databases or to shut down an application that is suspected to have been breached, or to properly manage which teams have permissions to update which applications,” he says.
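
In practice, that often means each container presents its own identity to a secrets broker and receives only the credentials it is entitled to. The sketch below illustrates the idea; the token path, broker URL, and endpoint are all hypothetical, and real products (Conjur among them) have their own client APIs.

```python
# Hypothetical sketch: a container exchanges its machine identity for a
# database credential. The broker URL, token file, and endpoint are
# illustrative only, not any real product's API.
import requests

IDENTITY_TOKEN_PATH = "/var/run/identity/token"  # injected per container (hypothetical)
BROKER_URL = "https://secrets.example.internal"  # placeholder endpoint

def fetch_db_credential() -> str:
    with open(IDENTITY_TOKEN_PATH) as f:
        identity = f.read().strip()
    # The broker decides, per identity, which credentials this workload may see,
    # and can revoke them if the container is suspected of being breached.
    resp = requests.get(
        f"{BROKER_URL}/v1/credentials/orders-db",
        headers={"Authorization": f"Bearer {identity}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["password"]
```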

Optimizing your infrastructure for containers 

Among the challenges that enterprise IT operations management teams face with containers is the need to rethink the underlying infrastructure to accommodate the technology. While they may not want to embrace the public cloud for critical applications just yet, IT Ops managers do need their on-premises infrastructure to be able to scale up and down in the same fashion when applications are containerized, says Nebulaworks' Ciborowski.

“The reality is that to maximize the positive impact of container adoption, you must think of and treat your on-premises infrastructure as if it is a public cloud provider,” he says. Cloud-native applications and microservices for containers, he says, are best supported by infrastructure that has the same characteristics as public cloud offerings.

That means “infrastructure that can be consumed (configured and managed) through well-known and -documented APIs; that provides elastically scalable compute, storage, and networking (as much as possible); and that provides an easy way to broker additional supporting services via a service catalog,” Ciborowski says. 
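
As a sketch of what “consumable through well-known APIs” can mean in practice, here is how a script might request capacity from an on-premises platform the same way it would from a cloud provider. The endpoint and payload shape are hypothetical illustrations, not any real product's API.

```python
# Hypothetical sketch: treating on-premises infrastructure like a cloud provider
# by provisioning through an API call instead of a ticket queue. The endpoint
# and payload fields are illustrative, not any real product's API.
import requests

API = "https://infra.example.internal/v1"  # placeholder on-prem control plane

def provision_compute(cpu: int, memory_gb: int, count: int) -> list:
    resp = requests.post(
        f"{API}/instances",
        json={"cpu": cpu, "memory_gb": memory_gb, "count": count},
        timeout=30,
    )
    resp.raise_for_status()
    return [inst["id"] for inst in resp.json()["instances"]]

# Scale up for a containerized workload exactly as a cloud autoscaler would
node_ids = provision_compute(cpu=4, memory_gb=16, count=3)
```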

Dealing with the increased complexity that containers bring

Lifecycle management for containers brings challenges, says elastic.io's Zubairov. "When people decide to start using containers, they usually assume that this is the same as or at least similar to using virtual machines. In reality, containers differ significantly from virtual machines." As a result, many procedures, tricks, and tools considered best practices for virtual machine lifecycle management cannot be applied to containers. "IT Ops managers need to educate themselves and their teams on this matter in order to avoid costly issues," Zubairov says.

Container image structure, in which each layer is stacked on top of the ones below it, also means that if you change anything in the first layer to update a container, you need to make appropriate changes in all the layers that follow, he says. "Plus you need to know what these changes should be, since due to the layered structure there can be hundreds, if not thousands, of different combinations of what needs to be changed in subsequent layers," he explains.
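
You can inspect that stacking directly with the Docker SDK for Python: each entry in an image's history is a layer, and a change to a lower layer forces everything built on top of it to be rebuilt. The image tag below is a placeholder.

```python
# Inspect the layer stack of a local image with the Docker SDK for Python.
# "myapp:1.0" is a placeholder tag; point this at any image you have locally.
import docker

client = docker.from_env()
image = client.images.get("myapp:1.0")

# Layers are listed newest-first; a change to a lower (earlier) layer forces
# every layer above it to be rebuilt, which is the update problem described above.
for layer in image.history():
    print(layer.get("Id", "<missing>"), layer.get("CreatedBy", "")[:60])
```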

Add to this complexity the fact that the move to containers usually goes hand-in-hand with an architectural move from monolithic applications to more granular microservices, according to CyberArk's Kelly. “This is usually done to increase feature development speed by decoupling components within the application into fully independent services, deployed into their own containers,” says Kelly. Such complexity can also be cause for IT Ops to be cautious about adoption.

“While a monolithic application might be difficult to scale or maintain, it doesn’t have to worry about network reliability when calling internal components,” he says. “On the other hand, a microservices-based system will have to be coded to deal correctly with slow network latency, back pressure, partitions, interservice authentication, discovery, monitoring, and more.” The bad news: Basic container technology doesn’t offer much to help with these challenges. The good news: Service meshes (such as Istio) or container orchestration systems (such as Kubernetes) make this world a little less painful, Kelly says.
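
To make one of those concerns concrete, here is a minimal retry-with-backoff sketch for a call between two services; the inventory-service URL is a placeholder. This is exactly the kind of per-call defensive logic that a monolith's in-process calls never need, and that a service mesh such as Istio can push down into the platform instead.

```python
# Minimal retry with exponential backoff for an inter-service HTTP call,
# the kind of defensive code a microservice needs against flaky networks.
# The inventory-service URL is a placeholder.
import time

import requests

def call_with_retries(url: str, attempts: int = 4, base_delay: float = 0.5) -> dict:
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=2)  # never wait forever on a peer
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure upstream
            time.sleep(base_delay * 2 ** attempt)  # back off: 0.5s, 1s, 2s, ...

stock = call_with_retries("http://inventory-service/v1/stock/sku-123")
```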

Bring on the containers

There is no getting around the fact that containers will challenge your IT Ops organization. But there's great value in being able to configure a container image once and then run it anywhere, on any platform, at any scale. Such platform agnosticism, as well as the virtues of enabling more consistent development, testing, and production environments and of supporting simpler updates, means that any challenges teams face in adopting containers will be worth taking on.
