
Containerization: What IT Ops needs to know before deployment

Linda Dailey Paulson, Freelance writer

DevOps needs to be faster and more responsive to run at the speed of business. But meeting customer demand requires increased IT Ops resources, which must be delivered promptly; otherwise, departments fall back on shadow IT, and that causes more problems than it resolves.

Traditional IT is slow because capacity, databases, and security must be provisioned through typically disconnected processes. Dev teams aren't happy when IT takes days or weeks to deliver. A new architectural approach and the use of containers may finally offer something close to the operating model that development teams have pioneered.

A containerized architecture enables IT to scale faster, exploit APIs, and increase automation to deploy software, freeing humans to do value-added tasks that technology cannot deliver.

Successfully adopting this approach requires organizations to create and nurture an IT culture in which the use of containers and microservices can take root. Once established, these technologies allow IT Ops to become more flexible and responsive to an ever-changing array of organizational needs.

We asked several experts to weigh in on what changes are needed before IT Ops can adopt this new deployment model.


Containerized architecture requires a cultural shift

Moving an organization to some combination of microservices and containers requires a cultural change. Technology obstacles may be present, but they are not as significant a barrier to adoption as are the cultural challenges.

An organization’s culture plays “a huge role” in such a move, says David Linthicum, senior vice president at Cloud Technology Partners.

"Either they accept new technology into the process, or they actively push back on it. There does not seem to be a middle ground. You need to understand how to change those cultures before you implement the technology."
-David Linthicum

Support is especially needed from executives who can vocally back such changes, says Rahul Tripathi, vice president of product management for IT operations management at Hewlett Packard Enterprise Software. “Culture,” he says, “will follow the technological shift.”

Another advantage of steady, slow adoption is that as others on the team and in the organization (some of whom may have initially expressed skepticism) start seeing benefits, the adoption process becomes less of a push and more of a pull, less a frantic wrestling match and more an elegant dance.

Moving toward loosely coupled architectures

Many organizations' significant existing investments in virtual machine (VM) technology keep them paying the so-called “VM tax” fiscal quarter after fiscal quarter, rather than considering containers or other loosely coupled architectures that would allow them to rapidly change their applications.

“What’s limiting most organizations in moving to containers and microservices is really the cost of doing so,” says Linthicum. “They certainly have the interest, and many in the organization are new-tech fanboys. But if the resources are not there, it won’t happen until there is a compelling event.”

That event could be a competitor that is using containers and leveraging that technology to gain market share, thus spurring your organization to move forward.

It’s not simply significant investments keeping organizations locked into VM technology. It’s recognizing that this is a challenge worth solving, says Tripathi. Otherwise, the painful problems will persist, and you run the risk of competitors leaping ahead of you.

What to containerize, what to leave alone?

Perhaps the most useful piece of information experts have shared is the reassurance that making a move from VMs to containers is not an all-or-nothing venture. Some apps should go to containers, but some should not. Here are a few recommendations:

  1. Look at your existing applications and assess the benefits associated with such a move, suggests Linthicum. When the amount of work and money required is factored in and balanced against the business case, such a transition may not make sense. Without a clear business case, an app should not be containerized. Besides, says Tripathi, there is no benefit to be derived from moving all your legacy applications into containers. Many applications will work fine as intended, right where they are.
  2. Consider security, which is frequently a wrench thrown into the works, thus inhibiting change. It’s truly a non-issue, in Tripathi’s view. Containers are no less secure than existing environments. “When anything new is presented, there will be some who will avoid it. Meanwhile, tech enthusiasts and visionaries will show their peers the benefits of adoption, and the laggards will always be present.”
  3. As Tripathi notes: “As you start creating microservices, you can keep data separate and stateless, which will allow you to scale it out over time. If you have a legacy app that is continuing to do what it’s doing with no benefit associated with moving to microservices, why go through the pain?”
  4. You can run containers on virtual machines if you don’t want too much change. Once you’re comfortable with those initial steps, then you can decide which applications can run on bare metal, which should remain in VMs, and which are suited to run directly inside a container.
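Item 4 above can start as small as a single image definition. The sketch below is a minimal Dockerfile for a hypothetical Python service (the `app.py` file and port 8080 are assumptions, not from the source); the resulting image runs unchanged whether the container host is a VM, bare metal, or a managed container service, which is what makes the gradual path possible.

```dockerfile
# Minimal image for a hypothetical Python service (app.py).
FROM python:3.12-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached
# between builds when only application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY . .

# The service is assumed to listen on port 8080.
EXPOSE 8080
CMD ["python", "app.py"]
```

Built with `docker build -t myservice .` and started with `docker run -p 8080:8080 myservice`, the same image can be tested on a developer laptop and then run on an existing VM fleet without changing the VM layer at all.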

Bear in mind the container advantages

There are some compelling reasons for making the transition from VMs to containers. The gain associated with transitioning IT operations to containers and microservices is that you can reduce your cost of operations and have a better model for working with your applications across the enterprise, including into the cloud, says Tripathi.

Containers simplify operations and reduce the costs associated with running VMs, and you can leverage existing investments by containerizing gradually. Start by using containers with new, non-mission-critical applications, which gives you and your team the opportunity to create and deploy without sinking into the self-defeating, anxiety-inducing mantra “If something goes wrong, I’m toast.” You can deploy, observe, collect data, refine, rinse, and repeat until you’re satisfied the application is working.

The new architecture, all wrapped up

Monolithic architectures make it difficult to rapidly update or upgrade critical functions. In moving from these older structures into microservices and containers, the issue is how the service or application is wrapped, says Srikanth Natarajan, head of technology strategy for HPE Software’s IT Ops business.

“Containers and microservices are not one and the same. Containers can be used without microservices; however, they work together well.” 
—Srikanth Natarajan, HPE Software

It’s also worth considering that no single vendor owns container technology, so there is no risk of being locked into a single platform by deciding to adopt containerization. “Aligning applications to run in containers is not as big a leap as you may have initially imagined,” says Natarajan.

If you need to scale out and add more services, the move becomes a question of benefits versus effort: as you scale, you simply run more container instances. And by wrapping each application in a container, applications written in different languages can be brought together more easily.
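The polyglot point above is easiest to see in a composition file. The sketch below is a hypothetical two-service stack (the service names, directories, and languages are illustrative, not from the source): each service keeps its own runtime, but containers give them a uniform way to be built, networked, and scaled together.

```yaml
# Hypothetical two-service stack for Docker Compose.
# Each service is built from its own directory and can use a
# different language and runtime; the container boundary makes
# that difference invisible to the rest of the stack.
services:
  api:
    build: ./api          # e.g., a Python web service
    ports:
      - "8080:8080"
  worker:
    build: ./worker       # e.g., a Go background worker
    depends_on:
      - api
```

With a file like this, `docker compose up` starts both services on a shared network, and `docker compose up --scale worker=3` runs more worker instances without touching the api service.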

Best practices emerging

Natarajan warns against what he calls “microservices sprawl.” “If you isolate 500 services, guess what: Unless you provide an easy way for users to be aware of these pieces and provide them with a way to manage them, it’s going to be a nightmare,” he says. “You still need to know what’s going on and have good tooling around it. You can’t introduce things without knowing how you’re going to manage them.”

There are not yet industry best practices for working with containers and microservices, says Natarajan. “We are still working through the details.” Until more best practices emerge, IT Ops teams can use common sense for isolating hosts or deploying applications into production. They need to ask: What scanning approaches are used? Is the application using proper, secure libraries and the correct tooling?

“No situation is identical,” he adds. “Make a choice based on your tolerance level. Evaluate your security posture and decide what to do. And, again, there’s no need to move things that aren’t relevant.”

“This is a fundamental change that must occur systemically, meaning that all parts of the process must be updated,” says Linthicum. Automation will, for example, replace processes that were once manual. “Microservices, containers, and cloud computing are just part of the mix.”

The idea is to make gradual, thoughtful change. Linthicum suggests that teams implement the new processes in steps, over time, including the use of containers and microservices. “Automate 20% of the development and operations processes one year, 30% the next, and perhaps 50% the year after that,” he says. “It’s slow for sure, but this will ensure that the new skills, processes, and tools are successful.”

How will containers serve your IT operations?

When considering a transition into containers, there are some important questions to ask to determine whether it will be a fit for your organization. This starts with whether you can afford to make such a move. “If you can’t afford to transform the ops portion of IT to containers,” advises Linthicum, “don’t try it.”

If the in-house talent is ill-equipped for such a transition, new talent will be needed. Whether you hire or train, either path adds cost, and training will extend the process since, as Linthicum says, “this is complex technology that takes some time to master.”

Then you will need to assess whether you have a compelling business case. Remember, to succeed your organization needs the following:

  • A culture in which container and microservices use can thrive, supported by executives.
  • A clear business case for every application you containerize; legacy applications that lack one can stay where they are.
  • A systemic plan for the transition based on solid common sense, one that both considers and updates the entire infrastructure.
  • Staffers in place who are properly trained to work with microservices and containers as well as the tools and technologies supporting such a shift.
  • A clear management strategy that also embraces a gradual, thoughtful change process.

Ironically, increasing your organization’s IT Ops speed and responsiveness isn’t a project you need to dive into at warp speed to succeed. Cultivating and developing an environment in which containers and microservices can be firmly rooted is the first step toward creating flexible, responsive IT Ops.
