How containers affect software quality

Todd DeCapua, Technology leader, speaker & author, CSC

In all the breathless excitement around container technologies, one topic is consistently overlooked: quality. It's the elephant in the room that nobody wants to discuss.

That's a problem for companies that want to leverage container technologies to accelerate their time to market. Among other things, containers pose major challenges in:

  • Developing applications concurrently
  • Managing infrastructure architecture and configuration
  • Understanding the overall functionality of your system
  • Predicting performance at scale
  • Securing the applications


The container phenomenon

Application developers have been flocking to Docker and other container tools because of the speed of delivery and portability container technology offers. Containers allow developers to bundle application components and dependencies into an image that can be run anywhere without change, from the developer's desktop all the way to a production system. With containers, developers can work on applications without worrying about the underlying host or issues like application placement, user permissions, and app permissions.
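The bundling described above can be sketched as a minimal Dockerfile; this is an illustrative example, and the base image, file names, and port are hypothetical:

```dockerfile
# Start from a pinned base image so the runtime environment is explicit.
FROM python:3.12-slim

WORKDIR /app

# Bake the application's dependencies into the image itself.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code last, so code changes don't invalidate the dependency layer.
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Once built with `docker build -t myapp .`, the same image runs unchanged via `docker run -p 8000:8000 myapp` on a developer's laptop or a production host, with no concern for what is installed on the underlying machine.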

That's a great thing for at least a couple of reasons. First, isolating code into a single, self-contained service makes it easier to update. Second, a microservices architecture makes it possible to split a large application development project across small, agile development teams without the management overhead that comes with a monolithic development process.

All these are good things. But what are you doing about quality as it relates to container technologies? What are you doing to ensure that your code is secure, performant, and functioning the way it was architected, so it will work as designed when deployed to production for your end users?

There are multiple aspects you need to consider when talking about quality considerations in a container setting. Here are a few of them:


Developing applications concurrently

Containerized development involves concurrent releases by multiple teams. Typically, these teams all leverage a shared application architecture on the same or parallel code bases, and they rely on distributed, shared components and services that are not owned by that team or organization.

The issue with using containers and concurrency in application development is scalability. A mid-size to large enterprise can have anywhere from one to 30 or more application development teams working on a project, and one or more of those teams might be working on several branches concurrently. Each branch can have one or more code bases, all with intertwined internal and external dependencies, some owned by the team and some not.

Leveraging container technologies at scale requires unraveling the twisted and dependent application architecture across many levels. Often, this is not possible within composite application architectures. And when it is possible, the extra work runs counter to the business need to speed time to market and lower cost.

Managing infrastructure architecture and configuration

Now add to the mix a continuously changing infrastructure. Not only is the overall architecture in flux, but so are the configurations within components as the number of projects, products, partnerships, and programs grows.

This ever-changing dependency between application (code branches, builds, and internal and external apps) and infrastructure (components, architecture, and configurations) becomes a complex problem pretty quickly. How do you manage all the dependencies in a container environment?

Understanding full system functionality

For end users, usability and user experience are key, but meeting those expectations can be a challenge with container technologies. A container approach often limits the ability to have all end-to-end capabilities integrated in any given desktop or environment, and it can make it difficult to ensure that full functionality is being delivered across all systems.

One problem with dividing work among teams on small code bases glued together through common APIs or microservices is that it becomes significantly harder for development teams to understand the application as a whole: the dependencies that exist between containers, for example, or the cascading impact of a service failure. In a microservices environment, a failure in a single container can ripple across the entire application.

Developers need to know what that means as it relates to application functionality. For example, what if you can't log into a website because of a "login service" version change? Was that change in the service, the code, or the infrastructure? Maybe something in the middle tier changed because of a code branch or version change, but this has not been cascaded down to your application or infrastructure version. Maybe you simply do not have this system or service as part of your container. Are you confident that you know what is happening everywhere in your stack?

Consider that, on average, there are 38 services per application. This is truly complex as it relates to quality, especially if these services and code are out of sync with what is being worked on or deployed to production. What are you doing to address it?

Predicting performance at scale

Amid the container hype cycle, many developers don't think they need an environment in which to test application and infrastructure (and, of course, configuration) performance. Instead, many simply run within their containers and guesstimate how the application would fare in production.

That's convenient, but it does not always replicate the real thing, as you can imagine: the production infrastructure is often far more powerful than a developer's laptop, to say nothing of the different infrastructure and application architectures and configurations. Suppose you have everything in a container on a laptop and want to see how it runs against 100 users. The infrastructure running that test is a single laptop. Maybe it's a super laptop, but how can any test you run on it compare with what will happen when 180,000 users log in per hour in a production environment running six clusters with eight servers per cluster? The performance is going to be significantly different between the two environments.
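To put the gap in numbers, using the figures from the scenario above: 180,000 logins per hour works out to a sustained arrival rate of 50 logins per second, before accounting for any bursts, which is a very different load profile from 100 users on a laptop:

```shell
# Back-of-envelope conversion of the production load figure above
# from logins per hour to logins per second.
logins_per_hour=180000
seconds_per_hour=3600
echo $((logins_per_hour / seconds_per_hour))   # prints 50
```

No laptop-scale test exercises the network, storage, and cluster-coordination behavior that shows up at that rate, which is why container-only performance testing tends to mislead.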

Securing the application

There is also the big question of security. How do you apply all your security policies against containers when you are developing, testing, and deploying early on and throughout the application life cycle?

How do you identify vulnerabilities in container images, and how do you prevent them from being deployed? Can you identify a vulnerability in a container app and fix it without breaking the dependencies or workflow? How do you make sure that any images you download are secure and do not contain any vulnerabilities that could break your application?
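The article names no specific tooling, but as one concrete way to answer the "prevent them from being deployed" question, an open-source image scanner such as Trivy can be wired into a CI pipeline as a gate; the image name and tag here are hypothetical:

```shell
# Scan a (hypothetical) application image before it is pushed or deployed.
# --exit-code 1 makes the scan fail the pipeline when any HIGH or
# CRITICAL vulnerability is found in the image's packages.
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:1.0
```

Gating on scan results at build time catches vulnerable base images and dependencies before they reach production, rather than after.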

Are containers really ready for the enterprise?

You have the complexities of application and infrastructure architectures in a continuous cycle of change; you can't know with any level of certainty whether your application's functionality is going to work; you don't have a way to get accurate performance results; and you don't have assurance from a security perspective, because production is a completely different environment.

There is a place for containers within your development and testing stack, but these quality challenges need to be understood and accepted before you can leverage container technologies to deliver the highest quality as fast as possible to your end users and maximize business value to your organization.

So from a quality perspective, what are you truly getting from container technology? Are you really getting DevOps by design?

What are you doing?

Docker has been around for three years now and has become more or less synonymous with container technology. The developer community has been all over the technology for some time, but it is just beginning to get into some mid-size and large organizations.

So how do you deal with all of the challenges associated with containers? How does it work when you are doing continuous integration and continuous deployment? How do you manage all the differences? How do you manage all the change and keep everything in order?

There is a lot of good that can come from container technology. But it is time to read the warning labels. Amid all the excitement and hype around containers, there has been too little discussion about quality. It is not enough for a developer to say, "It works on my desktop, so it must work everywhere."
