How to secure containers: Actions every enterprise should take

While working with developers to bring containers to production, security operations (SecOps) teams often face conflicting requirements: they must balance the agility and time-to-market demands of the development side of the house against the visibility and control mandated on the security side.

Fortunately, there are best practices you can fall back on to help ensure that you have an acceptable level of security and control in place when enabling containers. These are tried-and-true methods used by customers who are actively bringing containers to production.

2016 State of DevOps Report

Previously, I discussed the first three of my seven recommended best practices for securing enterprise container environments:

  • Proactively eliminating vulnerabilities before you deploy containers
  • Asserting control over the source and content of images, and
  • Hardening your container images, daemons, and host environment.

This time, I wrap up with the remaining four: integrating security into your CI/CD tooling, enforcing role-based access control, automating runtime threat detection and defense, and performing regular security audits.

Integrate security into your CI/CD tooling

If you use containers, you are probably using continuous integration (CI) or continuous delivery (CD) pipeline tools such as Jenkins, TeamCity, or CircleCI.

One of the best places to detect and fix security vulnerabilities and configuration errors is your CI/CD workflow: when you build a new container image, use the CI tool to initiate a security scan, then import the scan results back into the CI tool's native console.

Using CI tooling for security is not a novel concept, but it is a developer-friendly way to incorporate security into your development lifecycle. Note, however, that security scanning, while important, does not replace writing secure code and maintaining a secure development lifecycle practice in the first place.
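As a sketch of what such a gate might look like, the shell step below fails a CI stage when a scan report contains high-severity findings. The scanner, report format, and file names are assumptions for illustration; here a sample report is written inline so the gate logic is visible, whereas a real pipeline would generate the report by scanning the freshly built image.

```shell
# Hypothetical CI gate step. In a real pipeline, a scanner would produce
# scan-report.txt from the image built earlier in this stage; the report
# lines below are sample data for illustration only.
REPORT=scan-report.txt
printf 'CVE-2016-2107 HIGH openssl\nCVE-2016-7543 LOW bash\n' > "$REPORT"

# Fail the stage if any HIGH or CRITICAL finding is present.
if grep -Eq 'HIGH|CRITICAL' "$REPORT"; then
    echo "security gate: FAIL -- high-severity vulnerabilities found" | tee gate-result.txt
else
    echo "security gate: PASS" | tee gate-result.txt
fi
```

In Jenkins, TeamCity, or CircleCI, a step like this would run after the image build and before any push to a registry, so a failing gate stops vulnerable images from ever leaving CI.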

Enforce proper role segregation and access control for your container production environment

Many organizations that use containers in production have a hard time enforcing fine-grained, role-based policies, which are used to regulate user access to container APIs. This is in large part due to the fact that the most popular container platform, Docker, has no native, fine-grained access control. Once you have access to one Docker command, you have access to all of them.

That means your developers, testers, and operations personnel can have access to your production servers, and there is no way to limit what they can and cannot do. That's not a very comforting situation for most enterprises.

You need a policy layer that can express constraints, such as dictating that developers can deploy containers ("docker run") but not list them ("docker ps"), or that testers can deploy and delete containers but not change the Docker daemon configuration. This policy layer should integrate with your organization's identity directories to enable role-based access control.

To control access to a small number of Docker hosts, you can build your own authorization plugin and add that to your Docker daemon configuration. But to enforce role-based access control at scale in an enterprise environment, you need a management layer that enforces policies, integrates with enterprise identity management systems, and produces user access audit trails.
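To make the plugin approach concrete, here is a minimal sketch of the daemon configuration involved. The "authorization-plugins" key is the real Docker daemon setting; the plugin name is a placeholder for whatever authorization plugin you build or buy.

```shell
# Register a (hypothetical) authorization plugin with the Docker daemon.
# "my-authz-plugin" is a placeholder name, not a real plugin.
cat > daemon.json <<'EOF'
{
  "authorization-plugins": ["my-authz-plugin"]
}
EOF
# In production this file lives at /etc/docker/daemon.json, and the daemon
# must be reloaded for the change to take effect. Once registered, the
# plugin is consulted on every Docker API request and can allow or deny it.
```

This is exactly why a management layer matters at scale: each host's daemon only knows to forward requests to the plugin, while the policies themselves (who may run "docker run" versus "docker ps") need to be defined and audited centrally.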

Automate anomaly detection and threat defense in the container runtime

Containers are minimal, declarative, and immutable. These characteristics mean that it is possible to build a reliable baseline for the containerized application. By using this baseline at runtime, you can detect possible anomalies and active threats targeting the application.

However, building the baseline is not a trivial task, especially when you have many containers spinning up and down dynamically.

The only way to do this is to automate the baselining process: determining which ports are open at runtime, which processes will be spawned, which system calls will be initiated, and so on. This level of automation is possible with containerized applications precisely because they are declarative and immutable: you can determine runtime characteristics statically and be assured that they will not change throughout the lifetime of the container.

Once you've built the baseline, you can use capabilities like secure computing mode (seccomp) to detect syscall anomalies and block calls that do not fit the baseline. Central management of seccomp policies and baseline profiles is also extremely important here, and will be the topic of a future article.
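As a rough sketch, a baseline can be expressed as a seccomp profile that denies every syscall by default and allows only those the application was observed to use. The syscall list below is illustrative and far too short for a real application; a real profile would be generated from the automated baselining described above.

```shell
# A minimal, illustrative seccomp profile: deny by default, then allow
# only the syscalls observed in the application's baseline. A real
# baseline would include many more syscalls than these few.
cat > app-seccomp.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    { "name": "read",       "action": "SCMP_ACT_ALLOW" },
    { "name": "write",      "action": "SCMP_ACT_ALLOW" },
    { "name": "open",       "action": "SCMP_ACT_ALLOW" },
    { "name": "close",      "action": "SCMP_ACT_ALLOW" },
    { "name": "exit_group", "action": "SCMP_ACT_ALLOW" }
  ]
}
EOF
# Apply the profile when starting the container, for example:
# docker run --security-opt seccomp=app-seccomp.json my-app-image
```

With such a profile in place, a compromised process that attempts a syscall outside the baseline simply gets an error back instead of a successful call, turning the baseline into an enforcement mechanism rather than just a detection aid.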

Perform regular security audits to prevent image and container sprawl

Containers are flexible and easy to use. Any time you want to make a change, you just build a new image. But this flexibility brings with it the risk of image and container sprawl. It's easy to end up with too many instances of images or containers, some of which may be obsolete and vulnerable. If you don't manage them carefully, vulnerable images or containers could be put into production, putting your business at risk.

Perform regular audits to identify unused, obsolete containers and images, and eliminate them from your systems. So how do you do that?

You can use commands such as "docker ps" and "docker images" to list the containers or images on a host, and "docker logs" to inspect which containers have been active recently. Commercially available tools also offer automatic reporting and auditing capabilities. Using these tools, you can easily see whether sprawl is starting to consume too many system resources and, if so, eliminate unused images and containers from your registries and hosts.
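A simple audit pass along these lines can be scripted with the Docker CLI. The sketch below assumes the "docker" command is on the host's PATH (and degrades gracefully when it is not); it only reports candidates for cleanup, leaving the actual removal as a commented-out, deliberate step.

```shell
# Sketch of a periodic sprawl audit on a single Docker host.
if command -v docker >/dev/null 2>&1; then
    # Dangling images: image layers no tag points at any more.
    docker images --filter dangling=true \
        --format '{{.ID}} {{.CreatedSince}}' > dangling-images.txt
    # Exited containers: candidates for review and removal.
    docker ps -a --filter status=exited \
        --format '{{.Names}} {{.Status}}' > exited-containers.txt
    # After review, reclaim space (uncomment to act):
    # docker rmi $(docker images --filter dangling=true -q)
else
    # No Docker CLI on this host; record an empty audit.
    : > dangling-images.txt
    : > exited-containers.txt
fi
echo "audit pass complete"
```

Running a script like this on a schedule, and feeding the two report files into your ticketing or monitoring system, turns sprawl cleanup from an occasional scramble into a routine audit.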

Nothing can ensure total security, but if you follow these seven best practices, you can feel confident about deploying containers for production use in the enterprise. Do you have additional suggestions? Feel free to post your opinions and suggestions below.