
7 best practices for securing enterprise container environments

Chenxi Wang Founder & General Partner, Rain Capital

While working with developers to bring containers to production, Security Operations (SecOps) teams often face conflicting requirements: agility and time to market from the development side of the house versus visibility and control from the security side.

But there are best practices that can help you ensure a level of security and control while enabling containers. Here are some tried-and-true methods drawn from actual deployments across Twistlock's customer base, which is actively bringing containers to production.

In this first post of a two-part series, I cover image trust, vulnerability management, and hardening practices. 

1. Know and control the source and content of your images

Containers, at the end of the day, come from images. Your developers may build their own images or download an image from a third party. If your developers build their own images, that does not mean the images are built from scratch with custom code. It is almost always the case that your custom-built image is built on top of some base image, which is an existing image from a third party. For example, you may have an Apache layer on top of an Ubuntu base image—and sometimes a custom Node application on top of that.
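To make the layering concrete, here is a minimal, hypothetical Dockerfile for a stack like the one just described; the image name, package choices, and paths are illustrative, not recommendations:

```dockerfile
# Hypothetical layering example: each instruction adds a layer on top of
# a third-party base image you did not write yourself.
FROM ubuntu:22.04                       # third-party base image
RUN apt-get update && \
    apt-get install -y apache2 nodejs   # Apache (and Node) layered on top
COPY ./app /opt/app                     # your custom application code
CMD ["node", "/opt/app/server.js"]
```

Every layer above the `COPY` line is someone else's code, which is exactly why the sourcing questions below matter.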

Do you know from which source your developers got the third-party libraries or images? Do you know whether they have verified that the libraries or images they downloaded are authentic, up to date, and free of known vulnerabilities? Are your developers downloading them from unknown and potentially harmful sources? 

Most developers optimize for speed and convenience rather than trust and authenticity. Containers make it easy to quickly build, share, and deploy, which is a distinct risk if you don't have a good way to control where images come from and what they contain.

As a security best practice, the first thing you need to do as a SecOps professional is establish a policy, and a means of enforcing it, by which you can ascertain and limit the source of the images and libraries that go into your containers. More specifically, this means a) specifying a list of trusted sources for images and libraries (for example, a specific publisher of images that you trust, or specific registries you have deemed trustworthy); and b) establishing points of control throughout your development and deployment workflows so that only code and images from trusted sources can be used and deployed. Such a control could be a capability within your CI/CD pipeline and/or a gatekeeper function on your production hosts that checks the authenticity and validity of images.
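A registry allowlist is the simplest form of such a gate. The following is a minimal sketch, assuming a hypothetical allowlist and the common Docker convention that bare image names resolve to `docker.io/library`; a production gate would also handle registry ports and image digests:

```python
# Sketch of a registry allowlist gate. The registry names below are
# illustrative examples, not a real organization's trusted list.
TRUSTED_REGISTRIES = {"registry.example.com", "docker.io/library"}

def is_trusted(image_ref: str) -> bool:
    """Return True if the image reference points at a trusted registry.

    'ubuntu:20.04' is treated as Docker's official-image shorthand for
    'docker.io/library/ubuntu'. Ports and digests are out of scope here.
    """
    name = image_ref.rsplit(":", 1)[0]      # drop the tag, if any
    if "/" not in name:
        name = "docker.io/library/" + name  # expand official-image shorthand
    return any(name.startswith(reg + "/") for reg in TRUSTED_REGISTRIES)

print(is_trusted("registry.example.com/app:1.2"))  # True
print(is_trusted("evil.example.net/app:latest"))   # False
```

A check like this can run as a CI/CD step that fails the build, or as an admission hook that rejects the deployment, before an untrusted image ever reaches a host.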

An open-source tool to note is Notary, a Docker project. Notary allows content publishers to sign the content they publish and content consumers to verify the authenticity of the content. For enterprise use, you can automate Notary functions and, more importantly, add runtime enforcement based on Notary verification results. 

2. Eradicate vulnerabilities before container deployment 

As mentioned, a container image is rarely built from scratch; a developer may grab a base image and other layers from third-party sources to construct an image. These libraries and base images may contain obsolete or vulnerable code, thereby putting your application at risk.

When you have upstream control, you can use code-level vulnerability analysis tools to eliminate vulnerabilities before code goes into an image. But with containers you likely don't have complete upstream control, so you will need a way to subscribe to vulnerability information from upstream projects as well as to process code that your developers write natively.

An added complexity is that you will need a vulnerability-scanning function that can parse container image formats, since your developers may pull down a whole image at once. Additionally, this vulnerability management function should incorporate seamlessly into the container build, share, and deployment workflow. As an example, you may want the vulnerability management function to integrate with your CI/CD pipeline tooling or with your own container registry.  

But detecting vulnerabilities is only the first step. You must be in control of what your users are actually deploying; otherwise, vulnerable containers will end up on your production hosts. This means you can't simply stop at vulnerability scanning: you must scan, manage fixes, and enforce vulnerability-based policies. An example of a vulnerability-based policy is to mandate that no image with a certain CVSS score or higher can be deployed into production. To enforce such policies, you must go beyond scanners in development to runtime controls.
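The CVSS-threshold policy mentioned above can be sketched in a few lines. The threshold and scan findings here are illustrative; in practice the findings would come from your scanner's output format:

```python
# Sketch: block deployment when any scan finding meets or exceeds a CVSS
# threshold. The 7.0 cutoff (roughly "High" severity and above) is an
# example policy, not a recommendation.
CVSS_BLOCK_THRESHOLD = 7.0

def violations(findings, threshold=CVSS_BLOCK_THRESHOLD):
    """findings: list of (cve_id, cvss_score) tuples from an image scan."""
    return [f for f in findings if f[1] >= threshold]

def may_deploy(findings) -> bool:
    """An image may deploy only if no finding crosses the threshold."""
    return not violations(findings)

scan = [("CVE-2024-0001", 9.8), ("CVE-2024-0002", 4.3)]
print(may_deploy(scan))                       # False: 9.8 blocks deployment
print(may_deploy([("CVE-2024-0002", 4.3)]))   # True
```

The same predicate can gate a CI/CD stage at build time and an admission check at deploy time, which is what "runtime controls" adds over scanning alone.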

3. Harden container images, daemons, and the host environment

When you run containers, it is imperative that you harden the host environment, the container daemon, and the images in order to reduce your runtime risk. 

For example, one hardening practice is to remove noncritical native services from the production host. That forces access to the host to go through the containers, centralizing control at the container daemon and shrinking the host's attack surface.

Another example is to restrict permissions on the "/etc/docker" directory on the host to "755" or more restrictive. This is because "/etc/docker" contains certificates, key material, and other sensitive files; it should be writable only by "root."
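A check like the "/etc/docker" rule is easy to automate. The sketch below verifies that a directory's mode is no more permissive than 755; it demonstrates on a temporary directory rather than the real "/etc/docker", which would require root and a Docker host:

```python
import os
import stat
import tempfile

def at_most_755(path: str) -> bool:
    """True if no permission bits are set outside rwxr-xr-x (0o755)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & ~0o755 == 0

# Demonstrate on a throwaway directory instead of /etc/docker itself.
d = tempfile.mkdtemp()
os.chmod(d, 0o755)
print(at_most_755(d))  # True
os.chmod(d, 0o777)
print(at_most_755(d))  # False: group/other write bits are set
```

The bitmask test (rather than equality with 0o755) is what makes "755 or more restrictive" precise: 700 passes, 775 fails.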

The hardening practices should cover the whole stack, from the host to the daemon to the containers themselves. The Center for Internet Security (CIS) has published a consensus Docker benchmark, which is widely considered the most comprehensive set of configuration and hardening guidelines for environments that run Docker containers.

It is recommended that you follow and enforce the CIS hardening guidelines. An enterprise may, however, want to determine which subset of the CIS benchmark applies to it and automate the verification that those hardening practices are actually carried out. At the same time, you must enforce policy to ensure that noncompliant containers and daemons are not deployed in your environment and that no containers are deployed onto noncompliant hosts.
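Automating a CIS subset can be as simple as a table of named checks that a host's daemon configuration must pass before workloads are scheduled onto it. This is a minimal sketch; the two checks shown ("icc" and "insecure-registries" are real daemon.json options, but the selection and the pass/fail logic are illustrative:

```python
# Sketch of a minimal compliance harness over a parsed daemon.json dict.
# Each check is a named predicate; deploy only if every check passes.

def check_icc_disabled(daemon_cfg: dict) -> bool:
    # CIS recommends restricting inter-container communication ("icc": false).
    return daemon_cfg.get("icc", True) is False

def check_no_insecure_registries(daemon_cfg: dict) -> bool:
    # No plaintext/unverified registries should be configured.
    return not daemon_cfg.get("insecure-registries")

CHECKS = {
    "icc_disabled": check_icc_disabled,
    "no_insecure_registries": check_no_insecure_registries,
}

def compliant(daemon_cfg: dict) -> dict:
    """Return a pass/fail map for the chosen benchmark subset."""
    return {name: fn(daemon_cfg) for name, fn in CHECKS.items()}

hardened = {"icc": False, "insecure-registries": []}
print(all(compliant(hardened).values()))  # True
lax = {"insecure-registries": ["http://registry.local"]}
print(all(compliant(lax).values()))       # False
```

Running such a harness on every host, and refusing to schedule containers where `all(...)` is false, is one way to enforce the "no containers on noncompliant hosts" policy consistently.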

For SecOps, your responsibility is to establish the verification and enforcement tasks as part of your essential development and deployment processes and, in addition, ensure that these processes are carried out consistently across multiple clouds and multiple data centers. 

By following the best practices detailed here, you will be well on your way to establishing a robust execution environment for your containerized applications. In Part 2, I will build on these practices and expand into active threat protection for the container runtime.

Share your thoughts (and best practices) in the comments below.

