

Hackers love Docker: Container catastrophe in 3, 2, 1...

Richi Jennings, your humble blogwatcher, dba RJA

The day we all feared would come has come. Docker and Kubernetes containers are revealed to be badly vulnerable—along with LXC, Mesos, and several other container flavors.

An easily exploited flaw means a container can escape its paper-thin walls and execute on the host system—as root. Time to audit your trust boundaries.

Happy Valentine’s Day, DevOps peeps. In this week’s Security Blogwatch, we drop everything and patch.

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: V-Day chox zen.

K8S h8s runC vuln

What’s the craic? Steven J. Vaughan-Nichols sums all fears—Doomsday Docker security hole:

One of the great security fears about containers is that an attacker could infect a container with a malicious program, which could escape and attack the host system. … How bad is this? As bad as you can imagine.

Besides runC … the problem can also attack container systems using LXC [or] Apache Mesos. [So] if you're running any kind of containers, you need to patch ASAP.

This has the potential to seriously damage your systems.

So, uh, patch now, right? Christine Hall effects an agreement: [You’re fired—Ed.]

The container bug, CVE-2019-5736, [is in] runC, the underlying container runtime for Docker, containerd, Kubernetes, cri-o and other container software, which means that nearly everyone running containers is affected. [The] escalation of privileges includes root access, which is more than enough to ruin any sysadmin's day.

A patch for the vulnerability has already been released. … Container users shouldn't put off applying the patch until their next scheduled maintenance update, however, as an in-the-wild exploit will probably come sooner rather than later.

Who discovered it? Here, Adam Iwaniuk and Borys Popławski be dragons:

Our goal was to compromise the host environment from inside a Docker container running in the default or hardened configuration. … We have achieved full code execution on the host, with all capabilities (i.e., … root access level), triggered by either running “docker exec” from the host on a compromised Docker container, [or] starting a malicious Docker image.

Despite Docker not being marketed as sandboxing software, its default setup is meant to secure host resources from being accessed by processes inside of a container. [But] it’s possible to disable all of [its] hardening mechanisms, [which] makes it possible to easily escape the container.

There are several mitigation possibilities when using an unpatched runc:
  1. Use Docker containers with SELinux enabled (--selinux-enabled). This prevents processes inside the container from overwriting the host docker-runc binary.
  2. Use read-only file system on the host, at least for storing the docker-runc binary.
  3. Use a low-privileged user inside the container, or a new user namespace with uid 0 mapped to that user (that user should not have write access to the runc binary on the host).
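Those three mitigations translate roughly into the commands below. This is a sketch only: the flag names match 2019-era Docker packages, the docker-runc path varies by distro, and none of this replaces patching runc; it only blunts this particular overwrite vector.

```shell
# 1. Label containers with SELinux (daemon-level flag):
dockerd --selinux-enabled

# 2. Keep the runc binary unwritable (or on a read-only filesystem);
#    the path is distro-specific; /usr/bin/docker-runc on some 2019 packages:
chmod a-w /usr/bin/docker-runc

# 3a. Run container processes as a low-privileged user:
docker run --user 1000:1000 some-image

# 3b. ...or remap uid 0 into an unprivileged host range via user
#     namespaces (then restart the Docker daemon):
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
```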

It’s like swimming upstream. Aleksa Sarai announces CVE-2019-5736:

I am one of the maintainers of runc. … We recently had a vulnerability reported which we have verified. [I] discovered that LXC was also vulnerable.

The vulnerability allows a malicious container to (with minimal user interaction) overwrite the host runc binary and thus gain root-level code execution on the host. … This vulnerability is *not* blocked by the default AppArmor policy, nor by the default SELinux policy [with] the "moby-engine" package … on Fedora.

I have been contacted by folks from Apache Mesos who said they were also vulnerable. … It is quite likely that most container runtimes are vulnerable to this flaw, unless they took very strange mitigations. … After some discussion with the systemd-nspawn folks, it appears that they aren't vulnerable.
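The underlying primitive is worth seeing. CVE-2019-5736 abuses /proc/[pid]/exe, the kernel's "magic" symlink to a process's own executable: when runc joins a container's namespaces, a process inside can open runc's /proc/[pid]/exe and end up holding a handle that reaches the host binary. This sketch demonstrates only the magic link itself, on any Linux shell, not the exploit:

```shell
# /proc/$$/exe is a magic symlink to the binary of the current shell.
readlink "/proc/$$/exe"     # e.g. /usr/bin/bash

# Opening it through the link reaches the real file, not a copy:
head -c 4 "/proc/$$/exe"    # the ELF magic bytes of the shell binary
```

The eventual runc fix re-executes from a sealed in-memory copy of its own binary, so the link no longer leads back to the host file.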

Who else? Christian Brauner curses the Privileged Container:

I’ve been working on a fix for this issue over the last couple of weeks together with Aleksa. … I was interested in the issue for technical reasons and figuring out how to reliably fix it was quite fun (with a proper dose of pure hatred).

[You] could say a privileged container is a container that is running as root. However, this is … wrong. [It's] a container where the semantics for id 0 are the same inside and outside of the container [all else being equal,] because using LSMs, seccomp or any other security mechanism will not cause a change in the meaning of id 0 inside and outside the container. … The reason why I like to define privileged containers this way is that it also lets us handle edge cases.

Docker containers run as privileged containers by default. … What the --privileged flag does is to give you even more permissions. … One could say that such containers are almost “super-privileged”.

Privileged containers are not and cannot be root safe. … Running untrusted workloads in privileged containers is insane.

Let this recent CVE be another reminder that unprivileged containers need to be the default.
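Brauner's "same semantics for id 0" definition has a concrete kernel-level reading: the uid map of the process's user namespace. A quick check (assumes Linux; the values printed depend on how the process was started):

```shell
# Outside any user namespace -- and inside a default Docker container --
# the map is the full-range identity mapping, i.e. privileged semantics:
cat /proc/self/uid_map      # "0  0  4294967295"

# In an unprivileged container, uid 0 inside maps to a harmless host
# uid range instead, e.g.: "0  100000  65536"
```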

And this Anonymous Coward goes further:

Containers … are not a strong enough security boundary to safely isolate untrusted code. They never have been, and anybody that told you otherwise is either lying or clueless.

Containers are super convenient, and a great way to manage the deployment of your software, and you should use them -- just not to protect mixed-trust workloads … from each other. If you want to run code from sources that you don't trust, isolate it in a separate VM.

And this one points fingers:

It's not a strong enough security boundary, but it damn well should have been. There's no problem doing this in FreeBSD and Solaris.

Frankly, the Linux community should be embarrassed that such a fundamental system facility was implemented in a botched and useless manner.

Any other blame to game? Junta concludes unfettered code-reuse should be considered harmful:

My dread: when a project's download instructions give only 'docker pull', then almost universally the software is **** held together by duct tape that they could barely get to run once. The preceding fad was the 'virtual appliance', where the same thing held: developers lacking the competency to design their software to work sanely in a well-controlled environment.

Attempts to use such software have non-trivially more frequent incidents of falling over, with no one on earth able to figure out what went wrong, because they never unified how the software produces debug output, and they spew it out in random places through third-party code the vendor doesn't really understand. They also tend to ship decrepit versions of dependencies, dating back to when they first did a 'docker pull' at the start of development, never bothering to refresh them since.

Sure, there will be examples of well maintained software and people doing the right thing in dockerhub, but there are a lot of crappy things there too.

While we’re piling on the container criticism, DeVilla bandwagonifies:

Docker is a mess because it was originally developed in a way that served the interests of Docker Inc.: the single local namespace of images, the poor default implementation of a remote registry, the ability to search images only in dockerhub.

It wasn't designed to support secure isolation. That was bolted on later and needs continual patching.

And another Anonymous Coward blames—to coin a phrase—fake news:

I've been reading many Docker vs. VM posts lately, and I'm astonished.

It's as if no one writing the articles has ever even used any modern hypervisor (much less being aware of their actual limitations or constraints). … Most of the statements about overhead and constraints (and performance) of VMs … are just plain wrong. … If you read these, you would think traditional hypervisors are cumbersome, inefficient, and non-performant beasts, and Docker is some sort of holy grail.

It's just weird. Who's making this **** up??

However, DarkOx urges us not to throw the baby out with the bathwater:

Containers have a place. … I can upgrade each container (and easily revert to a … snapshot of it if things go wrong). … I can test and resolve any issues … one service at a time.

Containers solve that problem well. Oh and they let me do it all on a little low power ARM system; good luck doing that with full VMs.

On the other hand, a lot of people are using containers to avoid patching and updating anything, ever—and yes, that is going to lead to terrible security problems. … In a commercial setting … you need a mature DevOps organization (i.e., not some guy who said, hey, this Docker thing is cool, let's toss it in) where IT Security has input.

The moral of the story?

Don’t buy in to the myth that containers give you isolation. Check your privilege.

And finally …

Oddly Satisfying Valentine's Day Chocolate:


You have been reading Security Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi or sbw@richi.uk. Ask your doctor before reading. Your mileage may vary. E&OE.

Image source: Martin Vorel (cc:0)
