Containers 2.0: Why unikernels will rock the cloud

Clouds have come to dominate the mindset of IT. The promises of business agility, maximized resource utilization, and flexible infrastructure have grabbed the imaginations of CIOs across the world. The opportunity to immediately adjust your infrastructure to the needs of your business is seen as a route to success.

Unfortunately, there is still one critical piece missing.

Despite all the attention given to cloud orchestration systems such as OpenStack, CloudStack, and OpenNebula, very little attention has been paid to the workloads placed into these clouds. Most clouds are loaded with software that is almost identical to what IT ran in the days before cloud. We need workloads that are smaller, faster, and, most importantly, more secure than they used to be.

That's where unikernels come in. Unikernels are ultralight application images requiring minimal resources. You can start them almost instantaneously, and they reduce the attack surface for malicious hackers. They greatly improve the agility of your cloud services, while simultaneously raising the bar on security. But before exploring how unikernels work, let's consider why they are so critical for the future of computing.


Slow, obese systems drag down clouds

I have been working with clouds since before “cloud” was coined as an industry term (one company I worked for used to refer to what we now call "cloud" as “agile infrastructure”). From the dawn of the cloud until now, the focus has been on orchestration, and for good reason. In the pre-cloud era, the data center was a static thing. Systems administrators knew which server was hosting which workload. Nothing moved of its own volition.

But in the cloud, workloads must move with the needs of the moment, so the focus has been on creating cloud orchestration systems such as OpenStack to convert the data center into an agile entity. The results have been dramatic, doing much to realize the dream of a cloud that quickly adapts to the needs of business.

Until recently, however, the industry has expended little effort to make the workloads that run in the cloud as agile as the cloud itself. Most workloads in the cloud look exactly like the machine images that used to occupy our static data centers. They have target applications layered on top of bulky, general-purpose operating systems such as Linux or Windows, which are designed to perform a wide spectrum of functions that go well beyond the needs of the target applications. These workload images require many gigabytes of storage and memory and can take a minute or more just to start up.

These fat, slow images bog down clouds. Clouds appear to have unlimited resources to users, but people in the data center know otherwise. The bigger the images, the more storage and host servers are needed. The more storage and servers you have, the more electricity you need to power and cool them, and the more data center space you'll need to house them. All this costs money—lots of money—and too much time gets wasted waiting for these fat images to be moved around in a supposedly agile cloud.

Security is the 800-pound gorilla

But even more significant than the size and speed of workloads is the issue of security. I was walking in New York City recently when I began looking at the signs of the many large corporations lining the avenue. Then I started mentally checking off which ones had made the news for security breaches in the past couple of years. It was quite a few. The rest—who knows? Maybe we only see the tip of the iceberg, but that tip is getting huge. Federal government sites, major banks, large retailers, and even major hospitals have publicly suffered at the hands of malicious hackers.

The security status quo is deficient. Security is frequently an afterthought that gets precariously tacked on after the workload is constructed. We need a new type of workload, one that incorporates improved security by design.

Why containers don't fit the bill

Containers have made considerable progress in this area. They readily transform many old-school workloads into much smaller, faster images that fit the concept of cloud. However, they still require bolt-on security by way of cgroups, namespaces, and the like, which carve isolated slices out of the beefy, multipurpose operating system kernel that all the containers share. These tools can do some excellent things to improve security, but those capabilities are no longer good enough for systems facing the public Internet.

For example, we know that Security-Enhanced Linux (SELinux) can do some excellent things to secure Linux servers, but we also know that a huge number of production servers have SELinux disabled—even on Linux distributions where SELinux is enabled by default! I long ago lost count of the number of times I have heard someone say, “My boss says to get this server up now. I don't have time to configure and test SELinux on it. I will get to it when I have time.” But there is never enough time to get back to it.

If you intend to defeat the onslaught of malicious hackers, you need more than bolt-on security; you need workloads that raise the level of security by default. That requires a new approach.

The unikernel approach

The concept behind the unikernel is simple: provide just enough software to power the desired application and nothing more. That's it. The unikernel is software minimalism in practice. Here's how it works.

In a traditional software stack, you start with a piece of application software that provides an essential service (such as a web server or a database server). But to make that software function, you place it on top of a generic, multipurpose operating system (such as Linux or Windows), which provides the necessary drivers and functions to make the software work. It's a model the industry has used for decades, dating back to when computers were frightfully expensive and a single machine might have to serve every division in an organization. Because of that history, the software stack contains thousands of utilities that the application does not need, dozens of drivers for hardware that the application does not touch, and kernel support for all sorts of functions that your application doesn't use. You end up with gigabytes of disk space and memory consumed by functions that have no bearing on the application you want to run.

These traditional software stacks allow the application to draw on operating system functions as needed at runtime. But what if you built those functions in at compile time instead? That's what unikernels do.

Unikernels rely on specialized compiler systems to combine application software and operating system support functions at compile time instead of runtime. The result is a single application image that contains everything the application needs to run: all the drivers, I/O routines, and support library functions normally provided by an operating system are built into the executable, while the thousands of utilities, unneeded drivers, unnecessary I/O routines, and unused support libraries are left out. What you get is a single, lightweight virtual machine image that can boot and run the application with no other software present—you simply place the unikernel image into a virtual machine and boot it up.
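As a concrete, hedged illustration, here is roughly what that compile-time assembly looks like with MirageOS; the project layout is hypothetical and the exact commands vary between MirageOS releases.

```shell
# Illustrative MirageOS build sketch (commands vary by release).
# The configure step reads the project's config file and picks a target,
# which determines exactly which drivers and libraries get linked in.
mirage configure -t xen    # target the Xen hypervisor
make depend                # fetch only the libraries the app declares
mirage build               # link app + drivers into one bootable VM image
```

The output is a single bootable image containing only what the application declared; there is no separate operating system to install underneath it.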

Unikernels are smaller and faster 

By considering a smattering of results from a variety of unikernel systems, you can better understand the impact of this concept. MirageOS, one of the most established unikernel projects, has a working domain name server that compiles into just 449 KB. Yes, that's kilobytes—a memory size that many of us have not uttered in the current century. The project also has a web server that weighs in at 674 KB and an OpenFlow learning switch that tips the scales at just 393 KB. The LING unikernel project runs its project website as a unikernel, in about 25 MB of memory. The ClickOS project specializes in network function virtualization (NFV) devices and has produced software devices that can process over 5 million packets per second with a boot time of under 30 milliseconds using a memory footprint of less than 6 MB. A host server that currently holds a dozen full-size VMs could potentially support hundreds, or even thousands, of unikernels simultaneously.

But what about security?

The most crucial—and exciting—facet of unikernels is their impact on security. In a day when even the most professional IT organizations are feeling the bite of evildoers, unikernels change the rules by reducing the attack surface of an application to a fraction of its usual size. Think about it: If you are smart enough (and evil enough) to locate and exploit a bug in a unikernel application, what do you do next? You can't drop into a shell, because there isn't one. You can't call one of the thousands of utility programs to do something nefarious, because they don't exist. You can't even try to look at the password file, since it doesn't exist.

The security of unikernels is not total, but it is much more robust than the status quo.

Someone who cracks a unikernel must be smart enough to break the application, and smart enough to exploit that break without any tools at all. That requires superior skills.

How to debug a unikernel

Because unikernels have no other programs in the VM, there are no debugging utilities present. If you need to debug a failure in a unikernel, you pull logs and artifacts and reproduce the failure on a development system. This works because unikernel compilers behave differently depending on whether you are targeting a test or a production environment. In a testing environment, the executable is built to run on a general-purpose operating system that offers debugging utilities. In a production environment, the executable is a stand-alone VM image.
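With MirageOS, for example, that testing-versus-production difference is just a configure-time target. The commands below are a sketch; flags and output names vary between releases.

```shell
# Debug build: compile the source into an ordinary Unix binary that runs
# as a normal process on the development machine, where standard tools
# such as gdb and strace all work.
mirage configure -t unix
make depend && mirage build    # produces a plain executable

# Production build: the same source, compiled into a stand-alone VM
# image with no shell, no utilities, and no debugger inside.
mirage configure -t xen
make depend && mirage build
```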

Some people claim that unikernels are unsuitable for use, since they cannot be debugged while in production. This is a red herring. I have spent over two decades working as a software services consultant for some of the largest companies and government agencies in the world, and I cannot recall a single time when I was allowed to debug a production system—ever. The standard method was to reproduce the error on a development box, patch the software, and redeploy it to production. That's exactly the same way you debug most unikernels.

Unikernels are a solution, not a panacea

For all their value, it is important to note that unikernels are not a panacea. For one thing, there are some applications that would be impossible to implement as unikernels without serious redesign. But you don't need every application to be a unikernel in order to reap the benefits of this technology. If you create unikernels out of most of the suitable candidates in your data center, particularly those that are web-facing, you can free up massive amounts of capacity that you can then reallocate for other tasks (such as those applications that aren't suitable for unikernels). The impact on security is potentially immense.

Want to know more about unikernels? Plenty of information is available online, or you can post your questions and comments below.
