
The 8 best open-source tools for building microservice apps

Peter Wayner, freelance writer

Any enterprise IT group that wants to become the business driver its company expects it to be is going to need an open source–first strategy, for two reasons.

First, the licensing models of proprietary software are still tied to perpetual, per-server license fees. Modern apps are distributed, face erratic workloads, and therefore need application resources that start and stop constantly. Paying a per-server license fee kills that approach. While you must still do your homework and understand the ins and outs of the various open-source licensing models, the open-source approach fits this environment far better.

Second, using proprietary software ties the user’s innovation cycle to that of the vendor, completely negating your ability to build your own functionality to address your unique needs.


That's why any IT organization that doesn’t have open source at the center of its technology strategy won’t be able to keep up in a “software is eating the world” marketplace. And as IT organizations begin to move to new application architectures, the reasons to use open-source components will become even more apparent. Microservice applications feature dozens, or even hundreds, of separate component execution environments, and that makes the proprietary software approach even less practical.

So, if your IT organization is implementing a microservices architecture, what are the best open-source components to use? You'll need an operating system, container technology, a scheduler, and a monitoring tool. Here are the best open-source options to consider for each.

Operating system: Choose your micro-Linux

While all Linux distributions support containers (addressed below), which one would be a good choice as the foundation of your application? It might seem that Red Hat or Ubuntu would be the logical options, but here’s something to consider: Both are full-featured OSes that carry lots of functionality you don't need to run in containers. Besides being inefficient (unnecessary functionality uses up more disk and memory and may impact system performance as well), running an operating system with unnecessary features presents more attack surface to hackers.

To address this concern, so-called micro distributions that offer only the required functionality to support containers have come to market. CoreOS, an early offering of this type, focuses on security. But other Linux providers have since developed container-specialized offerings as well. Red Hat has Atomic, a stripped-down variant of its flagship RHEL product. Canonical, the parent company of Ubuntu, has not gone the smaller-Linux route but has included technology called LXD, a container-oriented hypervisor, with its core product. Even VMware has gotten into the small-Linux provider space with its Photon offering. It's optimized to run containers, like Atomic, but it's tuned to operate on top of VMware’s vSphere hypervisor.

Containers: Docker all the way

All that Linux activity makes clear that containers are a big deal. While containers are by no means ubiquitous in corporate IT environments, the direction is clear—containers, a lightweight alternative to virtual machines, are the execution choice of the future. This means you can devote fewer server resources to running the execution environment and more to executing your application code. In other words, using containers means more value-creating processing will go on, since applications are where true user value is created.

When it comes to open-source containers, there’s really only one choice: Docker. It has been around for only a few short years, but in that time Docker has come to represent the future of applications. Docker and microservices are a natural match.

Microservices represent the logical extension of application partitioning and adjusting application resources to load via horizontal scaling. To enable this approach, your execution environments need to instantiate and begin operating quickly. Otherwise, your application may require additional resources to address a spiky load, but due to extended instantiation times, it may be unable to support that load. This results in poor performance, poor response times, unhappy users, and, inevitably, complaints to IT personnel. Fortunately, Docker can bring application components into service within milliseconds, making it possible to rapidly adjust resources in response to demand.
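As a sketch of how this looks in practice, the image for a single microservice can be kept deliberately small so that new instances pull and start quickly under a spiky load. The service name, file names, and base image below are illustrative assumptions, not taken from any particular project:

```dockerfile
# Hypothetical Dockerfile for one microservice; all names are placeholders.
# A minimal (alpine-based) base image keeps the image small, so new
# instances can be pulled and started quickly when load spikes.
FROM python:3.12-alpine

WORKDIR /app

# Install only the dependencies this one service needs.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

# One process per container; the scheduler, not the OS, handles scaling.
CMD ["python", "app.py"]
```

Built with `docker build -t my-service .` and launched with `docker run -d my-service`, an image like this can bring a service instance online far faster than booting a virtual machine would.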

Schedulers: Swarm versus Kubernetes

Micro-Linux forms the foundation of your microservices application, and you've decided to use containers, which offer the best option for an application execution environment. But how does the container get placed on Linux? Remember, in a microservices environment, containers come and go like a Minnesota summer. Each server may run one container, or dozens of them. Containers require a set of supporting services, such as an API server, as well as security measures. Shifting all of those containers on and off a server, making sure they’re connected properly, tracking them so that application upgrades can be rolled out seamlessly—all of that is complicated.

That need leads to container schedulers, sometimes referred to as orchestrators. Many IT organizations are so enthusiastic about containers that they jump right into deployment, putting container-based applications into production. Shortly thereafter, they realize they need something to keep track of their application resources and ensure that they're always running properly. Then they go on a frantic search for a scheduler.

There are many schedulers around, but for most IT organizations the choice comes down to one of two alternatives: Swarm or Kubernetes. Swarm comes from Docker itself, so one might expect it to be the obvious choice for orchestration. However, Kubernetes is very popular and has built the larger ecosystem of engaged vendors. For example, two popular application frameworks, Red Hat's OpenShift and Apprenda, both use Kubernetes to orchestrate their framework deployments.

Kubernetes originated at Google and reflects the company's long internal use of containers. Inspired by Google's internal container management system, Kubernetes is now directed by the Cloud Native Computing Foundation.
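To make the scheduler's job concrete, here is a minimal sketch of a Kubernetes Deployment. The names, image, and port are hypothetical; the point is that you declare how many copies of a container should run, and the scheduler keeps that many running, replaces containers that die, and rolls out upgrades:

```yaml
# Hypothetical Deployment manifest; service name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                 # the scheduler keeps three copies running
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.0
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, this hands the bookkeeping described above (placement, restarts, rolling upgrades) to the scheduler instead of to operations staff.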

Which scheduler you choose is up to you, but choose one. Container-based application topologies can get complex—and chaotic—a lot faster than most people expect.

Monitoring: Go with Prometheus

One final building block for a microservice application remains: monitoring. It's critical to see exactly what is going on in an application at any given time, and that's even more important, and more difficult, in a highly distributed topology. Most traditional monitoring systems were designed for static application environments with small numbers of nodes; neither assumption holds in a microservices environment. Many traditional monitoring systems have been modified in an attempt to better address microservice monitoring requirements, but they fall short in complex real-world environments.

Fortunately, there is a good open-source solution at hand: Prometheus. Designed by an ex-Google engineer (who has since returned to Google) and inspired by the internal tools that Google uses to monitor its container environments, Prometheus was designed from the ground up for dynamic, distributed application topologies made up of large numbers of nodes.

Prometheus was also designed for operators, who can easily be overwhelmed by a surfeit of information. It uses a convenient graphical interface to help visualize monitoring information and supports time-based tracking so that anomalous patterns can be detected. And Prometheus' query language makes it easy to gather germane monitoring information quickly.
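As a brief sketch of that query language (PromQL), the queries below compute a request rate and an error ratio across services. The metric and label names are illustrative assumptions; real names depend on what your services and exporters expose:

```
# Hypothetical metric names; yours will depend on your instrumentation.
# Per-second HTTP request rate over the last five minutes, by service:
sum by (service) (rate(http_requests_total[5m]))

# Fraction of requests that returned a 5xx error:
sum(rate(http_requests_total{status=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m]))
```

Because queries like these aggregate across whatever instances happen to exist at the moment, they keep working as containers come and go, which is exactly what a dynamic topology requires.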

Prometheus works well with Kubernetes and has found a home with the Cloud Native Computing Foundation. The joint home isn't sufficient reason in itself to use both, but it’s worth considering, given how challenging integrating IT infrastructure tools can be.

Open source is the only way forward

There you have it: a whirlwind tour of the open-source way to implement microservice applications. If you take one thing away from this article, let it be this: If you're considering migrating to a microservice application architecture (and you should be), don't consider anything other than open-source solutions. They're the only hope for tools that will evolve as rapidly as your needs will, and they're the only cost-effective way to deliver the components that microservice applications require.


