
Kubernetes heralds the cloud-native era

Jason Bloomberg, President, Intellyx

If you haven't been paying attention to the world of enterprise IT infrastructure, you may have missed the sudden rise of Kubernetes to a position of absolute domination.

It seems that containers themselves are still wet behind the ears. But at the Cloud Native Computing Foundation's KubeCon + CloudNativeCon, held in Barcelona, Spain, last month, it was patently obvious that containers are here to stay. Kubernetes has handily won the container orchestrator wars.

Such rapid dominance is unusual. Gray-hairs like me will recall the Internet protocol wars of the early '90s, when the battles among contenders such as NetWare and Token Ring dragged on for years before TCP/IP finally won out.

And let us not forget the UNIX wars of the dot-com era, as vendors positioned one flavor over another until eventually the open-source dark horse, Linux, surprisingly came to dominate.

The main reason TCP/IP, Linux, and now Kubernetes won their respective battles is the fact that widespread agreement on foundational infrastructure technology is good for everyone. But the business advantages of picking a winner don't explain the remarkable velocity that Kubernetes exhibited on the way to the container orchestrator brass ring.

Here's how Kubernetes is heralding a new era in systems architecture: cloud-native computing.

A happy convergence

We can attribute Kubernetes' rapid ascent, in fact, to a confluence of trends. Perhaps the most predictable of these is the maturation of the public cloud—not simply the market dominance of the big cloud players, but the widespread acceptance and understanding of core cloud best practices. These include horizontal scalability, resilience, and self-service configurability via declarative representations and APIs.

The second trend that contributed to Kubernetes' victory: DevOps. There are, in fact, two sides to DevOps: First, the organizational transformation as technical teams learn better ways to collaborate in order to deliver and run better software, faster than previously possible.

The second side to DevOps is a broad set of tooling that automates many of the tasks that app dev and ops teams must conduct. This is tooling that itself participates in the same API-centric, declarative configurability that it inherits from the cloud.


Cloud-native as new architectural model

Bridging the maturation of cloud best practices and the dual roles of DevOps is perhaps the most important trend of all: cloud-native architecture. Cloud-native architecture builds on both cloud and DevOps best practices, taking them beyond the cloud itself to all of enterprise IT.

As it turns out, the best way to get started with cloud-native architecture happens to be implementing Kubernetes, although cloud-native runs the gamut from traditional virtualization to containers to serverless computing.

In fact, cloud-native is more than an architectural approach. It represents a lens through which we can see the entirety of enterprise IT in a new light. For this reason, I consider it to be a new architectural paradigm.

The precursors to cloud-native architecture

Cloud-native architecture didn’t spring forth fully formed out of nothing. Many architectural trends that came before helped teach the lessons the industry needed to learn to make cloud-native a reality.

In the 2000s, organizations deployed service-oriented architecture (SOA), whose implementations typically depended on sophisticated middleware. These enterprise service buses (ESBs) handled a variety of tasks, including integration, routing, data transformation, and security, while typically exposing application functionality as web services.

SOA could therefore expose lightweight, language-independent service endpoints by shifting the intelligence to the middleware. This is a pattern we now like to call "smart pipes, dumb endpoints."

With the rise of the cloud transforming the role and nature of middleware, coupled with the rise of containers and microservices, SOA eventually gave way to microservices architecture.

Unlike web services that were little more than "dumb" XML-based endpoints, microservices are cohesive, parsimonious units of execution. They're little packages of goodness that do only one or two things, but do them well.

In common parlance, we refer to microservices architecture as "smart endpoints, dumb pipes." Microservices are their own mini-programs, with all the smarts we can cram into them. But to integrate them, we typically use nothing more intelligent than HTTP-based RESTful interactions or lightweight, open-source queuing technology.

Beyond 'smart endpoints, dumb pipes'

Replacing ESBs with "dumb pipes" made sense in the context of the shift from SOA's on-premises context to the cloud-centric world of microservices architecture. But implementation, scalability, and agility challenges remained.

The shortcomings of microservices architecture provided the perfect breeding ground for Kubernetes. In the Kubernetes-fueled, cloud-native architecture paradigm, we have "smart endpoints, smart service meshes."

Service meshes introduce a new approach to integrating microservices endpoints that is entirely cloud-native. Service meshes like the open-source Istio (along with its counterpart, the Envoy service proxy) also enable the discoverability and observability of containers and their microservices.

As a result, service meshes in conjunction with Kubernetes allow the full dynamic and ephemeral nature of containers to support core enterprise concerns of security, management, and integration. These benefits of ESBs from the SOA days are now brought forward to a fully cloud-native architectural paradigm.
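As an illustrative sketch of how a service mesh integrates endpoints declaratively (the service name and traffic weights here are hypothetical), an Istio VirtualService can shift traffic between versions of a microservice without touching the endpoints themselves:

```yaml
# Hypothetical Istio VirtualService: send 90% of traffic to v1
# of a "reviews" microservice and 10% to v2 (a canary rollout).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Routing logic that an ESB would once have owned now lives in declarative mesh configuration, while the microservices stay unaware of it.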

What cloud-native architectures are missing

Ironically, the best way to understand the paradigm-shifting power of cloud-native architecture is to highlight what's absent from it: Cloud-native is codeless, stateless, and trustless.

I don't mean to say that you don't have to deal with state information or write code, and you can certainly trust some things. Rather, these three "-lesses" characterize core cloud-native principles.

By codeless I mean that Kubernetes is configurable and extensible, but there's no call to customize it. Operators handle configuration via YAML files (among other declarative techniques), giving vendors plenty of opportunity to build user-friendly configuration tooling. Even the various "flavors" of Kubernetes—and there are several—all share a single codebase.
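To make the codeless point concrete, here is a minimal, hypothetical Deployment manifest (the name, image, and replica count are illustrative): the operator declares the desired state, and Kubernetes reconciles reality toward it without anyone writing custom orchestration code:

```yaml
# Hypothetical Deployment: declare three replicas of a web
# container; Kubernetes continuously drives the cluster toward
# this desired state. Configuration, not customization.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```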

Containers are also inherently stateless, a necessary side effect of their inherent ephemerality. After all, you wouldn't want to store data in one if it could disappear at a moment's notice.

State management

Kubernetes must handle state information—both persistent data in databases and file systems, and shorter-lived application state in caches.

To accomplish such state management in a stateless environment, Kubernetes follows cloud-native architectural principles by abstracting storage via codeless principles and exposing such stateful resources by way of APIs. This approach allows for whatever availability and resilience the organization requires from its persistence tier, without requiring the containers themselves to be stateful.
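A minimal sketch of this abstraction (names and sizes here are hypothetical): a PersistentVolumeClaim requests storage through the Kubernetes API, the cluster binds whatever backing volume satisfies the claim, and the container itself remains stateless and disposable:

```yaml
# Hypothetical PersistentVolumeClaim plus a pod that mounts it.
# The pod asks for storage declaratively; the cluster supplies
# a matching volume, so data outlives any individual container.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```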

The third of the "-lesses"—trustlessness—is an essential characteristic of modern cybersecurity. You can no longer rely upon perimeter security to provide trusted environments. Instead, you must assume that all parts of the network are untrusted, and every endpoint must establish its own trust.

You shouldn't be surprised that Kubernetes calls for trustless interactions. Microservices endpoints are dynamic, and service meshes abstract them, so it's essential for such abstracted endpoints to take care of their own security. Trustlessness, in fact, is one of the main reasons why service meshes are so important to cloud-native architectures.
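As one hedged example of how a mesh enforces trustless interactions (the namespace here is illustrative), an Istio PeerAuthentication policy can require mutual TLS between all workloads, so that no endpoint implicitly trusts the network:

```yaml
# Hypothetical Istio PeerAuthentication policy: require mutual
# TLS for all workload-to-workload traffic in this namespace.
# Every endpoint must prove its identity; none trusts the wire.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT
```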

Your role in the Kubernetes universe

Cloud-native architectures leverage cloud and DevOps best practices to deliver codeless, stateless, and trustless infrastructure that supports the full breadth of modern enterprise infrastructure requirements. Kubernetes is at the center of the story. It's no wonder it has become the central technology of the cloud-native architecture paradigm.

Infrastructure engineers should understand the importance of architecture to the Kubernetes story. Without it, the entire Kubernetes landscape has the appearance of a mélange of miscellaneous projects and components.

IT and business executives need not concern themselves with the trees, but must certainly understand the forest that is cloud-native architecture. Enterprise IT is undergoing a top-to-bottom transformation, and leaders won't be able to understand the challenges of digital transformation unless they properly support the bedrock upon which such a transformation rests.

And if you're an architect, your role is as important as ever—perhaps even more so. The challenge for you is coordinating all the architecture efforts in your organization. Cloud-native architecture is essentially infrastructure architecture. But application, solution, and enterprise architecture must all work together for your organization to achieve success with cloud-native architecture in today's digital era.

The Cloud Native Computing Foundation partially covered Jason Bloomberg’s expenses at KubeCon + CloudNativeCon, a standard industry practice.
