
Building serverless- and container-based applications: 7 trends to watch

Dean Hallman, Founder and CTO, WireSoft

Serverless technology and container orchestration—especially via Docker and Kubernetes—are transforming cloud computing. And the fun is just beginning, since the impact of these technologies will likely reverberate through the industry for years to come. But while many IT operations managers are talking about individual technologies and announcements, few recognize the underlying shift taking place or how it could affect their jobs as IT Ops management professionals.

It's important to read between the lines of the product announcements. The practical insights below should help you navigate the rapidly evolving landscape of cloud computing.

Seven key trends in cloud computing have been set in motion by Amazon Web Services' (AWS) Lambda, which lets you run code without having to provision or manage servers. Kubernetes is beginning to leverage these trends, and your company should at least pay attention to them, if not incorporate them into its own roadmap.
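To make the function-as-a-service model concrete, here is a minimal sketch of a Lambda handler in Python: you supply only the function, and the platform handles provisioning, scaling, and per-invocation billing. The event shape shown is illustrative, not a fixed schema.

```python
# A minimal Lambda handler: the entry point AWS invokes on each event.
# There is no server to provision, patch, or scale.
def handler(event, context):
    # "event" carries the trigger payload; this particular shape is
    # hypothetical and depends on what invokes the function.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}
```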

1. The cloud is evolving from operating system aggregator to the new OS

In the early years of AWS, Amazon spent most of its AWS R&D cycles turning hardware procurement and management into a software problem; the introductions of Amazon's S3 object storage and Elastic Compute Cloud (EC2) are the clearest examples of that. As a foundation for its infrastructure-as-a-service (IaaS) offering, AWS built out a cloud equivalent of the traditional hardware abstraction layer (HAL).  

Having completed that, AWS progressed up the stack, focusing next on building out a complement of quality-of-service (QoS) components and associated APIs, including queuing and notification systems, a system registry in DynamoDB, a scheduling subsystem, system-wide logging, and more.  

Finally, in 2015, AWS Lambda reached general availability. With it, Amazon filled in the final piece of the world's first distributed cloud OS, complete with a HAL, QoS components, a software development kit, and a runtime. And while an operating system textbook from the 1990s might not characterize AWS as an OS, or EC2 and S3 as part of a HAL, the definitions have changed.

What makes AWS Lambda (and its counterparts on other cloud platforms) so significant is that it has transformed how people think about the cloud's value proposition and whom that value proposition targets. In positioning itself to capitalize on big data, the Internet of Things (IoT), machine learning, and other changes rippling through the industry, Amazon recognized the need to graduate its wares from platform-as-a-service (PaaS) to toolchain-as-a-service (TCaaS) offerings. 

Figure 1: The distributed cloud operating system, function-as-a-service (FaaS) and toolchain-as-a-service (TCaaS) platform. Source for all figures: Dean Hallman, All Things Open Talk, 2016.  

PaaS offerings such as Amazon's Elastic Beanstalk are tightly coupled, vertically integrated software stacks provided as turnkey foundations on which to build applications. TCaaS offerings are loosely coupled, horizontally integrated cloud applications that codify how and when various independent software systems should collaborate to solve some business need. (Integration platform as a service, or IPaaS, is related to TCaaS, but the latter is generally lower-level and more developer-oriented.) AWS Lambda and the events that trigger it arose as a means of building the glue logic to lace big data, the IoT, and future toolchains together (see Figure 1).
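As a hedged illustration of that glue logic, the sketch below wires an S3 bucket to a Lambda function with boto3, so that every object written under one prefix triggers downstream processing. The bucket name, function ARN, and prefix are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Subscribe a Lambda function to S3 write events under one keyspace.
# (The function must separately grant s3.amazonaws.com permission to
# invoke it, e.g., via lambda's add_permission API.)
s3.put_bucket_notification_configuration(
    Bucket="example-onboarding-data",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:ingest",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": "onboarding/"},
            ]}},
        }],
    },
)
```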

With the arrival of cloud-based toolchains and the requisite infrastructure to support them, AWS is evolving from an aggregator of sub-operating systems to a provider of the OS itself. Instead of building and deploying applications in the cloud, you can now build and deploy applications on the cloud. Other cloud vendors have since introduced similar function-as-a-service (FaaS) offerings, thereby extending this trend across a range of cloud providers.

The implications of this shift in value proposition and target audience aren't just tactical; they're strategic. Initially, most companies did not recognize the strategic implications, and they have continued to work with the cloud, both structurally and procedurally, in traditional ways.

2. Silos are breaking down, and devs are retooling

With the arrival of the cloud as OS, the siloed DevOps model (explained below) that was prevalent up until 2015 is now outmoded. In the traditional model, companies organize themselves into engineering, infrastructure, and QA departments, and the infrastructure team is responsible for dealing with most things related to AWS, Microsoft's Azure, Google Cloud Platform, and so on.

For example, infrastructure teams are usually responsible for building out the provisioning needed for Amazon's Virtual Private Cloud, EC2 instances, Amazon's Elastic Block Store volumes, and similar resource requirements.

Once provisioned, those resources pass over to the engineering team, which uses them as deployment targets. Engineering has historically been less knowledgeable about, and less concerned with, AWS's product portfolio, since AWS's products historically comprised components that turned systems administration functions into software problems.

In this model, DevOps meant, at least in part, the software that a system administrator wrote to provision the resources needed to meet engineering's requirements. Administrators often used infrastructure-as-code solutions such as Amazon's CloudFormation, Red Hat's Ansible, or HashiCorp's Terraform.
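For readers newer to infrastructure as code, here is a minimal sketch of that provision-and-pass pattern using CloudFormation through boto3. The stack name, instance type, and AMI ID are placeholders, not recommendations.

```python
import json

import boto3

# A tiny CloudFormation template: one EC2 instance for engineering to
# use as a deployment target. All identifiers here are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
            },
        },
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="engineering-app-servers",
    TemplateBody=json.dumps(template),
)
```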

Prior to AWS Lambda, the relatively clean divisions between engineering and infrastructure, and the provision-and-pass model that governed their interaction, remained largely intact and defensible. But that all changed in 2015, when the cloud redefined itself as an OS.

The siloed DevOps model

Ironically, it was the siloed DevOps model, which was entrenched in the enterprise in 2015, that contributed to the slow uptake most enterprises experienced in recognizing the significance of AWS Lambda. That was because many professionals who were plugged into AWS product announcements and portfolio expansions were primarily concerned with infrastructure-related value-add.

But here, for the first time, AWS Lambda was targeting an audience primarily of developers, who were generally less attuned to Amazon's ever-expanding product family. This presented a dilemma for teams using the siloed DevOps model. 

Here's what I mean by "siloed DevOps."

On the one hand, you had the infrastructure team, with access to many pieces of core technology, tasked both with provisioning production resources and with guarding those resources against service disruptions over time.

On the other hand, you had developers who could not fully embrace and understand a new OS without having unfettered access to explore its capabilities. Imagine building a Mac or Windows app without knowing that the mouse event dispatch loop was a built-in core feature; you'd likely spend time building a less-than-ideal solution.

In other words, developers building software for an OS that they were not allowed or inclined to explore fully would spin their wheels reinventing core features of the OS they didn't know were available out of the box.

To remedy this, developers should stop regarding AWS and other cloud platforms as the domain of the infrastructure team, and employers should stop restricting developers from having the access they require to educate themselves.

The cloud as OS yields a new class of solutions

Once developers stop regarding the cloud as infrastructure and start regarding it as an OS with capabilities relevant to their jobs, a new class of solutions presents itself.

For example, in 2015 I was working on a big-data platform when a new requirement arose. The clients could receive new or corrected data that arrived late—days or weeks after the fact. To fix this, they wanted to replace their manual processes with an automatic rewind that would recognize late arrivals, ingest the data, and recalculate any reports that were affected.

Initially, the team was considering traditional solutions to this problem, such as provisioning an EC2 instance running a REST API, allowing a sender to register late-arriving data, and scheduling the requisite processing. But upon learning of AWS Lambda, we decided instead to write a single Lambda function that listened for S3 write events within our data onboarding keyspace.
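The sketch below is a hedged reconstruction of that pattern, not the client's actual code: one handler fires on every S3 write in the onboarding keyspace, flags late arrivals, and kicks off reprocessing. The key layout, date convention, and helper function are assumptions.

```python
import datetime
import urllib.parse

LATE_THRESHOLD = datetime.timedelta(days=1)

def handler(event, context):
    # S3 delivers one or more write records per invocation.
    for record in event["Records"]:
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Assume keys embed a business date: onboarding/2017-06-01/trades.csv
        business_date = datetime.datetime.strptime(key.split("/")[1], "%Y-%m-%d")
        if datetime.datetime.utcnow() - business_date > LATE_THRESHOLD:
            rewind_and_recalculate(key, business_date)

def rewind_and_recalculate(key, business_date):
    """Hypothetical helper: re-ingest the late data and recompute any
    reports the new data affects."""
```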

The cloud was expanding the set of solutions I could contemplate as a developer; its role was expanding from being the easel to becoming the paintbrush and the color palette as well. This was the moment when I first recognized that the siloed DevOps model was breaking down and that AWS Lambda was forcing another expansion in the meaning of the term DevOps.

3. DevOps is being redefined—again 

The answer to the siloed DevOps dilemma is still taking shape, but some themes are starting to emerge.

Safer exploration inside a 'blast radius'

Principal among the emerging themes is the recent concept of "blast-radius containment." In other words, give developers a sandbox where they can learn, experiment, and blow stuff up while ensuring that they can't accidentally take out production in the process.

During AWS re:Invent 2016, Amazon outlined a strategy for limiting blast radius. In addition, some serverless tools and libraries provide minimal support for blast-radius confinement through sub-accounts and/or AWS's Identity and Access Management permission boundaries. 
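As a hedged sketch of blast-radius containment, the example below creates an IAM permissions boundary that caps a sandbox user at a few services; anything granted later is still limited by the boundary. The policy and names are illustrative, not a recommended security baseline.

```python
import json

import boto3

iam = boto3.client("iam")

# Cap the sandbox at a few services; everything else is implicitly denied.
boundary = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["lambda:*", "s3:*", "logs:*"],  # sandbox services only
        "Resource": "*",
    }],
}

policy = iam.create_policy(
    PolicyName="developer-sandbox-boundary",
    PolicyDocument=json.dumps(boundary),
)

# Any permissions later attached to this user are intersected with the
# boundary, so a sandbox mistake can't reach production services.
iam.create_user(
    UserName="dev-sandbox-user",
    PermissionsBoundary=policy["Policy"]["Arn"],
)
```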

Unfortunately, multiple-account solutions are sometimes difficult to justify. Many managers are reluctant to take on another accounts payable just so developers can have a sandbox.

I've seen engineering managers put engineering team AWS accounts on their personal credit cards because the business was unwilling to open additional accounts. I've also seen developers slap down their own plastic and incur small monthly charges with cloud providers because they understand that they need to know the cloud as an OS, even if their manager or employer does not.

Layering of dev and ops responsibilities

Another emerging theme in DevOps is to restructure the division of responsibilities between infrastructure and engineering as a more layered, or hierarchical, approach.

In this model, guarded environments such as production remain under the strict purview of the infrastructure team. But less guarded environments, such as dev and staging, have more relaxed permissions, affording fuller control to engineering teams. Conceptually, this is rather obvious—the real challenges lie in extending the traditional provision-and-pass methods to be more collaborative, hierarchical, and blast-radius-confining.

Figure 2: Trickle-down DevOps (a.k.a. collaborative DevOps). 

In my talk at the 2016 All Things Open conference, I characterized this change as "trickle-down DevOps" (see Figure 2). I've since renamed it "collaborative DevOps," since the earlier siloed DevOps model is not truly collaborative—at least not between the engineering and infrastructure teams, where provision-and-pass has been the norm.

In collaborative DevOps, infrastructure and engineering teams use a common toolchain, methods, and key/value stores. This helps both teams specify, provision, deploy, and wire up a spectrum of target environments with permission levels governing which teams, team members, and continuous integration (CI) environments have full access to each target environment.
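Here is a minimal sketch of what such permission tiers might look like, assuming hypothetical team names and environments; a real implementation would encode this in IAM policies or the shared toolchain rather than in application code.

```python
# Hypothetical permission tiers: deploy rights widen as you move away
# from production, while infrastructure retains control of guarded
# environments. Teams, stages, and rules here are illustrative.
ENVIRONMENTS = {
    "dev":        {"deploy": ["engineering", "infrastructure", "ci"]},
    "staging":    {"deploy": ["engineering", "infrastructure", "ci"]},
    "production": {"deploy": ["infrastructure"]},
}

def can_deploy(team: str, environment: str) -> bool:
    """Gate a deployment request against the environment's tier."""
    return team in ENVIRONMENTS[environment]["deploy"]

assert can_deploy("engineering", "staging")
assert not can_deploy("engineering", "production")
```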

Modernizing DevOps to be a more collaborative workflow between infrastructure, QA, and engineering groups within the enterprise is a large topic that's beyond the scope of this article.  (Cloudbox, an open-source project I started, aims to deliver on the promise of collaborative DevOps for serverless- and container orchestration-based applications.)

4. Best practices for app configuration and staging are evolving

The oft-cited 12-Factor App—a methodology for building SaaS apps—gets a lot of things right. But the recommendations on configuration are sub-optimal, especially in the context of serverless computing, where keeping application settings in environment variables reduces flexibility and reinforces the silos that can stifle serverless application development.

While better solutions have started to emerge that are more compatible with the requirements of serverless computing, progress is needed on this front. And since application configuration and staging are critical points of baton-passing among infrastructure, QA, and engineering teams, the solution will play a key role in the ongoing evolution of DevOps. 
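One such emerging alternative, shown here as a hedged sketch assuming a hypothetical parameter-path convention: keep stage-scoped settings in a hierarchical key/value service such as AWS Systems Manager Parameter Store and resolve them once per container, rather than baking them into environment variables.

```python
import boto3

ssm = boto3.client("ssm")

def load_config(app: str, stage: str) -> dict:
    """Fetch every setting stored under /<app>/<stage>/ as a flat dict."""
    path = f"/{app}/{stage}/"
    config = {}
    paginator = ssm.get_paginator("get_parameters_by_path")
    for page in paginator.paginate(Path=path, Recursive=True, WithDecryption=True):
        for param in page["Parameters"]:
            config[param["Name"][len(path):]] = param["Value"]
    return config

# Resolved once per cold start, not frozen into the deployment package.
CONFIG = load_config("bigdata-pipeline", "staging")
```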

AWS CloudFormation and the AWS Serverless Application Model (SAM) provide some guidance here. CloudFormation's support for StackSets and cross-stack references hints at a better direction, but packaging these settings within AWS's CloudFormation infrastructure-as-code service makes these improvements less readily available outside the infrastructure silo.

In addition, the Serverless Framework has implemented a rich variable passing-and-expansion language in a recent version, which represents a step forward in building a more collaborative application configuration methodology.

The Cloudbox project also aims to address this need. A Cloudbox is a hierarchical key/value storage framework that is stage- and permission-bounded. It collects the stage-specific configuration key/value pairs from a range of sources, both infrastructure- and engineering-oriented, and drives those settings into your serverless or non-serverless application. 


Figure 3: The Cloudbox application staging and configuration model.

A Cloudbox supersedes the notion of a stage by allowing one Cloudbox to be linked to others for ingress and egress purposes. A stage becomes just the final Cloudbox in this linked arrangement.

In addition to the configuration advantages of this model, it also allows the longer-lived Cloudboxes, which manage persistent storage endpoints, to be owned by the infrastructure team, while the more transient, stateless, or computational Cloudboxes remain primarily an engineering concern.

And as illustrated in Figure 4, these transient, stateless Cloudboxes are loosely connected to "storage" Cloudboxes through a layer of indirection. As a result, each Cloudbox can be easily upgraded, recycled, and hot-reconnected back to production Cloudboxes, with no disruption to customer-facing UIs.

Figure 4: Cloudbox chaining and application staging.

5. The serverless framework space is maturing, with new serverless app architectures emerging

When Amazon first released AWS Lambda, the offering included only the core FaaS feature. DevOps concerns, including automated deployment, configuration, security, and code sharing, were left as an "exercise for the reader." So third-party, open-source frameworks sprang up to fill this need.

Now there are many frameworks in this category, each with its own merits, but the most popular are Chalice (based on Python) and the Serverless Framework. And while this category has been maturing rapidly, until recently most frameworks lacked the security, CI support, and DevOps compatibility to be viable for use within an enterprise.

Your own resources might have to fill the void, for now

Back in 2016, I was building a serverless application for a client, using the Serverless Framework as a foundation. One major problem was that Version 0.5.6 of the framework recommended that you create an IAM user with full admin permissions to the entire AWS account.

The framework authors hadn't yet isolated which specific IAM permissions were required by the framework. But clearly, giving full admin privileges to AWS Lambda and its deployment scripts would have been a non-starter—the infrastructure team would have laughed me out of the room.

Moreover, the infrastructure team was responsible for establishing and disseminating IAM permissions, but it couldn't do that in a serverless framework that it knew nothing about and that conflicted with its own DevOps toolchain.

I ended up using my personal AWS account and spent many hours isolating every permission and resource the framework used, narrowing its requirements down to a minimal set. I then contributed my findings back to the community.
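To give a flavor of that narrowing exercise, here is a hedged illustration of a deploy policy scoped to the services a serverless framework typically touches. The exact action list varies by framework version and project, so treat this as a starting point rather than the definitive set.

```python
# An illustrative deploy policy in place of full admin. In practice,
# narrow "Resource" down to specific ARNs as well.
DEPLOY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "cloudformation:*",  # the framework deploys via stacks
            "lambda:*",          # create and update the functions
            "apigateway:*",      # wire up REST endpoints
            "s3:*",              # deployment artifact bucket
            "logs:*",            # per-function CloudWatch log groups
            "iam:GetRole",
            "iam:PassRole",      # hand the execution role to Lambda
        ],
        "Resource": "*",
    }],
}
```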

But the problems only compounded when I tried to add CI support to the project, since the framework wasn't compatible with my client's Jenkins environment. Meanwhile, the infrastructure team insisted that if I just used Kubernetes, I could sidestep all these concerns, because they'd handle the permissions and continuous integration requirements in their silo. 

Framework limitations and options

In more recent versions, the Serverless Framework team has done a great job of addressing these deficiencies. But this experience illustrated for me that not only did serverless frameworks have a lot of room to mature, but they also could do more to streamline and simplify engineering's integration and interaction with the infrastructure team and its work products.

The Cloudbox project grew out of my recognition that serverless computing architectures and their cloud-as-OS presentation were challenging the silos between engineering and infrastructure in new ways.

AWS announced its own entrant into the serverless framework category, the AWS Serverless Application Model (SAM), in late 2016. SAM is different from other frameworks in the category in that it expands the platform, rather than working around absent capabilities in AWS.

More specifically, AWS SAM extends CloudFormation to support describing and deploying serverless applications.

But AWS SAM doesn't obviate the need for third-party frameworks. All of the third-party serverless frameworks, such as Chalice and Apex, have the option to replace their custom deployment features with an AWS SAM implementation, because AWS SAM overlaps their feature sets in this area. Beyond deployment, however, their value-add remains mostly intact and unchallenged by AWS SAM.

For example, Chalice, the Python-based framework that makes it easy to build REST APIs using API Gateway and Lambda, is entirely complementary to AWS SAM. And while the Serverless Framework does overlap somewhat with AWS SAM, it takes serverless computing into cross-cloud and multi-cloud directions that are well beyond the scope and business motivations of AWS SAM.
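To show why Chalice is complementary rather than competitive, here is a minimal sketch of a Chalice app: decorated Python functions become API Gateway routes backed by Lambda, and the framework handles packaging and deployment. The app name and route are illustrative.

```python
from chalice import Chalice

app = Chalice(app_name="hello-serverless")  # hypothetical app name

@app.route("/hello/{name}")
def greet(name):
    # Chalice maps this function to an API Gateway route backed by
    # Lambda and serializes the return value as a JSON response.
    return {"message": f"Hello, {name}"}
```

Running `chalice deploy` packages the function and provisions the API; a SAM-based deployment could, in principle, replace that step without touching the application code.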

Programming options

One final area of growth in the serverless framework category is the set of programming paradigms and reusable libraries used in building serverless functions. For example, given the function granularity of serverless applications, should you use an object-oriented approach, a functional programming approach, or some combination of the two? And what framework capabilities can you derive from or incorporate into your serverless functions at runtime?

This is another area where Cloudbox offers some answers. It introduces a new programming paradigm, a hybrid between functional and object-oriented programming that builds on the advances of Functional Reactive Programming frameworks such as RxJS, but simplifies and extends the programming model.

This hybrid combines functional and object-oriented programming in an entirely new way, mixing in aspect-oriented programming, dependency injection, functional promises, intelligent retry, and railway-oriented programming. The result is a single, unified methodology that is both elegant and simplifying. It's a new way to build software that maps particularly well to serverless, cloud computing, and collaborative DevOps use cases.
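Cloudbox's actual API isn't shown here; the sketch below merely illustrates two of the named ingredients in plain Python: intelligent retry as a decorator, and railway-oriented composition, where each step either continues with a value or short-circuits the error down the rest of the chain.

```python
import time
from functools import wraps

def retry(attempts=3, backoff=1.0):
    """Retry a flaky step with simple exponential backoff."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise
                    time.sleep(backoff * 2 ** i)
        return wrapper
    return decorator

def bind(result, step):
    """Railway-style bind: run the step only if the track is still 'ok'."""
    ok, value = result
    return step(value) if ok else result

# Two toy steps; each returns (ok, value).
def validate(data):
    return (True, data) if data else (False, "empty payload")

def enrich(data):
    return (True, {**data, "enriched": True})

result = (True, {"raw": "payload"})
for step in (validate, enrich):
    result = bind(result, step)
```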

6. Vendor lock-in fears are resurfacing, but cloud-agnostic solutions are appearing

While the pros of serverless computing outweigh the cons, some developers and organizations are understandably concerned with the vendor lock-in implications of building serverless applications.

On one hand, serverless is based on an event publication/subscription model, a design pattern intended to reduce tight coupling. On the other hand, the serverless handlers triggered by those events typically make use of provider-specific APIs, HAL, and QoS components, which is where vendor lock-in can occur.

Given that all major cloud platforms have now introduced FaaS offerings with capabilities similar to AWS Lambda, it should be feasible for you to migrate your serverless applications to a different cloud OS, as long as you limit your dependence on provider-specific services from the outset.

This situation is reminiscent of the desktop OS battles of the '80s and '90s. Enterprise application builders at the time wanted to avoid both tying their products too tightly to Windows, Mac OS, or Unix and constraining applications to a least-common-feature set that would cripple functionality.

Back then, products such as Wind/U, from Bristol Technologies, and Mainsoft's MainWin emerged, offering a means of avoiding vendor lock-in without imposing feature limitations.

In a case of history repeating itself, several products have since emerged that are modern counterparts to Wind/U and MainWin. WalmartLabs' OneOps, for example, bills itself as "one design, any cloud." OneOps is a comprehensive solution, but it requires some commitment to its architecture to take full advantage.

There are software design patterns and libraries out there that can help you minimize and localize cloud dependency in order to achieve an architecture that you can migrate with substantially less effort. Today, the Serverless Framework is the library that comes closest to delivering on the goal of a less invasive, software-abstraction approach to cross-cloud and multi-cloud support.
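As a hedged sketch of that pattern (the class names here are hypothetical), handlers can code against a small storage interface so that only one adapter module knows about S3; migrating clouds then means writing one new adapter rather than rewriting every handler.

```python
import abc

import boto3

class ObjectStore(abc.ABC):
    """The narrow interface the rest of the application codes against."""

    @abc.abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abc.abstractmethod
    def get(self, key: str) -> bytes: ...

class S3ObjectStore(ObjectStore):
    """The only module that imports a provider-specific API."""

    def __init__(self, bucket: str):
        self._bucket = bucket
        self._s3 = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()
```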

Cloudbox is a more recent entrant in this category and may be used exclusively or in combination with other serverless frameworks. 

7. A hybrid model is appearing: Serverless container orchestration

A new model I call "serverless container orchestration" is emerging. It offers a new approach to achieving a cloud-agnostic solution. If there is one trend in cloud computing that rivals the significance of serverless, it is the rise of container orchestration solutions, with Kubernetes at the forefront. Currently, serverless products are usually cheaper and more immediately deliverable. But as the scale and throughput of your application increase, the ROI lines will eventually cross, and at some point, Kubernetes will become more cost-effective.

The problem is, how do you make the transition from serverless to container orchestration without starting over? Is a middle ground feasible—a kind of serverless/container orchestration hybrid?

Initially, these solutions appeared to be competitive and mutually exclusive. But it's not turning out that way. New projects, such as IronFunctions, are pointing the way toward a future where serverless becomes a spectrum of possibilities, with AWS Lambda at one end and Kubernetes at the other.

In fact, AWS Lambda and the configuration values that constrain it are becoming a kind of accidental standard that serverless container orchestration products can use to build a Lambda-like environment on any cloud.

History can lend context here. In both the desktop OS wars of the '80s and '90s and the browser wars of the '90s through today, the industry encountered the problem of unevenness. One browser or OS supported one feature set, while another supported an overlapping, but distinct, feature set.

To deploy an application across a range of platforms with uneven capabilities, people began using a methodology that the JavaScript community dubbed "polyfill." As with uneven drywall, the target platforms had hills and valleys that needed to be filled in and smoothed to support a single-source application on top.

For example, in the case of Wind/U, some years ago Bristol had to custom-build certain print driver functionality that was absent from Unix to support features, such as a print preview, that are standard on Windows.

With its built-in load balancing, failover capabilities, and multi-cloud support, Kubernetes makes for an excellent polyfill. While Kubernetes is not truly serverless, its management, scaling, and recovery capabilities have the potential to limit ongoing maintenance long enough to offer a reasonable approximation.

Any ongoing maintenance that's necessary can be easily peeled off and centralized with infrastructure specialists who possess a good working knowledge of Kubernetes but less knowledge of the application specifics.

For these reasons, a Kubernetes-based approach can level up cloud platforms that lack certain QoS components, including the FaaS runtime itself. And projects such as IronFunctions, built on Containership, which in turn is built on Kubernetes, exemplify this polyfill approach recurring in a new context: the distributed, multi-cloud OS.

Be prepared for both technical and cultural shifts

Driven largely by the rising tides of big data, the IoT, and machine learning, the last two years have brought about a refocusing of cloud computing. New architectures, SDKs, and runtime components have combined to form a new breed of distributed OS. Principal among these new architectures are serverless and container orchestration technologies such as AWS Lambda, Docker, and Kubernetes. Each is transformative individually, but they may prove revolutionary when combined.

Hybrid serverless container orchestration tools, such as IronFunctions, have the potential to reduce or eliminate vendor lock-in. They can also provide a more flexible architecture, allowing you to redeploy from a single source base to more cost-effective runtimes as app traffic increases.

But the impact isn't just technical. There's also a cultural shift prompted by the recent evolution of cloud computing, as engineering teams take a more hands-on approach with cloud SDKs and serverless runtimes.

Frameworks have sprung up over the last two years to codify best practices and design patterns for building serverless applications. But for the most part, they've done little to address the evolving requirements of DevOps or to facilitate a more dynamic and collaborative interaction between the engineering, infrastructure, and QA silos that contribute to the traditional DevOps culture.

Looking ahead, expect to see new frameworks and architectures in 2018 that attempt to directly address and facilitate a more collaborative DevOps workflow for building serverless- and container-based applications. In addition, expect serverless container orchestration solutions to become more pervasive in cloud computing over the next three years, as people seek no-compromise approaches that let them build once and run anywhere.

The stage is set for the next round of battles and advances in cloud computing, and serverless and container orchestration strategies will play a key role in deciding the winners and losers. 
