Serverless computing: 5 things to know about the post-container world

At the 2014 AWS re:Invent conference, Amazon Web Services announced a new service: Lambda. Instead of loading your application code into a virtual machine or a container, you upload it into Lambda. There it sits, dormant, until some external event triggers it, whereupon the Lambda service brings your app out of quiescence and executes it. Once the application completes its task, the code is automatically removed from the Lambda service.

Lambda represents a major departure from the mainstream path of computing that stretches back to mainframes and continues through today’s infrastructure-as-a-service (IaaS) cloud computing offerings. All previous forms of computing have required that the application operations team manage the execution environment that runs the application code.

Not with Lambda, which has ushered in the era of serverless computing. Here's a look at how serverless works and what you need to know about it.


Serverless apps in a nutshell

The phrase that describes the new Lambda-style paradigm is “serverless.” Remember that term—you’ll be hearing it a lot in the near future. Even though Lambda has been available as a fully released product for just over a year, it has already generated a great deal of experimentation and even production use. And, in a sign of how promising the technology is, all of the other big cloud providers—including Microsoft, Google, and IBM—have rushed their own versions of Lambda to market.

But what are serverless applications? In the Twitter stream at the recent inaugural Serverlessconf, many snarky observers proclaimed, “It’s not really serverless, is it? After all, the code has to run somewhere.” That’s true. Lambda and its kin don’t get rid of the need for someone to run an infrastructure environment. What serverless does is shift the responsibility for running that environment from the user to the cloud provider—and in that simple sentence a revolution resides.

Moving the responsibility upward

Serverless computing moves the responsibility interface upward, by a lot. We used to say that AWS’s responsibility ends at the hypervisor. It was up to the user to figure out the right instance family and type, load the application code into it, and monitor and manage the application’s virtual machines to ensure that everything stayed up and running.

Containers provide a lighter-weight execution environment, making instantiation faster and increasing hardware utilization, but they don’t change the fundamental application operations process. Users are still expected to take on the lion’s share of making sure the application remains up and running.

With Lambda, AWS takes on the responsibility of making sure that the application code gets loaded and executed, and it ensures that sufficient computing resources are available to run your Lambda code, no matter how much processing it requires.
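To make the model concrete, here is a minimal sketch of what Lambda-style code looks like in Python. The function name and event shape are illustrative, not prescribed by the article; the key point is that you write only the handler, and the platform loads it into a container, invokes it when a trigger arrives, and tears it down afterward.

```python
# A minimal sketch of the serverless programming model. The function name
# "handler" and the event fields are illustrative assumptions; the cloud
# provider, not you, is responsible for loading and running this code.

def handler(event, context):
    # 'event' carries the trigger's payload; 'context' exposes runtime
    # metadata such as the request ID and remaining execution time.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# There is no server process to manage: the platform calls the handler on
# demand. Locally, you can simulate a trigger by calling it directly:
result = handler({"name": "serverless"}, None)
```

Notice what is absent: no web server, no process supervisor, no instance sizing. Everything outside the handler is the provider's problem.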

Containers: Lambda's not-so-secret sauce

In terms of the technical underpinnings of Lambda, at Serverlessconf, Lambda general manager Tim Wagner confirmed what everyone thought: Lambda is itself based on containers. It’s just that AWS takes on the responsibility for loading the code into the container and running it. The limits AWS imposes on the Lambda user include:

  • The code must be less than 250 MB uncompressed, or 75 MB compressed
  • It must run for no more than five minutes
  • It can access no more than 512 MB of ephemeral storage

This last limit points out something quite salient about Lambda: Its fundamental design assumption is that your code can run in a bounded fashion, with no need for significant resources and no need to run for extended timeframes. It’s in and out with Lambda.

3 benefits of serverless computing

So why are people so excited about the serverless paradigm? To my mind, there are three significant benefits.

You can focus purely on your application's functionality and code

The launch of cloud computing decoupled application delivery from infrastructure ownership and management. The cloud made it possible to deploy an application without ever worrying about hardware.

Lambda extends that trend and further lightens the load on application developers. In turn, this lets IT organizations increase their focus on the most important part of IT: the application itself. If you believe that software is eating the world, then sloughing off low-value activities to allow greater investment in high-value application functionality is a huge win for IT, and for the organization it serves.

It increases the efficiency of IT spend

Even though cloud computing eliminated upfront infrastructure investment, everybody knows someone who lets cloud virtual machines run endlessly while doing no productive work. Managing capacity, even virtual capacity, is difficult. Lambda eliminates the need to do that. Instead of operating an execution environment 24/7, Lambda runs code only when needed, triggered by some event that represents a request for computation.
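The efficiency argument can be sketched with back-of-the-envelope arithmetic. The prices below are hypothetical placeholders, not actual AWS rates: the point is structural—an always-on VM bills for every hour of the month regardless of load, while a pay-per-use model bills only for invocations and the compute time they consume.

```python
# Illustrative cost comparison; all rates are hypothetical placeholders.
HOURS_PER_MONTH = 730
VM_PRICE_PER_HOUR = 0.05           # assumed on-demand instance rate
PER_INVOCATION_PRICE = 0.0000002   # assumed per-request rate
GB_SECOND_PRICE = 0.0000166        # assumed per GB-second compute rate

def vm_monthly_cost():
    # The VM bills around the clock, even at sub-20% utilization.
    return HOURS_PER_MONTH * VM_PRICE_PER_HOUR

def serverless_monthly_cost(invocations, avg_seconds, memory_gb):
    # Pay only for requests actually made and compute actually consumed.
    compute = invocations * avg_seconds * memory_gb * GB_SECOND_PRICE
    requests = invocations * PER_INVOCATION_PRICE
    return compute + requests

vm_cost = vm_monthly_cost()
# One million short invocations at a small memory allocation:
fn_cost = serverless_monthly_cost(1_000_000, avg_seconds=0.2, memory_gb=0.125)
```

Under these assumed rates, a million 200 ms invocations cost well under a dollar, versus tens of dollars for the idle-most-of-the-time VM—the shape, if not the exact numbers, of the savings the article describes.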

A good fit for event-driven applications 

Finally, the Lambda paradigm of executing code only when needed is a good fit for an emerging category of event-driven applications that will represent an ever-larger proportion of corporate application portfolios in the future. Event-driven applications may support the Internet of Things (IoT) or user-driven content submission (think Snapchat).

Such applications are event-driven, episodic, and subject to unpredictable traffic patterns. As such, they're a terrible match for traditional virtual machine- or container-focused application architectures. But they're a great match for a platform built around short-duration, single-event transactions.
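A typical event-driven case is reacting to a file upload. The sketch below assumes an S3-style object-upload notification as the triggering event; the processing step is a hypothetical placeholder for whatever work the application does (generating a thumbnail, indexing the object, and so on).

```python
# Sketch of an event-driven handler reacting to object-upload
# notifications. The event layout mirrors the S3 notification format;
# the "processing" is a hypothetical placeholder.

def handle_upload(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code might resize an image or index the object here.
        results.append(f"processed s3://{bucket}/{key}")
    return results

# Simulated notification carrying one uploaded object:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"},
                "object": {"key": "photo.jpg"}}}
    ]
}
```

The handler runs only when an upload occurs and exits as soon as the object is processed—exactly the episodic, unpredictable workload profile that sits poorly on an always-on virtual machine.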

Serverless computing cost savings will be dramatic

People don't yet fully appreciate how much more cost-effective serverless computing can be. For example, serverless architectures let you avoid paying for idle resources, and most IT organizations don't have a handle on how big those costs can be because they have no idea how lightly used their cloud resources are. In my book AWS for Dummies, I included a chapter on AWS costs. I cited statistics provided by Cloudyn, a cloud economics management company, indicating that most virtual machine instances run at sub-20% load. In addition, many companies pay for unused Amazon Elastic Block Store (EBS) volumes.

With serverless, all that wasted money comes back to you. You pay only for what you use and avoid the headache of virtual machine instances and EBS altogether.

So how much can you save on your cloud computing bills? Anecdotally, I’ve heard of situations in which a user's AWS bill dropped by 90%. At Serverlessconf, two presentations on Lambda-based applications indicated that they had not even surpassed the daily free limit on Lambda calls. In other words, they were operating inside the Lambda free tier. This, despite the fact that both presenters were discussing real-world applications with significant levels of use, as opposed to prototype or proof-of-concept efforts.

Perhaps the most astonishing thing about Lambda and its serverless counterparts at other cloud providers is how willing these organizations are to cannibalize their own offerings. It's hard to think of another situation in which a tech vendor voluntarily brought something to market that undercut its current offering's price by 90%.

It’s unprecedented.

Once people comprehend the financial benefits associated with serverless computing, there will be strong growth in interest, experimentation, and adoption.

Welcome to the post-virtual machine, post-container world

Cloud computing has brought enormous change to the world of applications. It has made long-standing constraints on application development and deployment disappear. It’s no exaggeration to say that most of the innovation in IT over the past decade has been enabled, catalyzed, or caused by cloud computing.

We're now on the cusp of another cloud revolution: the move to serverless computing. It promises to change application paradigms even more than cloud computing has done and holds out the tantalizing possibility of moving to a post-virtual machine, post-container world.

Of course, as the saying goes, you can’t make an omelette without breaking a few eggs. Likewise, serverless computing is going to break a lot of existing practices and processes.

In part two of this discussion on serverless computing, I’ll describe the challenges and how IT organizations can think about addressing them so as not to fall short of the potential of serverless computing.
