Securing serverless apps: What IT Ops needs to know
Serverless designs are the latest trend in cloud applications. The term itself is controversial, and there is no single clear definition. Peter Sbarski's comes closest to what we need, but I prefer a simpler one:
Serverless applications use services and APIs to do most of the work. Your custom code brings these services together to solve a specific problem.
This isn't a new concept. It's the latest iteration of a concept the community has been chasing since the 1970s. The term "serverless" was first mentioned in 2012. It's only now that cloud services have matured to a point where it's practical.
There is a lot going for serverless right now. There is a community forming around these designs. Major cloud service providers such as Amazon Web Services (AWS), Microsoft, and Google are pushing the concept. Large enterprises are even jumping on board.
Serverless is real, and it's here now.
But what about security? Are these designs more secure than traditional ones? How do you implement security with these designs? Can you secure these designs? Here's what IT Ops needs to know.
All cloud services work under the "shared responsibility model." The premise of the model is simple: the provider and user share the day-to-day responsibilities for the service. The trick is determining where the division of responsibilities occurs.
As a user, your first responsibility is to understand how this model works, and how it applies to the services you're using. Remember, this covers both operations and security.
Let's look at the core of any serverless design, a functions-as-a-service offering—services such as AWS Lambda, Microsoft Azure Functions or Google Cloud Functions.
These services would fall under the "SaaS/abstract" category. The cloud service provider is going to take care of the physical layer right up to the application layer. You—the user—are only responsible for the data you put into the service. In this case, the data is your application code or function.
After you've determined that the service is an appropriate way to handle your data, you have to look at what security controls you can apply. The nature of the service is going to limit the controls you can apply. In fact, the more "serverless" your application, the less you can do to impact its security posture.
That's not a bad thing, but it does mean that we need a new approach for security.
Get your data on the map
The first place to start securing a serverless application is to map out how data is flowing. The good news is that you've already done a lot of this work when you designed your application.
Taking a security lens to your application design, list each set of data and its relative sensitivity. Look for items like personally identifiable information (PII), financial information, credentials, and session information. This can be anything that might be sensitive on its own, or that could be combined with other data to reveal something sensitive.
Now for each item on the list, determine the level of risk that is appropriate for your business. Then figure out what security controls would help reduce the risk to that level or lower.
Once you know the type of data you're processing, and the level of risk you're comfortable with, it's time to start mapping. For each set of data, list the services in your application that process that data, and the available security controls for the service.
Do the controls match your risk appetite? Are there other controls that you can add into the design to reduce the risk? Are there alternative services that have a better set of controls for this data?
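One way to make this concrete is to encode the map as data your team can review, and even check automatically. Below is a minimal sketch in Python; the data sets, service names, and control names are all hypothetical examples, not a prescribed schema:

```python
# A minimal data map: each entry records a data set, its sensitivity,
# the services that touch it, and the controls each service applies.
# All names here are hypothetical.
DATA_MAP = [
    {
        "data": "user PII",
        "sensitivity": "high",
        "services": {"api-gateway": ["TLS"], "user-db": ["encryption at rest"]},
        "required_controls": {"TLS", "encryption at rest", "access logging"},
    },
    {
        "data": "session tokens",
        "sensitivity": "medium",
        "services": {"auth-service": ["TLS", "short TTL"]},
        "required_controls": {"TLS", "short TTL"},
    },
]

def missing_controls(entry):
    """Return the required controls not yet applied by any service."""
    applied = {c for controls in entry["services"].values() for c in controls}
    return entry["required_controls"] - applied

for entry in DATA_MAP:
    gap = missing_controls(entry)
    if gap:
        print(f"{entry['data']}: missing {sorted(gap)}")
```

Even a toy version like this turns "do the controls match our risk appetite?" from a one-time meeting into a question you can re-ask every time the design changes.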
For some services, adding more security is as simple as turning on an offered feature. Other times, you need to apply a little more work (for example, when you're using custom encryption keys). Regardless of the level of effort, you know it's appropriate for the data and your business.
Because you have a data map and have assessed the risks, you know that you are applying the right controls at the right time in the data's lifecycle.
Quality code counts
Your data map should have a big red flag around the code that you're writing to build your application. It's the biggest wildcard because it's new and untested in production. Remember, bugs come from somewhere. Your goal is to make sure they aren't coming from your code.
The debate around code quality is as old as programming itself. Regardless of who defines quality, one thing is clear: simpler is better. This is especially true in a security context.
A number of tools are readily available to help you simplify and improve your code. Source code analyzers, linters, and test frameworks are great tools, but they start to break down in serverless environments.
These tools are designed to analyze a complete program or a large section of a program. In serverless designs, external services make up a large part of your program. What's left is spread between discrete functions. This lack of context significantly reduces the effectiveness of traditional code quality tools.
Despite these caveats, you should still use these tools. They will flag some basic issues with your code. These tools automate the discovery of buffer overflows, code injection, and other common issues we can't seem to stop writing into our code.
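For example, one of the classic issues these tools catch is building queries through string concatenation. Here is a small illustration using Python's standard sqlite3 module; the table and data are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Flagged by analyzers: user input concatenated into SQL (injection risk).
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping, so the
    # payload is treated as a literal value, not as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A crafted input that subverts the unsafe version but not the safe one.
payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row
print(find_user_safe(payload))    # returns nothing
```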
In addition to general code quality, you should be very aware of code dependencies in serverless designs. The more third-party code you use, the harder it is to ensure a consistently high level of quality. This can be especially problematic in Node.js deployments, where dependency trees run deep. Reduce the number of dependencies in your code whenever possible.
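Even a crude dependency budget helps keep this in check. Here is a sketch in Python that audits a hypothetical package.json against an arbitrary limit (both the file contents and the budget are invented for illustration):

```python
import json

# Hypothetical package.json content for a small serverless function.
PACKAGE_JSON = """
{
  "name": "order-handler",
  "dependencies": {
    "aws-sdk": "^2.814.0",
    "left-pad": "^1.3.0",
    "lodash": "^4.17.20"
  }
}
"""

MAX_DIRECT_DEPS = 2  # a deliberately strict budget for illustration

def audit_dependencies(package_json, budget):
    """Return the sorted direct dependencies and whether they exceed the budget."""
    deps = json.loads(package_json).get("dependencies", {})
    return sorted(deps), len(deps) > budget

deps, over_budget = audit_dependencies(PACKAGE_JSON, MAX_DIRECT_DEPS)
print(f"{len(deps)} direct dependencies: {deps}")
if over_budget:
    print("over budget: review whether each dependency earns its keep")
```

A check like this in your build pipeline forces a conversation every time someone reaches for a new package.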
These steps will help reduce the attack surface of your code and will improve the security of your application.
Trust (but verify)
A lot of security teams get hung up on the issue of trust when migrating to the cloud. How much can you trust your cloud service provider? Are they now a threat you need to account for? How can you protect your data from your provider?
By using a cloud service, you are trusting your cloud service provider. It's not a blind trust but there has to be a level of comfort in the relationship. The good news is that cloud service providers live on their reputation.
One of the ways that the providers prove their trustworthiness is via certifications. AWS, Microsoft, and Google are leading the way by pursuing new compliance certifications, and most cloud providers are following suit. Combined with a push toward open communication and full transparency, certifications are a way that you can verify that the provider is meeting their security responsibilities.
Verification and transparency help establish and maintain trust.
Monitor your application
The final stage in securing a serverless application is monitoring usage. General application monitoring is well understood. However, serverless designs introduce some new twists.
The use of multiple services from multiple providers means that your monitoring strategy has to include dependencies. An accurate view of application health and security requires an understanding of which services are interdependent and which are critical.
For example, if you're using a third-party authentication service, that's going to be in the critical path for logging users in. If that service is unavailable, you need to understand what areas of the application will no longer work.
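This dependency analysis can be sketched as a simple graph walk. The features and service names below are hypothetical:

```python
# A hypothetical service dependency graph: each user-facing feature
# lists the external services it needs in order to function.
FEATURE_DEPS = {
    "login": ["auth-service", "user-db"],
    "browse-catalog": ["catalog-api", "cdn"],
    "checkout": ["auth-service", "payment-api", "order-queue"],
}

def impacted_features(down_services):
    """Return the features that stop working when the given services are down."""
    down = set(down_services)
    return sorted(f for f, deps in FEATURE_DEPS.items() if down & set(deps))

# If the third-party auth service goes dark, so do login and checkout.
print(impacted_features(["auth-service"]))
```

Keeping a map like this current is what lets you answer "what breaks?" in seconds rather than during the outage.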
Most monitoring tools are not set up to examine these types of dependencies. That's a challenge that still needs to be addressed in the serverless community. In the meantime, you can use the time saved by not needing to monitor basic service health to create custom application metrics. These tailored metrics can focus on higher-level security and operational issues.
As with our previous example, if the user authentication service is down, you can now look for activity after session timeouts and other malicious indicators. Metrics and indicators like this are a more accurate and effective way of gauging the application's security.
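A custom metric like that can start out very simple. Here is a Python sketch, with an invented event log, that flags activity recorded after a session should have expired:

```python
from datetime import datetime, timedelta

SESSION_TTL = timedelta(minutes=30)  # hypothetical session lifetime

# Hypothetical event log: (user, session_start, activity_time) tuples.
EVENTS = [
    ("alice", datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 9, 10)),
    ("bob",   datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 9, 45)),
]

def suspicious_activity(events, ttl=SESSION_TTL):
    """Flag users with activity recorded after their session should have expired."""
    return [user for user, start, seen in events if seen - start > ttl]

print(suspicious_activity(EVENTS))  # bob's activity came 45 minutes in
```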
The extra development work is well worth it, and you move away from generic security alerts to ones that are directly relevant to your business.
Serverless is here to stay
Serverless designs are becoming mainstream. The business advantages are undeniable. The top security challenge remains tackling the myth that "security's taken care of by our providers." Once that's shattered, there is real work to be done to secure these designs.
The goal of security is to ensure that your data is being handled in the manner you expect, and only the manner you expect. In a serverless world, that means mapping out your data, writing high-quality code, trusting (but verifying!) what your provider is doing, and monitoring your application extensively.