Since its inception, the serverless model has revolutionized computing—but how has it changed security? Serverless computing platforms let developers build and run applications without managing servers: the provider provisions the infrastructure, scales it automatically, and charges for resources on a per-use basis. This means the underlying infrastructure is not only provisioned and managed but also secured by the serverless provider, so developers don’t need to worry about some of the biggest security risks, such as known vulnerabilities in unpatched servers.
Instead, they can focus on protecting user data, securing third-party integrations, and writing secure application-layer code. This post provides a guide to understanding serverless security, along with the products and best practices developers can use to protect serverless applications against threats.
Shared Responsibility Model
As noted in a recent whitepaper by Trend Micro, “The serverless model is regarded as relatively more secure than other cloud models because [the provider] takes care of the underlying infrastructure, the operating system, and the application platform.” However, while serverless significantly reduces developers’ security responsibilities, it doesn’t mean they can disregard security entirely. Serverless uses a shared responsibility model: providers protect many backend components, such as the OS and infrastructure, while customers are responsible for securing what they still control, including application code, data, and integrations.
This is similar to other cloud service models, but with even more security responsibility shifted to the provider, since customers of those models still need to manage their applications’ containers and, in the case of infrastructure-as-a-service, their operating systems.
How Serverless Changes Security
Serverless Architecture
In addition to changing the way application resources are managed and paid for, the serverless model enables a new way of building applications through serverless functions: event-driven pieces of code that run discrete units of business logic and are connected by APIs. Serverless functions are stateless, short-lived, and ephemeral, meaning that they execute when they are called and no data is stored on the server between invocations.
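As a rough, provider-agnostic illustration (the exact event shape and handler signature vary by platform, so the names below are assumptions), a serverless function is a small, stateless handler that receives an event, runs its piece of business logic, and returns a result:

```typescript
// A minimal, provider-agnostic sketch of a serverless function.
// The event shape and handler signature are hypothetical; each
// platform (AWS Lambda, Azion Edge Functions, etc.) defines its own.
interface OrderEvent {
  orderId: string;
  amount: number;
}

interface HandlerResponse {
  statusCode: number;
  body: string;
}

// Stateless: everything the function needs arrives in the event,
// and nothing is persisted in the execution environment afterward.
export async function handler(event: OrderEvent): Promise<HandlerResponse> {
  if (!event.orderId || event.amount <= 0) {
    return { statusCode: 400, body: "Invalid order" };
  }
  // Business logic typically calls other services via APIs
  // (e.g. a payment API or managed database) rather than local state.
  return { statusCode: 200, body: `Processed order ${event.orderId}` };
}
```

Because nothing persists locally, any state the function needs must arrive with the event or be fetched from an external service.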
This application architecture presents both opportunities and challenges for developers. As noted by researchers studying the security of serverless functions, “serverless imposes strict limits on the time available to the adversary to retrieve sensitive data from functions or to move laterally to perform more sophisticated attacks.” In other words, new instances are constantly created when functions are invoked and eliminated after a period of inactivity, so a compromised execution environment does not stay compromised for long. Attackers must repeatedly re-establish their foothold, which gives them much less leverage and gives security teams more opportunities to detect them. However, the ephemeral and stateless nature of serverless functions can make them difficult to monitor, and the increased reliance on third-party services can create new threats if APIs aren’t inspected for vulnerabilities and securely integrated.
Securing Servers and Data
Arguably the biggest way in which serverless changes security is by reducing the number of tasks that developers need to worry about. Not having to manage OS patches is a huge win for serverless customers, since older, unpatched server vulnerabilities remain a major factor in cyber attacks, as noted in Verizon’s 2021 Data Breach Investigations Report. With serverless providers managing this task, the risk of unpatched servers is low. However, removing this threat doesn’t mean cybercriminals will go away. With fewer server vulnerabilities to exploit, attackers are incentivized to seek out misconfigurations and vulnerabilities in applications and their code.
In addition, as an article on the application architecture site InfoQ explains, just because functions are stateless does not mean that data is completely secure. Instead, sensitive data may be stored outside the server, changing the way that data must be secured: “The data is at risk when transferred, is likely to persist longer and be accessible to more machines, and if the data store is compromised, more users will be impacted at once.”
Denial-of-Service
Finally, serverless computing changes the way companies experience denial-of-service (DoS) attacks. These attacks aim to knock servers offline by bombarding them with compute- or memory-intensive traffic. Serverless offers a degree of built-in resistance to DoS attacks, since applications scale automatically to accommodate added traffic. However, large-scale and distributed denial-of-service (DDoS) attacks can still generate enough traffic to overwhelm capacity, target the DNS or network layer, or drive up bills through increased resource use. To protect against these threats, serverless customers should consider additional measures, such as rate limiting and DDoS protection plans.
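As a minimal sketch of the rate-limiting idea (the window size, request limit, and client identifier are assumptions, and a production setup would keep counters in a shared store such as a managed cache, since function instances are ephemeral):

```typescript
// A minimal fixed-window rate limiter sketch. In a real serverless
// deployment the counters would live in a shared store, because each
// function instance is ephemeral and an in-memory Map is not shared
// across instances.
const WINDOW_MS = 60_000;   // 1-minute window (assumed value)
const MAX_REQUESTS = 100;   // per client per window (assumed value)

const counters = new Map<string, { windowStart: number; count: number }>();

export function allowRequest(clientId: string, now = Date.now()): boolean {
  const entry = counters.get(clientId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(clientId, { windowStart: now, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```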
Best Practices for Serverless Security
Applications
With a clear understanding of how serverless changes security, we can start to consider best practices for securing serverless applications. For starters, because attackers’ focus shifts from servers to applications and their underlying code, applications need strong protection, such as a web application firewall (WAF), which can defend against application-layer attacks, including the OWASP Top 10 vulnerabilities.
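A WAF filters malicious requests before they reach your code, but application code should still validate what it receives. Below is a small sketch of allowlist input validation; the field name and pattern are illustrative, not tied to any particular framework:

```typescript
// Generic allowlist validation for a user-supplied identifier.
// The pattern and field are illustrative assumptions.
const ID_PATTERN = /^[A-Za-z0-9_-]{1,64}$/;

export function parseUserId(raw: unknown): string {
  if (typeof raw !== "string" || !ID_PATTERN.test(raw)) {
    // Rejecting early avoids passing attacker-controlled strings into
    // queries or commands (common OWASP Top 10 injection vectors).
    throw new Error("Invalid user id");
  }
  return raw;
}
```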
And since serverless allows developers to spend more time focusing on their code, it’s all the more important to prioritize good coding practices, like writing “pure” functions that avoid side effects such as mutating external variables, which make testing and debugging much more difficult. In addition, developers should foreground security early in the CI/CD process, incorporating a strong code review process with automated inspection for known vulnerabilities.
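For instance, a minimal contrast between an impure function that mutates shared state and a pure equivalent:

```typescript
// Impure: mutates shared state, so the result depends on hidden context
// and is harder to test and debug.
let total = 0;
export function addToTotal(amount: number): number {
  total += amount; // side effect
  return total;
}

// Pure: the output depends only on the inputs; nothing outside is mutated.
export function add(current: number, amount: number): number {
  return current + amount;
}
```

Because the pure version depends only on its inputs, it can be tested in isolation with no setup or teardown.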
Third-Party Packages and APIs
Although third-party packages make code easy to deploy, failing to carefully inspect them can give attackers a foothold into your application. Serverless expert Jeremy Daly recommends the following safeguards for secure integration of third-party services (a minimal sketch of endpoint authentication and access control follows the list):
- check all dependency chains for third-party packages;
- avoid packages with lots of dependencies;
- use an authentication method for every endpoint;
- don’t rely solely on API keys if you allow users to modify data; and
- build access control lists to limit what average users can do with your APIs.
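As a rough sketch of the last three items, here is a hypothetical authorization check that requires a verified credential on every endpoint and consults an access control list before allowing data to be modified. The token scheme and endpoint names are illustrative; a real application would use a standard mechanism such as OAuth 2.0 or signed JWTs rather than static tokens or API keys alone.

```typescript
// Minimal per-endpoint authentication plus an access control list.
// Token verification is stubbed with a lookup table for illustration.
type Role = "reader" | "editor";

const tokensToRoles = new Map<string, Role>([
  ["example-reader-token", "reader"], // illustrative values only
  ["example-editor-token", "editor"],
]);

// ACL: which roles may call which endpoint.
const acl: Record<string, Role[]> = {
  "GET /orders": ["reader", "editor"],
  "POST /orders": ["editor"], // only editors may modify data
};

export function authorize(token: string | undefined, endpoint: string): boolean {
  if (!token) return false;
  const role = tokensToRoles.get(token);
  if (!role) return false;
  return (acl[endpoint] ?? []).includes(role);
}
```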
Access and Authentication
When it comes to securing access to applications, one of the most crucial policies serverless security teams can adopt is the least-privilege principle. This means that each process, program, or user should only have access to the information and resources needed to achieve its designated purpose. In doing so, least privilege limits the damage that malicious actors can do if they gain access to your system. It also improves system stability and simplifies troubleshooting, as it limits the potential interactions between different functions or modules.
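As an illustration, a least-privilege permission policy for a single function might look like the following sketch, modeled loosely on cloud IAM policy documents; the exact schema differs by provider, and the action and resource names are hypothetical:

```typescript
// Illustrative least-privilege policy for one function: the function
// that writes orders gets write access to one table and nothing else.
// Modeled loosely on cloud IAM policies; names are hypothetical.
export const createOrderFunctionPolicy = {
  effect: "Allow",
  actions: ["database:PutItem"], // only the operation it needs
  resources: ["orders-table"],   // only the data it touches
};

// By contrast, a policy like { actions: ["*"], resources: ["*"] } would
// let a compromised function read or modify anything in the account.
```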
In addition, authentication tools should be used to verify authorized users, and logging practices should be put into place to enable the monitoring of stateless functions. Not only is it important for developers to use console.log for events like failed logins, account modifications, and database interactions where records are inserted, changed, or deleted; they must also take care to ensure their logs and alerts do not expose sensitive information, such as plaintext passwords, that attackers could use to gain unauthorized access.
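For example, a small sketch of logging a failed login without exposing credentials (the field names are illustrative):

```typescript
// Record the security event without leaking secrets into logs or alerts.
interface LoginAttempt {
  username: string;
  password: string; // never log this field
  sourceIp: string;
}

export function logFailedLogin(attempt: LoginAttempt): void {
  // Log only what is needed to investigate: who, from where, when.
  console.log(
    JSON.stringify({
      event: "login_failed",
      username: attempt.username,
      sourceIp: attempt.sourceIp,
      timestamp: new Date().toISOString(),
      // The password is deliberately omitted so logs never contain
      // plaintext credentials.
    })
  );
}
```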
Data Security
As previously noted, serverless applications that store data outside the server must be particularly careful about securing that data, as it is vulnerable in transit and will be accessed by more machines. As a result, it’s very important that serverless developers take steps to encrypt data in transit. In addition, a blog post on the engineering site Distillery.com notes that developers should protect data in serverless applications by taking steps such as the following (a brief encryption sketch follows the list):
- encrypting sensitive persistent data;
- encrypting sensitive off-box state data;
- keeping the functions that can access each data store to a minimum;
- implementing granular access permissions for access to data repositories; and
- using analytics tools to monitor which functions are accessing which data.
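As a minimal sketch of encrypting sensitive data before it leaves a function, the example below uses the Web Crypto API, which is available in modern edge runtimes and recent Node.js versions; in practice, the key would come from a managed secret store rather than being generated per call:

```typescript
// Encrypt a payload with AES-GCM via the Web Crypto API.
// Assumes the runtime exposes the global `crypto` object.
export async function encryptPayload(plaintext: string): Promise<{
  ciphertext: ArrayBuffer;
  iv: Uint8Array;
  key: CryptoKey;
}> {
  // In production, load the key from a managed secret store instead.
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true,
    ["encrypt", "decrypt"]
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // unique per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { ciphertext, iv, key };
}
```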
Serverless Security with Azion
Azion’s Edge Computing Platform provides integrated, full-stack security to help developers build and secure serverless applications. Our serverless compute product, Edge Functions, provides built-in security using V8’s sandboxing capabilities to keep each function secure and isolated.
In addition, our security solution, Edge Firewall, enables the creation of zero-trust security policies designed for modern applications and infrastructure. Its benefits include:
- Robust and performant protection against application-layer attacks through Web Application Firewall’s scoring-based algorithms
- Intelligent rule sets that let you block, rate-limit, or allow requests based on your business’s customized needs
- Integration into your SIEM, Big Data, or analytics solution for real-time threat intelligence and monitoring
- Up to 5 Gbps DDoS protection for all Azion users at no additional cost, with 20 Gbps, 50 Gbps, and Unlimited plans available for protection against the largest attacks
- Fine-grained permissions for authorization and authentication using our Real Time Manager control panel
- SSL Certificates at no additional charge for securely transferring data over HTTPS
To find out how Edge Firewall can improve your serverless application’s security, contact our sales team or set up a free account and start building custom security rules today.