You’ve probably heard (many times) that one of the biggest advantages of serverless compute is that you don’t have to manage your application’s infrastructure, freeing you to focus on the business logic without worrying about things like backend security or provisioning servers.
This, however, presupposes that you are not a compulsive worrier about the performance and reliability of your applications.
For some of us, worrying is second nature. If we’re told we don’t have to worry about a certain task because “someone else is taking care of it,” we’ll immediately jump to worrying about the person taking care of it. How are they taking care of the task? Are they doing a good job? Could I do a better job? What if they neglect some small but important issue that results in a huge, unforeseen disaster down the road? Who is this mysterious person, and why should I trust them with this very important task?
In a previous blog post, we talked about why serverless compute is a great choice for 5G applications. Choosing the right vendor for serverless compute is an important decision because it has a huge impact on your application’s security and performance, not to mention the cost of running it. So today, we’re going to take you behind the scenes of our platform and show you exactly how applications run on it, providing some assurance that our products are not only cheaper and easier to use than cloud providers’, but better performing and more secure.
Azion Cells
With Azion Edge Functions, customers can run event-driven serverless functions on our edge network. This allows developers to build edge-native applications or add new functionality to their origin applications. Once a function is deployed, it’s installed across our global network and managed by us. Our team takes care of the infrastructure, providing security, scaling it as needed, and ensuring that when the function is triggered, it is always served to end users by the edge node closest to them.
Under the hood of Edge Functions is a software stack built in four layers. We chose the V8 engine as a widely used runtime whose sandboxing features provide the security that serverless compute requires. Our embedding of V8 is written in Rust and eschews Node.js, the runtime typically used to embed V8, in favor of a solution that is newer, simpler, and more secure. Azion Cells, our core technology, supports a JavaScript runtime and will soon support WebAssembly, allowing developers to write in any language that uses WebAssembly as a compilation target.
While V8 and JavaScript are widely used in other serverless compute and container-based solutions, our choice of Rust and a purpose-built engine is far less common. We believe these choices set us apart from other solutions, providing the best possible performance, resource consumption, and cost-effectiveness — three incredibly important characteristics for next-generation applications.
Rust
Azion Cells is written in Rust, a programming language ideal for applications with strict requirements for performance and resource consumption. Efficiency and simplicity are important to any software developer and organization, and performance is crucial for ultra-low-latency applications, where every millisecond makes a difference. In addition, reliable execution will be vitally important as 5G and MEC (multi-access edge computing) enable more automation and mission-critical devices.
Perhaps the most important advantage of Rust for building core software components is its reliability. Even small bugs can be disastrous in use cases like self-driving cars, but one of the most potentially dangerous classes of issues, memory bugs, can be difficult to find. Because memory bugs are complex and unpredictable, it can be hard to test for them. As a result, they can stay in the code undetected for years, waiting to wreak havoc. Rust guards against this disaster waiting to happen with a strict compiler that checks every variable and memory access to prove the code is correct, preventing engineers from making potentially dangerous mistakes.
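As an illustrative sketch (not Azion code), the snippet below shows the kind of memory bug Rust’s compiler rules out. Once a value’s ownership moves into another function, the old handle can no longer be used, so a use-after-free simply does not compile:

```rust
// Minimal illustration of Rust's ownership rules (not Azion code).
fn consume(buf: Vec<u8>) -> usize {
    // `consume` takes ownership of `buf`; its memory is freed
    // automatically when this function returns.
    buf.len()
}

fn main() {
    let request_body = vec![1u8, 2, 3];
    let n = consume(request_body);
    println!("consumed {} bytes", n);
    // The line below would be a use-after-free in C; in Rust it is a
    // compile-time error, because ownership moved into `consume`:
    // println!("{}", request_body.len()); // error[E0382]: borrow of moved value
}
```

The commented-out line is exactly the class of bug that can lurk undetected in C or C++ for years; in Rust the program is rejected before it ever runs.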
Rust is also advantageous for next-generation applications due to its speed and memory use. Rather than providing safety at the expense of time- and resource-consuming mechanisms like ongoing garbage collection, Rust tracks each value’s ownership, whether it lives on the stack or on the heap, and the compiler determines at compile time exactly when memory is no longer needed, inserting the cleanup automatically. This allows for efficient memory usage without a runtime garbage collector. In addition, Rust is statically typed and compiled ahead of time, allowing for further speed and optimization.
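A small sketch of what “cleanup determined at compile time” means in practice: the `Connection` type below records when it is cleaned up, and the cleanup runs at a precise, deterministic point (the closing brace of its scope) rather than whenever a garbage collector gets around to it. This is a generic illustration, not Azion code:

```rust
use std::cell::Cell;
use std::rc::Rc;

// A resource whose cleanup we can observe (illustration only).
struct Connection {
    dropped: Rc<Cell<bool>>,
}

impl Drop for Connection {
    fn drop(&mut self) {
        // The compiler inserts the call to this destructor at the exact
        // point the value goes out of scope; no GC pass is involved.
        self.dropped.set(true);
    }
}

// Returns (was_dropped_inside_scope, was_dropped_after_scope).
fn demo() -> (bool, bool) {
    let flag = Rc::new(Cell::new(false));
    let before;
    {
        let _conn = Connection { dropped: Rc::clone(&flag) };
        before = flag.get(); // still alive here: false
    } // `_conn` is dropped deterministically at this brace
    (before, flag.get())
}

fn main() {
    let (before, after) = demo();
    println!("dropped inside scope: {}, after scope: {}", before, after);
}
```

Because the free point is known at compile time, there are no GC pauses to budget for, which matters when every millisecond counts.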
Azion Cells Engine
Rather than building on Node.js, the runtime that AWS, Google, and many other providers use, our implementation was built from scratch in Rust, reaping all the advantages of Rust’s security, reliability, and speed. In addition, it is able to execute JavaScript without spinning up an entire container and Node.js process. As a result, it is faster, simpler, and cheaper than Node.js-based solutions, allowing for better performance while using fewer resources.
In addition, our implementation improves upon some of the issues that have arisen with Node.js since its creation in 2009. Ryan Dahl, Node.js’s original author, has discussed these issues extensively, calling its module system the biggest design flaw because of its centralized distribution and overcomplexity. However, ES modules, introduced as a standardized module system in ECMAScript 2015, address this problem; our engine uses ES modules exclusively to provide a simple and decentralized module system.
Finally, our engine provides additional security over Node.js through the use of V8’s security sandbox and a number of additional mechanisms. If you’ve read our post on zero trust security, you’ll remember that a key tenet of modern security is giving each user access only to the parts of a system that they need. This is a problem with Node.js, which was not built with multi-tenancy in mind: everything is accessible to everyone, which is why container-based isolation is required. Our implementation adds strong user segregation and limits access to privileged operations, making it safe to run third-party code.
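To illustrate the least-privilege principle at work here, the sketch below shows one generic way a host can gate privileged operations per tenant: each tenant context carries an explicit allow-list, and anything not granted is refused. This is a conceptual illustration of the zero-trust idea, not Azion’s actual implementation, and the names (`TenantContext`, `perform`) are invented for the example:

```rust
use std::collections::HashSet;

// Hypothetical per-tenant context: holds only the operations this
// tenant was explicitly granted (illustration, not Azion code).
struct TenantContext {
    tenant_id: String,
    allowed_ops: HashSet<String>,
}

impl TenantContext {
    fn new(tenant_id: &str, ops: &[&str]) -> Self {
        TenantContext {
            tenant_id: tenant_id.to_string(),
            allowed_ops: ops.iter().map(|s| s.to_string()).collect(),
        }
    }

    // Every privileged operation is checked against the tenant's grants;
    // anything not explicitly allowed is denied by default.
    fn perform(&self, op: &str) -> Result<String, String> {
        if self.allowed_ops.contains(op) {
            Ok(format!("{}: {} ok", self.tenant_id, op))
        } else {
            Err(format!("{}: {} denied", self.tenant_id, op))
        }
    }
}

fn main() {
    let tenant = TenantContext::new("acme", &["fetch"]);
    println!("{:?}", tenant.perform("fetch"));    // granted
    println!("{:?}", tenant.perform("fs_write")); // denied by default
}
```

The key design choice is deny-by-default: a tenant never sees a capability it was not granted, in contrast to an environment where everything is reachable and isolation must be bolted on from outside.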
How Azion Compares to AWS Lambda
Like Azion’s Edge Functions, AWS Lambda gives customers the ability to write event-driven functions that are managed by the provider and billed based on consumption. Unlike Azion, however, each AWS Lambda function runs in a separate container. Developers must decide ahead of time how much memory to allocate to each function based on its code dependencies; AWS then allocates proportional CPU, bandwidth, and disk I/O. Containers are spun up when the function is triggered and spun down after the function has been idle for a while. After that period of inactivity, both the container and the Node.js process must start before the function can execute, resulting in a cold start of about half a second and requiring a lot of resources to scale: two huge problems for low-latency and high-throughput applications, including 5G.
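A back-of-the-envelope model makes the cold-start difference concrete. The numbers below are the rough figures from this post (about 500 ms to start a container plus its runtime process, versus a few milliseconds to load an already-deployed function from disk), and the 10 ms execution time is an assumption for illustration:

```rust
// Toy first-request latency model; the inputs are illustrative, not benchmarks.
fn first_request_latency_ms(startup_ms: u64, exec_ms: u64) -> u64 {
    startup_ms + exec_ms
}

fn main() {
    let exec = 10; // assumed function execution time, for illustration
    let container = first_request_latency_ms(500, exec); // container + runtime cold start
    let isolate = first_request_latency_ms(5, exec);     // load function into memory
    println!("container cold start: {} ms", container); // 510 ms
    println!("sandbox load:         {} ms", isolate);   // 15 ms
}
```

Under these assumptions, startup dominates the first request entirely in the container case, while in the sandbox case the function’s own execution time is the larger share of the total.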
In contrast, Azion provides better performance with less hardware by using a multi-tenant environment in which each function is isolated in a secure sandbox. This requires far less RAM than housing each function in a separate container and, as a result, is far more cost-effective. In addition, multi-tenancy is much faster. Since Azion Edge Functions run at all of our edge locations, when a request arrives, the function only needs to be loaded from our NVMe disks into memory, which takes just a few milliseconds (as opposed to the few hundred it would take to spin up a whole container). This results in lower latency and more consistent performance. To get anything like this in Lambda, customers must pay additional fees upfront to reserve dedicated containers in a specific region, leading to underused resources or to performance hiccups if the function reaches its concurrency limit in that region.
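To make the RAM argument concrete, here is a toy capacity calculation. The per-tenant overheads are assumptions chosen for illustration (roughly container-sized versus isolate-sized footprints), not measured Azion or AWS figures:

```rust
// Toy capacity model: how many tenant functions fit in a fixed RAM budget?
// Per-tenant overheads below are illustrative assumptions, not vendor data.
fn tenants_per_node(node_ram_mb: u64, per_tenant_mb: u64) -> u64 {
    node_ram_mb / per_tenant_mb
}

fn main() {
    let ram = 64 * 1024; // a hypothetical 64 GB edge node
    // Assume ~128 MB per dedicated container vs. ~3 MB per sandboxed isolate.
    println!("containers per node: {}", tenants_per_node(ram, 128));
    println!("isolates per node:   {}", tenants_per_node(ram, 3));
}
```

Even with generous assumptions, the same node hosts orders of magnitude more sandboxed functions than dedicated containers, which is the source of the cost-effectiveness claim above.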
Conclusion
As discussed in our previous post, 5G applications must meet incredibly high standards, with low latency, reliable performance, and resource efficiency chief among them. Meeting these performance metrics is a lot to worry about, particularly for those of us who already tend to worry a lot. We’ve put a lot of thought into the best possible way to deliver on these metrics, building our software stack from the ground up to meet the strict requirements of next-generation applications and providing a solution that is 100x more effective than cloud providers’ at a fraction of the cost.