In our previous posts, we’ve looked at 5G from the perspective of networks: how they’re being deployed, what deployment entails, and how MEC can help service providers implement 5G and achieve 5G performance standards. However, the way most people will experience 5G isn’t through the theoretical capabilities of a network, but through the performance of the devices and applications that run on those networks. Today, we’re going to change perspective and look at 5G from the point of view of applications. This post will discuss what it means to run applications at the edge, how modern applications are created, the options for deploying those applications, and how serverless applications run on our platform using Azion Edge Functions and our core technology, Azion Cells.
Running Applications at the Edge
Edge-native applications and devices are already fueling smart cities, smart grids, and smart homes and helping to transform the manufacturing, automotive, retail, and healthcare industries. If this list sounds familiar, it’s because these are many of the services 5G is poised to deliver. One of the reasons Edge Computing and 5G work so well together is that they were created to solve the same problems: the ever-growing army of mobile and IoT devices, the need for machines to communicate with each other, and the demand for applications with ever-lower latency and ever-higher throughput. By offloading workloads from the cloud to the edge, developers can ensure that their applications and devices access data as quickly, efficiently, and cost-effectively as possible.
However, Edge Computing isn’t meant to replace cloud computing so much as complement it. The Linux Foundation’s 2020 State of the Edge Report predicts that $700 billion in CAPEX will be spent over the next decade creating edge IT and data center facilities. By 2028, the collective power footprint of these facilities is expected to reach 102,000 MW, but these megawatts of power won’t be centrally located as they are with cloud companies. Instead, edge IT and data centers trade size for proximity, dispersing computing power and storage across highly distributed data centers all over the world so that they can reach end users faster. Because edge data centers are so much smaller than cloud providers’ hyperscale data centers, the applications hosted on them need to be incredibly lightweight, using as little memory and CPU as possible.
Evolution of App Development
Initially, if you wanted to create an application, you needed your own servers. Since servers had no way of isolating one application from another, hosting multiple applications on the same server was a nightmare for dividing resources and enforcing security. As a result, each application needed its own server, which was incredibly expensive and left a lot of resources unused. Virtualization solved this problem by isolating applications in virtual machines (VMs), allowing multiple applications to run on the same server’s CPU and leading to better resource utilization, lower hardware costs, and greater scalability.
VMs are emulations of computer systems; each one houses not only an application, but a separate operating system for that application. To conserve energy and save on costs (two big concerns with 5G applications), VMs may be spun down when applications aren’t running and spun up again when the application is requested. However, it takes minutes to spin up a VM. That’s a vast improvement over the months it can take to procure and deploy a new server, but spinning VMs up and down is only effective for scaling and addressing larger patterns in usage; doing so between each request would be disastrous. Today’s users abandon a slow-loading webpage within seconds, never mind minutes, to say nothing of the latency requirements of next-generation applications.
In addition, VMs weren’t built for the complexity of modern applications, which break functionality into small, independent components called microservices. Each microservice is designed for a single purpose; for example, Facebook has different microservices for chatting, forming groups, and paying third-party vendors. This allows companies to distribute work effectively, add features or fix problems quickly, and separate processes across multiple servers instead of taking up lots of space on a single server. Because VMs are strictly segmented from each other, each microservice would need to be hosted in a separate VM, resulting in the same problems with scaling and resource utilization that VMs were designed to solve. To create, deploy, and scale applications with the efficiency and cost-effectiveness needed for 5G, developers must use newer and more agile solutions: containers or serverless architectures.
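Before looking at those options, it may help to make the microservice idea concrete. Below is a minimal sketch of a single-purpose chat service, assuming Node.js as the runtime; the route, port, and behavior are hypothetical, and a real deployment would sit behind an API gateway alongside its sibling services.

```javascript
// A hypothetical single-purpose chat microservice, sketched in Node.js.
// It does exactly one job: accept a chat message and acknowledge it.
const http = require("http");

const server = http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/messages") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // In a real system the message would be handed off to a queue or
      // another service; here we simply acknowledge receipt.
      res.writeHead(202, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ status: "queued", bytes: body.length }));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Each microservice is deployed, scaled, and replaced independently.
server.listen(3000);
```

A service this small can be shipped, scaled, and swapped out on its own, which is exactly the property that dedicating a whole VM to each one makes so expensive.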
Container-Based Architecture
Containers are a more lightweight way of implementing virtualization. They’re also much faster to deploy than VMs, taking seconds to spin up rather than minutes. Instead of packaging each application (or worse, each microservice) with its own operating system, containers share the host’s operating system kernel while keeping applications isolated from one another. Each microservice gets a share of CPU, memory, and disk space, resulting in a fully functional unit that can be deployed and managed independently.
However, managing these containers is a complicated task that would be frustrating and inefficient to perform manually. To avoid these headaches, many developers use Kubernetes, an open-source platform for managing containerized workloads and services. This container orchestrator ensures that applications are functioning the way they’re supposed to, even as new microservices are added or old ones are replaced.
Serverless Architecture
Serverless applications are even more lightweight than containers, breaking applications into functions that are hosted (ironically, on servers) by third-party vendors. Rather than packaging code together with all of its dependencies, the vendor stores only the code itself, and it doesn’t need to be constantly running. Instead, functions are activated as needed and deployed in milliseconds, with the vendor handling all backend services and charging developers on a pay-as-you-go basis.
This pay-as-you-go model is radically different from the business model of containers and other deployment methods, which require developers to provision resources ahead of time. For example, with containers, each container is assigned to a specific machine and uses that machine’s operating system. Although containers can be moved easily and deployed on a new machine in seconds, developers must determine for themselves where resources are needed and pay for those resources ahead of time. In contrast, serverless functions aren’t dedicated to any one server in particular. As a result, applications can be dynamically provisioned by the vendor, executing functions when and where they’re needed.
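To illustrate just how small the serverless unit of deployment is, here is a minimal sketch of a function written against the service-worker-style fetch API used by many V8-based serverless and edge runtimes; the handler names are illustrative rather than any particular vendor’s interface.

```javascript
// A minimal serverless function using the service-worker-style "fetch"
// event common to V8-based runtimes. The names here are illustrative;
// each vendor's API differs in the details.
addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // No server to provision: the platform instantiates this code on demand,
  // runs it for the duration of the request, and bills per execution.
  const url = new URL(request.url);
  return new Response(`Hello from ${url.pathname}`, {
    headers: { "Content-Type": "text/plain" },
  });
}
```

Because nothing in this code references a particular machine or operating system, the vendor is free to run it wherever capacity and proximity to the user make the most sense.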
Serverless vs. Containers
Both containers and serverless are useful for MEC because they require much less infrastructure than VMs, lowering costs and increasing efficiency. In addition, both architectures suit today’s applications because they break them down into smaller components. However, when it comes to scalability, efficiency, cost, ease of use, and security, there are several differences between them.
Serverless
Serverless architectures are more lightweight than containers, breaking applications down even further into individual functions. Deployment is faster, taking milliseconds rather than seconds, which lets developers scale instantly to absorb surges in requests, roll out new features and patches immediately, and conserve resources by starting and stopping functions as requests demand. Moreover, this backend management is handled by the serverless vendor, allowing developers to focus on the application’s functionality and ensuring that they don’t pay for resources they never use. Lastly, upfront costs are greatly reduced because developers do not have to pay for dedicated resources before deployment.
Containers
Containers give developers more control than serverless architecture. Since code is packaged with all its dependencies, developers can choose languages and libraries themselves, making it easy to migrate applications between cloud providers. This control over backend services also prevents developers from getting locked into a specific vendor. Unlike serverless functions, containers run all the time, which makes them better for long-running processes and, depending on the vendor, can ensure more reliable performance for some specific use cases. Finally, containers may offer more security than serverless architecture if serverless vendors host code from multiple customers on the same server without taking steps to prevent data exposure between customers.
Serverless Solutions for Next-Gen Applications
When weighing the benefits of containers vs. serverless for next-generation applications, it’s important to consider what is needed for both MEC and 5G. Applications must be incredibly lightweight to run in edge IT and data center facilities. 5G applications need to be highly mobile, running anywhere and everywhere with low latency and consistent performance, and should be cost-effective to run and scale given the amount of data that will be processed and transmitted. In addition, security will be extremely important as industries like healthcare take advantage of 5G and MEC, allowing patients and doctors to instantly share medical data.
Given these considerations, serverless has several advantages over containers: it consumes fewer resources, is more cost-effective, and runs everywhere, deploying and scaling in milliseconds. However, compared to containers, serverless can suffer performance issues from cold starts, and hosting more than one customer’s code on the same server can raise security issues.
Serverless Architecture Without the Disadvantages
Our serverless solution, Azion Edge Functions, provides the advantages of serverless without these drawbacks. We prevent serverless security issues through sandboxing, which creates a secure environment for executing functions even when multiple tenants (i.e., more than one customer) are hosted on the same server. Sandboxed processes run separately, so one piece of code can’t affect or interact with another. Moreover, our software stack uses Chrome V8, Google’s open-source JavaScript engine, which optimizes JavaScript execution and ensures that sandboxing does not slow performance. In addition, our core technology, Azion Cells, has zero cold starts, so functions perform consistently from the very first request.
In our next post, we’ll do a deep dive into Azion’s software stack to show you the decisions we made to ensure our serverless platform provides the best possible performance, security, and flexibility.