Did you know clouds have sharp edges?
What is Edge Computing?
Let’s say you want to deploy a web application.
To serve your app to your users, you need a server on which to run your application.
Traditionally, you had two options: buy and run the server yourself in your own location, known as on-premises hosting, or rent server resources from a cloud service provider (CSP) and host your application out of their data center [1]. Depending on the CSP you choose, they might have several data center locations on offer; in the case of Amazon Web Services (AWS) this could, for example, be us-east-2, which is in Ohio in the United States [2].
When accessing your web application, users all over the world would need to connect to your server in this one location. Due to the large distance to your server, users on other continents, such as Europe or Asia, would experience high latency when accessing your app, resulting in a bad user experience [1].
To fix this issue you could consider scaling your application by spending a lot of money on renting more servers around the world, or you could use a Content Delivery Network (CDN) like Cloudflare to cache static content like HTML and CSS files on servers located closer to the user [1]. With the latter approach, the website appears to load quicker for the user because they get the cached files directly from the CDN rather than from the faraway data center [3]. This is great for static files; however, any time your web application has to process data from the user, this needs to happen on your primary server, as the CDN traditionally does not provide compute resources [1][4].
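To make that split concrete, here is a minimal sketch (TypeScript, using Node's built-in http module) of how an origin server might tell a CDN what it may cache. The /static/ path convention and the max-age value are illustrative assumptions, not requirements of any particular CDN:

```ts
import http from "node:http";

const server = http.createServer((req, res) => {
  if (req.url?.startsWith("/static/")) {
    // Static assets: allow any CDN in front of us to cache them for a day.
    res.setHeader("Cache-Control", "public, max-age=86400");
    res.end("/* a cached static asset would be served from here */");
  } else {
    // Dynamic responses: must be computed on this origin server every time.
    res.setHeader("Cache-Control", "no-store");
    res.end(JSON.stringify({ processedAt: new Date().toISOString() }));
  }
});

server.listen(8080);
```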
Enter edge computing.
Unlike the conventional data center approach, where data is sent to a central location for processing, edge computing aims to reduce latency and increase bandwidth efficiency by processing data on devices at the “edge” of the network. This results in faster response times and better real-time processing capabilities [5]. You could compare edge computing to a CDN for executable programs / non-static applications [1].
So what is the edge? [insert U2 joke here]…
Unfortunately, there is no clear definition of what exactly the edge of a network is, and so-called "edge nodes", the devices running at the network edge, can take many forms, from small IoT devices to larger computer systems in regional data centers [4][5].
In this specific context, where user devices communicate over the internet with an application hosted in large data centers (i.e. "the cloud"), edge nodes refer to servers in smaller, local data centers, like those run by CDNs. The important distinction is that edge nodes are geographically closer to users' devices than cloud servers are [5].
The benefits of Edge Computing
The obvious main benefit of edge computing is the improvement in latency and response times, as data processing can take place closer to the source of the data, i.e. the user [5]. For someone living in Stuttgart, Germany, who wants to access a web service on an edge node in Frankfurt, Germany, rather than in a data center in Ohio, USA, this can make a difference of 300 ms round-trip time per request (70 ms vs. 370 ms) [6].
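If you want to get a feel for such numbers yourself, a rough sketch of a round-trip measurement using the standard fetch and performance APIs (available in browsers and Node 18+) could look like this; the endpoint URLs are placeholders, not real services:

```ts
// Measure the round-trip time to an endpoint in milliseconds.
async function measureRoundTrip(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: "HEAD" }); // HEAD keeps the payload small
  return performance.now() - start;
}

// e.g. compare a nearby edge endpoint with a far-away region:
// await measureRoundTrip("https://edge.example.com/ping");
// await measureRoundTrip("https://us-east-2.example.com/ping");
```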
Another benefit is improved usage of network infrastructure. Because data processing is spread all over the world, so is the traffic your application generates.
Processing at the edge of the network allows data to be filtered and pre-processed before being transmitted to a central data center, reducing the amount of data that needs to travel over large distances. This can result in cost savings, depending on what your CSP charges for network traffic to and from your server. In a world that is becoming ever more data-driven, it is necessary to manage and process data efficiently [5].
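As an illustration, here is a minimal sketch of such pre-processing in the style of a serverless edge function: a batch of raw sensor readings is filtered and reduced to a summary, and only that summary is forwarded to a hypothetical central origin (origin.example.com is made up):

```ts
interface Reading {
  sensorId: string;
  value: number;
}

export default {
  async fetch(request: Request): Promise<Response> {
    const readings = (await request.json()) as Reading[];

    // Drop obviously invalid samples and reduce the batch to one average.
    const valid = readings.filter((r) => Number.isFinite(r.value));
    const avg = valid.reduce((sum, r) => sum + r.value, 0) / (valid.length || 1);

    // Only this tiny summary crosses the long distance to the central data center.
    await fetch("https://origin.example.com/ingest", {
      method: "POST",
      body: JSON.stringify({ count: valid.length, avg }),
    });

    return new Response("ok");
  },
};
```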
It can also improve the reliability and availability of your web application. Hosting your app in a single data center means that if there is, for example, a network outage at that location, your entire user base is affected by the disruption. If just one edge node experiences an outage, on the other hand, only the part of your user base close to that node is affected. And thanks to this redundancy, those users might not even experience a complete outage but merely higher latencies, as they are connected to a different edge node. Admittedly, though, CSPs like AWS have multiple so-called availability zones per region precisely to prevent such an outage from a single point of failure [2].
In the same way, edge computing improves availability through higher resiliency against Denial-of-Service attacks. Attackers will have a much harder time targeting hundreds of instances of an application as opposed to one.
The pitfalls of Edge Computing
So far, we’ve learned that edge nodes are basically just more data centers, positioned closer to users than the others. So CSPs just need to build more data centers in more locations. Where’s the catch?
The answer lies in the differences in compute, network, and storage resources between the data centers. AWS has fewer full-blown data centers than Cloudflare has edge nodes, but those data centers are far more powerful and can handle larger workloads. Because edge nodes have limited resources, they need to be used more efficiently.
The solution edge computing platforms have opted for is to offer only serverless applications and workloads on their systems. This limits your use cases for edge computing, as you won’t be able to host any form of application with a long runtime [1]. One example is cloud gaming, a use case that would really benefit from low latencies. Unfortunately, games are played over an extended period of time and are fairly hardware-intensive, making them unfeasible for edge computing.
Furthermore, edge computing is not always faster than standard cloud computing. To illustrate this, let’s compare two scenarios in which your application needs to access a central database and perform multiple calls to complete a specific function. The user of your application is far away from that data center and experiences a round-trip latency of x milliseconds [1]:
In scenario one, your application runs in a single data center, the same one your DB is hosted in. When the user wants to complete the function, they call an endpoint on your application. It takes half of the round-trip time (½x) for that call to reach the data center where the app runs. The app queries the database multiple times, which doesn’t take long since it’s hosted in the same data center. Afterwards, the response is sent back to the user, taking the other half of the round-trip time (½x). The overall latency is x, one full round trip, plus some negligible latency for every DB access.
In the second scenario, the application runs on the edge, but the database is still in that faraway data center. Now when the user calls an endpoint on the application, they experience negligible latency making the call, but the application incurs the full round-trip time x for every one of the multiple database accesses. The resulting latency is the number of DB calls per function times x, plus the short round trip from the user to their nearest edge node.
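To put rough numbers on both scenarios, here is a back-of-the-envelope model in TypeScript. The 300 ms round trip echoes the Stuttgart/Ohio example above, while the 0.5 ms per co-located DB call, the 10 ms user-to-edge hop, and the choice of five DB calls per request are all assumptions for illustration:

```ts
// Total latency when app and DB share one central data center:
// one full user round trip, plus cheap co-located DB calls.
const centralHosted = (rtt: number, dbCalls: number): number =>
  rtt + dbCalls * 0.5;

// Total latency when the app runs on the edge but the DB stays central:
// a cheap hop to the edge, but every DB call crosses the full distance.
const edgeHosted = (rtt: number, dbCalls: number, edgeRtt = 10): number =>
  edgeRtt + dbCalls * rtt;

console.log(centralHosted(300, 5)); // 302.5 ms
console.log(edgeHosted(300, 5)); // 1510 ms: the repeated long round trips dominate
```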
As a result, when you want to host an application on the edge, you should make sure the app was designed to run this way, taking care that it doesn’t need to make multiple calls to a central resource to complete a task. Otherwise, somewhat counter-intuitively, the experience might be slower for people far away from the central resource than if the app weren’t hosted on the edge in the first place [1].
Another caveat is that the term “edge” is not properly defined. Depending on the edge platform you choose, the edge nodes may be just as sparsely scattered as large CSP data centers. The edge computing service offered by Vercel relies heavily on AWS’s infrastructure, effectively making their edge as far away as the cloud [7].
A look at the industry
Let’s have a look at Cloudflare, a prominent example: what started as a simple CDN and internet security company has since evolved into an edge computing platform.
In 2017 the company introduced Cloudflare Workers, their serverless computing solution built on their content delivery network infrastructure [8]. Since then they have been building out their edge computing platform, introducing Workers KV, a simple key-value data store the serverless Workers can access; Queues, a message queue system for Workers; and D1, SQLite databases for Workers [9][10][11]. They also announced R2, a storage bucket service fully compatible with AWS’s S3 API that, unlike S3, does not charge for egress traffic, making it very competitive in terms of price-to-performance [12].
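To get a feel for the programming model, here is a minimal Worker sketch that counts page views in Workers KV. The VIEWS binding name is hypothetical and would be configured in the project’s wrangler.toml; the KVNamespace type comes from Cloudflare’s @cloudflare/workers-types package:

```ts
// "VIEWS" is a hypothetical KV namespace binding configured in wrangler.toml.
export interface Env {
  VIEWS: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const path = new URL(request.url).pathname;

    // Read the current counter for this path (KV returns null on a miss).
    const current = parseInt((await env.VIEWS.get(path)) ?? "0", 10);
    await env.VIEWS.put(path, String(current + 1));

    return new Response(`${path} has been viewed ${current + 1} times`);
  },
};
```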
Cloudflare isn’t the only company expanding its business portfolio; other CDNs seem to have smelled blood in the water.
In 2022 Akamai, another leading CDN, acquired the CSP Linode for $900 million, with the goal of building out their own edge and cloud computing platform [13]. They now offer EdgeWorkers and EdgeKV, a serverless compute platform and a key-value store, respectively [14].
The CDN Fastly also offers what they call Compute@Edge as a serverless compute platform based on their edge network [15].
Because of their main CDN business, these companies already have a lot of data centers around the world, which they can expand and invest in to provide these edge computing services.
And then there are the big CSPs like AWS, Google Cloud Platform (GCP) and Microsoft Azure, which haven’t been sleeping while the CDNs expanded their business. These cloud computing platforms also offer edge computing services on their own edge networks. Most offer serverless computing at their edge locations; some will even install and maintain server hardware on the premises of large enough customers, effectively building a private edge node in the customer’s own location [16][17].
And their edge networks aren’t necessarily any less dense than the CDNs’. In Germany, AWS has edge data centers in 4 cities, just one fewer than Cloudflare’s 5 locations. In the US it’s 44 AWS locations versus 39 for Cloudflare [18][19]. AWS’s edge network of over 400 locations in total pales in comparison to Akamai’s network of over 1,000, but trumps Vercel’s network of 19 locations [18][20][7].
Use Cases
The question that now remains is: Who needs edge computing anyways?
The main target audience for edge platforms like Cloudflare or Vercel are web developers who use frameworks like Next.js, SvelteKit, or React, which are designed to run routes on serverless platforms [21]. While this use case doesn’t strictly require the lower latencies offered by edge computing, the overall user experience can definitely benefit from them.
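As a concrete example, a Next.js App Router route can opt into the edge runtime with a single export. The file path below is made up for illustration:

```ts
// app/api/hello/route.ts in a hypothetical Next.js (App Router) project.
// The `runtime` export asks the platform to run this route at the edge.
export const runtime = "edge";

export async function GET(request: Request): Promise<Response> {
  // Runs in the edge location nearest the user instead of one central region.
  return Response.json({
    message: "Hello from the edge!",
    url: request.url,
  });
}
```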
As briefly mentioned before, highly interactive use cases like cloud gaming or remote desktop workplaces would benefit enormously from the low latencies. When playing a fast-paced game like a competitive shooter, every millisecond makes a difference. However, these use cases require sustained high performance and are difficult to achieve on serverless edge platforms.
You could also consider smart homes a fitting use case that would benefit from edge computing. Ideally, though, smart home devices shouldn’t need to communicate with any cloud at all; they should be able to process all data required to run the smart home locally, in case of a network outage. Then again, because of the unclear definition of what counts as the edge, IoT devices like sensors, cameras, etc. could be considered edge nodes, technically making what they do edge computing.
So as of today, it seems that the edge computing platforms the CDNs offer don’t really enable any new or groundbreaking use cases. The hybrid cloud approach provided by the likes of Google or AWS, where they install an edge node on the customers’ premises, allows for a little more flexibility as use cases are not limited to serverless applications. But the solution they are offering at this point is essentially just Hardware-as-a-Service.
However, just because there are no use cases that necessitate edge computing today doesn’t mean the technology is pointless.
In the future smart cities could be powered by edge computing, with autonomous vehicles communicating with each other and traffic lights to create optimal traffic flow. Or artificial intelligence using real-time sensor data to answer questions with current information, like “How busy is the supermarket right now?”.
In conclusion …
Edge computing undoubtedly benefits cloud applications in terms of latency and efficiency, but there are many pitfalls developers need to be aware of when deploying apps on the edge. The technology has the potential to disrupt the industry: even though it’s much more limited than standard cloud computing, it’s extremely good at one thing, latency, and it’s a much cheaper solution for providers and customers alike. However, the ambiguity of the term “edge” and what it actually means makes it, in my eyes, unclear whether people will adopt it because they truly understand its benefits or because it’s just another buzzword that sounds cool.
Written by: Nikolai Thees
Sources
[1] Fireship – Is “edge” computing really faster?
https://www.youtube.com/watch?v=yOP5-3_WFus
[2] AWS Documentation – Regions and Zones – AWS EC2
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
[3] IONOS Digital Guide – What is a CDN (Content Delivery Network)?
https://www.ionos.com/digitalguide/hosting/technical-matters/what-is-a-cdn-content-delivery-network/
[4] Akamai – How Does Edge Computing Work
https://www.akamai.com/our-thinking/edge-computing
[5] Cloudflare – What is edge computing?
https://www.cloudflare.com/learning/serverless/glossary/what-is-edge-computing/
[6] CloudPing – AWS Latency Monitoring
https://www.cloudping.co/grid/p_98/timeframe/1Y
[7] Vercel Documentation – Supported Regions for the Edge Network
https://vercel.com/docs/concepts/edge-network/regions
[8] Cloudflare Blog – Introducing Cloudflare Workers
https://blog.cloudflare.com/introducing-cloudflare-workers/
[9] Cloudflare Blog – Building With Workers KV, a Fast Distributed Key-Value Store
https://blog.cloudflare.com/building-with-workers-kv/
[10] Cloudflare Blog – Cloudflare Queues: globally distributed queues without the egress fees
https://blog.cloudflare.com/introducing-cloudflare-queues/
[11] Cloudflare Blog – Announcing D1: our first SQL database
https://blog.cloudflare.com/introducing-d1/
[12] Cloudflare Blog – Announcing Cloudflare R2 Storage: Rapid and Reliable Object Storage, minus the egress fees
https://blog.cloudflare.com/introducing-r2-object-storage/
[13] TechCrunch – Akamai reaches for the cloud
https://techcrunch.com/2023/02/13/akamai-reaches-for-the-cloud/
[14] Akamai – Edge Compute Solutions
https://www.akamai.com/solutions/edge
[15] Fastly – Serverless compute environment
https://www.fastly.com/products/edge-compute
[16] AWS – On Premises Private Cloud
https://aws.amazon.com/outposts/
[17] Google Cloud – Google Distributed Cloud Edge overview
https://cloud.google.com/distributed-cloud/edge/latest/docs/overview
[18] AWS – Global Infrastructure
https://aws.amazon.com/about-aws/global-infrastructure/
[19] Cloudflare – Cloudflare Global Network
https://www.cloudflare.com/network/
[20] Akamai – Connected Cloud Computing Services
https://www.akamai.com/why-akamai
[21] Vercel – Edge Functions
https://vercel.com/features/edge-functions