
Choosing Your Backend: Onsite vs Cloud vs Serverless vs Edge

Aziz Nal · ITNEXT

Whether you're building the next blockchain app that'll change the world or creating yet another Twitter clone, you'll need to deploy your app somewhere.

In this article, I talk about the models of serverful hosting, serverless functions, and edge functions, comparing them in terms of what they are and what they are not, and drawing some interesting comparisons between them. After reading, you'll have a better idea of the difference between serverful and serverless, and, specifically, of how conventional serverless functions differ from edge functions. There are two ways to group the models in this article; in a way, we can view them as a series of abstractions. The next section covers a detailed breakdown of each model.

With serverful, you have two main options. First, you could host your servers yourself, on-site. That means you physically have the servers somewhere you can access them, and you configure their networking and deployments mostly by hand. A server is anything from your ten-year-old laptop with a broken screen to a fully loaded server room.

The second way to do serverful is to deploy a server in the cloud. This way, you leave much of the complexity of managing a server to your cloud provider and can focus on actually using it. Regardless of which one you choose, serverful grants you fine-grained control over your server and the technology you run on it, with few limitations.

The main difference between onsite and cloud usually comes down to cost. You can get a feel for the costs with the following charts. In the chart, assume only the costs of the servers themselves are included.
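To make the upfront-versus-rental tradeoff concrete, here is a minimal break-even sketch. All the prices are illustrative assumptions, not quotes from any vendor or provider:

```javascript
// Break-even sketch: onsite hardware is a one-time purchase, while the cloud
// is a recurring rental. Returns the number of months of cloud rental that
// add up to the onsite purchase price. All figures are made up.
function breakEvenMonths(onsiteUpfront, cloudMonthly) {
  return Math.ceil(onsiteUpfront / cloudMonthly);
}

// e.g. a $12,000 server purchase vs renting comparable capacity at $500/month:
console.log(breakEvenMonths(12000, 500)); // 24 months
```

Past the break-even point the purchased hardware is "free" (electricity and staff aside), which is the intuition behind the cloud getting expensive at scale.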
Realistically, you would also pay the salaries of the engineers who set up and maintain these servers, but that cost depends more on the complexity of your setup than on choosing onsite versus cloud servers, so it isn't included in the chart.

With onsite, you have to purchase all your hardware upfront. Other pains include making sure your hardware is compatible both with your current usage and with potential upgrades down the line. But once you've made the purchase, that's mostly it.

The cloud is something of a trap. It starts dirt cheap, since you rent only the amount of hardware you need, and as your traffic increases, you can add more powerful hardware quite easily. The issue is that cloud providers' pricing gets out of hand quickly past a certain threshold, to the point where even Amazon has found that moving workloads away from the cloud saves costs substantially. Again, these costs don't cover your DevOps team's salaries; they're just the rented hardware and services.

The saying goes that serverless is not really serverless, it's just someone else's server. That's true, but it misses the point. Serverless is an auto-scaling, pay-per-execution, provider-managed architecture: you write your code as functions, which you deploy to servers managed by your cloud provider. With a serverless setup, you use a provider's services, such as AWS Lambda, Google Cloud Functions, or Vercel, and they handle deploying your code as well as scaling it with your current traffic. A function may be called programmatically from your code, or it may be associated with a URL.

Got 0 users? No worries: you're charged per execution, so your cloud provider won't contribute to your bankruptcy (this time ;) ). Got a sudden influx of 10 million users instead of the 3 million you're used to because it's Black Friday? Have more functions!
Have all the functions, and then have some more 🤑

A serverless function is created when an event (e.g. a request) triggers it. The startup of a new function is called a cold start; once the function is running, it's called warm. A warm function stays active for a short period after its initial cold start, allowing it to handle incoming requests without the need for a new cold start. AWS, for example, keeps a few Lambdas warm for faster request handling, at the cost of a slightly higher bill.

While a warm function is handling a request, new requests are queued until the current one finishes. If more requests arrive than a single function can handle, a new function is cold-started and the cycle begins again. If a function is idle for long enough, that is, if no requests are received, it's terminated.

These short-lived functions bring a few new challenges, such as the number of connections currently open to your database. If set up incorrectly, each call to a function may open a new connection to your database, which would eventually saturate the database and cause you to self-DoS. Of course, for each problem there is a solution; for the connection-saturation example, the solution is connection pooling.

Serverless functions, just like a cloud server, live in a region of your choosing.
Typically, you can deploy all your functions in one region or use a different region for each function, depending on your use case. When choosing your deployment region, however, you should be mindful of where your other services live, such as your database.

Edge functions are still serverless, but with a bunch of ups and downs. Edge functions run on the edge of the network, as close to the caller as possible. This may seem like a perfect drop-in solution for better performance at first, but there are quite a few caveats.

For example, consider an edge function deployed near the caller but far from the database. It may not be obvious why this is a bad setup, because a diagram showing a single request from the function to the database is lying to us. Think about it: you very likely need to query your database multiple times in a single request; even a simple sign-up request needs a couple of queries. In this situation, the long distance between the edge function and the database can add significant latency to the response. A better setup is to deploy the edge function close to the database. Although that adds latency between the caller and the edge function, the real latency lies between the edge function and the database, due to the many potential round trips. How much this matters depends on your particular edge function.

The main advantage of edge is quicker response times. The biggest downside is usually compatibility: since edge code runs in a very lightweight environment, you miss out on a lot of APIs and libraries. For example, Vercel states that its edge functions have no access to most native Node APIs.
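The round-trip effect described above can be put into rough numbers. The latencies here are illustrative assumptions, not measurements:

```javascript
// Rough model: total time = one caller-to-function hop
// plus N sequential database round trips. Milliseconds, all illustrative.
function totalLatency(callerToFnMs, fnToDbMs, dbRoundTrips) {
  return callerToFnMs + dbRoundTrips * fnToDbMs;
}

// Edge function near the caller but far from the database:
console.log(totalLatency(5, 100, 4)); // 405 ms
// Function deployed near the database, farther from the caller:
console.log(totalLatency(80, 5, 4)); // 100 ms
```

The single caller-to-function hop is paid once, while the function-to-database hop is paid on every query, which is why the database-adjacent deployment wins as soon as a request needs more than a query or two.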
This extends to most database drivers as well, along with other limitations. In the table below, you can see comparisons of different aspects of the deployment models mentioned in this article.
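Because most TCP-based database drivers don't run in edge environments, a common workaround is an HTTP-based driver that tunnels queries over `fetch`, which edge runtimes do provide. The endpoint URL and payload shape below are hypothetical; real HTTP drivers define their own:

```javascript
// Sketch of querying a database over HTTP from an edge runtime.
// The URL and payload shape are hypothetical, not any provider's real API.
function buildQueryRequest(sql, params) {
  return {
    url: "https://db.example.com/query", // hypothetical query endpoint
    options: {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ sql, params }),
    },
  };
}

// Inside an edge function you would send it with the platform's built-in fetch:
// const { url, options } = buildQueryRequest("SELECT * FROM users WHERE id = ?", [42]);
// const rows = await (await fetch(url, options)).json();
```

Several hosted database providers offer drivers in exactly this style, precisely so that edge functions can talk to them without native Node APIs.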



This post first appeared on VedVyas Articles; please read the original post here.
