In this article we are going to take a look at the serverless world in order to understand what it is and what its main advantages and disadvantages are, beyond the hype.
What does Serverless mean?
Serverless means different things to different people, but we can broadly agree on the following:
- we pay only for what we (or rather, the end user) actually use;
- we get built-in redundancy and multi-AZ (Availability Zone) deployment;
- scaling is handled by the provider;
- the provisioning and management of servers/containers are handled by the provider.
In other words, we don’t pay for idle time, and developers no longer need to think about servers/containers.
Of course serverless still involves servers/containers, but the key point is that we no longer have to manage them.
As Gojko Adzic put it in one of his articles: “It is serverless the same way WiFi is wireless”.
Serverless or FaaS?
The term serverless was popularized by AWS in 2014/2015 with the launch of the Lambda and API Gateway services.
AWS Lambda is the perfect example of a serverless FaaS (Function as a Service) model.
In FaaS, the developer focuses on writing code (functions) that solves business problems. In this model the function is the last layer of abstraction; it’s like splitting a microservice into smaller logical units.
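To make the idea concrete, here is a minimal sketch of what such a function might look like, following the AWS Lambda Python handler convention (the event fields and greeting logic are illustrative, not from the article):

```python
import json

# A minimal FaaS-style function: the handler is the whole deployable unit.
# The (event, context) signature follows AWS Lambda's Python convention.
def handler(event, context):
    # Only business logic lives here: no server setup, no framework boilerplate.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can invoke it like any plain function (context is unused here).
response = handler({"name": "serverless"}, None)
print(response["body"])
```

Everything outside this function (HTTP routing, scaling, the runtime itself) is the provider’s problem.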
FaaS doesn’t always mean giving up infrastructure: we can also run functions on a Kubernetes cluster we manage ourselves, using a framework such as OpenFaaS (https://www.openfaas.com/), Kubeless (https://kubeless.io/), or Knative (https://knative.dev/).
By combining the serverless and FaaS models, we optimize costs and scalability and focus development on business value. Moreover, we make customers, managers and developers happy.
Limits and considerations
Of course serverless and FaaS also have limits and challenges to deal with; as of today, nobody has found the silver bullet that solves every problem without any downside.
These models inherit most of the limits of distributed systems, such as synchronous/asynchronous communication between services and the handling of service state.
Don’t underestimate the need to apply specific architectural patterns and approaches based on the complexity of the project itself. Approaches such as Event-Driven architecture, CQRS and Saga are recognized best practices and common ways to solve these issues.
Cost analysis in a serverless environment
We know that serverless can significantly reduce costs, but calculating the real costs can be tricky. First of all, we need to think about how many functions will be invoked and how frequently. We also need to estimate the average duration and memory allocation. With this information we can use one of our Cloud Provider’s pricing calculators and do the math.
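A back-of-the-envelope estimate can be sketched as follows. The unit prices are illustrative (in the ballpark of AWS Lambda’s public per-request and per-GB-second pricing); always check your provider’s own calculator:

```python
# Back-of-the-envelope FaaS cost estimate.
# Unit prices are illustrative assumptions, not an official price list.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD per 1M invocations
PRICE_PER_GB_SECOND = 0.0000166667  # USD per GB-second of compute

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    # Cost component 1: a flat price per invocation.
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    # Cost component 2: compute time, billed in GB-seconds
    # (allocated memory in GB multiplied by execution time in seconds).
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Example: 5 million invocations, 200 ms average, 512 MB allocated.
print(f"{monthly_cost(5_000_000, 200, 512):.2f} USD")
```

The interesting property is that both components scale with actual usage: zero traffic really does mean (almost) zero cost.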
One aspect that should certainly be considered is the free tier made available by the various providers, which usually hovers around one million free invocations. This allows you to run a development environment at no cost.
Another concept that serverless FaaS introduces is the cold start. The cold start is the time the Cloud Provider takes to prepare and initialize the function “container” before it can actually run the code. Once warmed up, a function instance can handle further invocations, but only one request at a time: if multiple requests arrive in parallel, the Cloud Provider initializes multiple function instances accordingly. A warmed-up function is called a hot function.
Maybe at this point you are thinking about the vendor lock-in that serverless produces.
Well, the vendor lock-in has always been a thing, but is it a big issue?
Of course it depends on your business needs: do you prefer a short TTM (Time To Market), or a vendor-agnostic solution with a longer TTM?
We have lock-in everywhere: in a particular programming language, a framework, or a third-party library.
In the serverless world we can make some compromises to facilitate a provider migration. For example, we can use the Serverless Framework (https://serverless.com/), which abstracts the resources and the deployment of our application into a common configuration file. By writing code that follows SOLID principles and best practices, we can make the provider and external services easy to replace.
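As an illustration, a minimal Serverless Framework configuration might look like the following (the service name and handler path are hypothetical; the provider block is the main thing that would change in a migration):

```yaml
# Minimal, illustrative serverless.yml for the Serverless Framework.
service: hello-service

provider:
  name: aws          # switching provider largely means changing this block
  runtime: python3.8
  region: eu-west-1

functions:
  hello:
    handler: handler.hello   # hypothetical module.function path
    events:
      - http:
          path: /hello
          method: get
```

The framework translates this description into provider-specific resources (for AWS, a Lambda function plus an API Gateway route), which is exactly the abstraction that eases migration.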
Serverless is not the goal.
We are trying to:
- test ideas as soon as possible;
- improve continuously;
- deliver fast;
- own less technology;
- focus on creating business value.
Serverless right now is just the best fit.
Serverless technologies are still maturing, and new features are added every month. For example, AWS Lambda recently introduced Provisioned Concurrency, a configuration that keeps a number of function instances warm in order to reduce cold start times. It’s a continuously evolving world.