21 October 2024 · 4 min read

I have been an advocate for serverless technologies for several years now. I've attended meetups and organized events. The benefits of serverless architecture were so convincing back in 2019 that I even wrote a blog post about how serverless will eat the world. In this post, I reflect on what has changed since then and how serverless is doing in 2024.
For those who are not too familiar with the topic, serverless technologies refer to an architectural pattern that favors using higher-level managed services to compose the needed solution. Serverless managed services are usually paid for on a per-request or per-event basis, meaning that functionality that doesn't get used has zero cost.
With that out of the way, I want to add my voice to the recent discussion about the purported serverless decline. I've seen blog posts about teams moving away from AWS Lambda, read news of Amazon Web Services shuttering its serverless advocacy program, and perused articles comparing Lambda to a vanity project. So, was I wrong all along?
Well, yes and no – the benefits do not come without considerations
I'll start with the "no" since it is more straightforward. Serverless still has all the benefits I mentioned in my old blog post. It still eliminates whole classes of errors around operations and state management. You must push state to the edges (the database or the client), and your workload is guaranteed to scale within the limits of your chosen Functions as a Service (FaaS) platform. Updating a single function is possible, potentially making upgrades smaller and safer and increasing delivery velocity. There are significant limitations around performance and cost, but many use cases will run well enough with minimal expense. For example, my team created a billing pipeline that handled invoices worth tens of millions monthly with a running cost of around $20 a month.
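To make the "state at the edges" point concrete, here is a minimal sketch of a stateless Lambda handler in Python. It is not the billing pipeline mentioned above; the table name, fields, and the assumption of an API Gateway proxy event are all hypothetical.

```python
import json
from decimal import Decimal

import boto3

# All state lives in DynamoDB; the function itself keeps nothing
# between invocations. The table name "invoices" is hypothetical.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("invoices")


def handler(event, context):
    # Assumes an API Gateway proxy event with a JSON body.
    # parse_float=Decimal because DynamoDB does not accept Python floats.
    invoice = json.loads(event["body"], parse_float=Decimal)

    # Every fact the function needs arrives in the event;
    # every fact it produces goes straight to the database.
    table.put_item(Item={"id": invoice["id"], "amount": invoice["amount"]})

    return {"statusCode": 201, "body": json.dumps({"stored": invoice["id"]})}
```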
Let's get to the point where I was wrong, then. At the time, the market players were working very hard to push the limits of FaaS services. Given that growing computing power keeps lowering costs, I hoped providers would quickly eliminate all of the significant limitations. Firecracker was showing a lot of promise, and the hope was that it would eliminate many of the problems around cold starts and drive costs down thanks to reduced hardware requirements. This did not end up happening. Cold starts are still a thing, and FaaS pricing has remained mostly the same. You still have to understand all the available options to determine which fits your workload best. FaaS services come with hidden limitations, and tooling still doesn't sufficiently encourage good architecture or warn about bad choices that might end up being costly.
How to know when to go serverless
Serverless solutions are still an excellent choice for many use cases that do not require high frequencies and super-low latencies. However, if those things are required, you can run some semi-serverless options with minimal infrastructure, like AWS Fargate. The point is to avoid infrastructure complexity and focus all of the team's talent on the actual problem – and containers do that well. There's even some innovation around running containers in AWS Lambda, like AWS Lambda Web Adapter, which enables more flexibility for where you deploy your code and how you develop it locally.
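As a sketch of that flexibility: with the AWS Lambda Web Adapter packaged into the function's container image, an ordinary web application can run unchanged on a laptop and on Lambda, with no Lambda-specific handler code. The Flask app and route below are hypothetical, assuming the adapter's default of forwarding requests to a local server on port 8080.

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/invoices/<invoice_id>")
def get_invoice(invoice_id: str):
    # A plain HTTP handler -- no Lambda event or context objects in sight.
    return jsonify({"id": invoice_id, "status": "ok"})


if __name__ == "__main__":
    # Locally this is just `python app.py`; behind the Web Adapter,
    # Lambda invocations are translated into requests to this same server.
    app.run(host="0.0.0.0", port=8080)
```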
There is also hope for FaaS. The problems around the current limitations are by no means insurmountable. There should be more investment in pushing out the current limitations to the point they no longer matter. For example, CPU and memory for a Lambda function currently scale together as a single tunable parameter. Most functions need very little memory, but you need to grab a gigabyte to get the necessary CPU. This leads to overprovisioned memory, wastes capacity on AWS's side, and contributes to the inability to drive AWS's costs down.
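A back-of-the-envelope illustration of what that coupling costs, using an assumed per-GB-second rate (actual rates vary by region and architecture) and made-up invocation numbers:

```python
# Illustrative only: check current AWS Lambda pricing for the real rate.
GB_SECOND_RATE = 0.0000166667  # assumed USD per GB-second


def monthly_compute_cost(memory_mb: int, avg_duration_ms: float, invocations: int) -> float:
    """Lambda compute cost scales with memory x duration x invocations."""
    gb_seconds = (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations
    return gb_seconds * GB_SECOND_RATE


# A function that only needs 128 MB of memory but is bumped to 1024 MB
# purely to get more CPU pays roughly 8x for compute (partly offset
# if the extra CPU actually shortens the duration).
print(monthly_compute_cost(128, 200, 1_000_000))   # ~0.42 USD
print(monthly_compute_cost(1024, 200, 1_000_000))  # ~3.33 USD
```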
Similarly, provisioned concurrency is quite expensive, even though execution environments just waiting to serve should not need as many resources as a running function. I'm just guessing here, but this may have to do with how AWS has decided to enforce security between functions using a virtual machine. That level of security makes sense between functions of different customers but is overkill for a single customer, in a single account, in a single region.
True serverless FaaS is at its best in naturally event-driven and asynchronous scenarios. Many data streaming use cases can greatly benefit from the infrastructure simplicity gained by using serverless functions triggered by streamed events. It's good to keep an eye on the volume, as high-volume streams will be cheaper to run in a constantly running container on a platform such as AWS Fargate. You can also mitigate costs for high-throughput streams with appropriate batching when real-time processing is not required.
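On AWS, that batching is a property of the event source mapping between the stream and the function. A hedged sketch using boto3, with a hypothetical Kinesis stream ARN and function name:

```python
import boto3

lambda_client = boto3.client("lambda")

# Larger batches and a batching window mean fewer, bigger invocations,
# which usually lowers cost when real-time processing is not required.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:eu-west-1:123456789012:stream/invoice-events",
    FunctionName="process-invoice-batch",
    StartingPosition="LATEST",
    BatchSize=500,                      # up to 500 records per invocation
    MaximumBatchingWindowInSeconds=60,  # or wait up to a minute to fill a batch
)
```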
In conclusion, as I did back in 2019, I still find that serverless solutions are an excellent fit for many use cases. However, I'd like to see more progress in tackling the complex performance engineering challenges that serverless platforms face today.
Read more about how we built a fully serverless invoicing data pipeline for Transval.