Rich Buckley

Vice President Consulting Expert

A tech news story hit the headlines at the beginning of May following the publication of an article by the Amazon Prime Video team. The headline, a 90% cost reduction achieved by scrapping serverless in favour of a monolith, made for an interesting read. As a big fan of the serverless approach, and having used it successfully on a few projects, I wanted to articulate my thoughts.

Serverless, for me, is a Cloud solution for application deployment that does not require you to think about the underlying hardware. I have worked with applications running on co-located hardware: with just a modest server and storage estate, you might expect a disk failure, cache battery failure or firmware update every couple of weeks that needs to be managed. The benefit of not having to think about hardware failures when using Cloud virtualisation is very real. Serverless removes the worry of managing even the virtual servers: you pay for a unit of execution and generally pay nothing when idle.

AWS Lambda is a well-understood serverless pattern. Small units of code execute in response to a trigger. They are short-lived and can process an API (Application Programming Interface) call, process a message, or respond to an event such as a file being created by another process. AWS also provides Fargate, a serverless compute platform that can run containers without you needing to curate your own swarm of worker compute nodes.
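To make the trigger model concrete, here is a minimal sketch of a Lambda handler responding to an S3 "object created" event. The event shape follows AWS's documented S3 notification format, but the bucket and key names are illustrative only, and the handler is invoked locally here rather than deployed.

```python
import json

def handler(event, context):
    """Minimal Lambda handler: react to an S3 object-created notification.

    Real code would fetch and process each object; this sketch just
    records which objects the event referred to.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}

# Local invocation with a sample event (the context argument is unused)
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "example-bucket"},
                "object": {"key": "uploads/report.csv"}}}
    ]
}
result = handler(sample_event, None)
print(result["statusCode"])  # 200
```

The short-lived, stateless shape of the function is the point: everything it needs arrives in the event, and nothing persists between invocations.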

Back to the published article and the claim that serverless is more expensive than a monolith. That surely depends on what your workload is and what you are optimising for. With "Cost Optimisation" being a pillar in the AWS Well Architected Framework, service decomposition and inter-service communications should always be viewed through a cost lens.

The Prime Video case is data- and compute-heavy. It makes sense (in hindsight) to convert a stream once and share the resulting frames in memory with one or more other processes; a frame can be disposed of once processing is complete, with no need to persist that data long term. I wonder whether this new application architecture could have been arrived at without first going through the microservice/serverless route. It would appear the optimisation was a result of applying the DevOps feedback/plan cycle, having identified the scaling bottlenecks and cost drivers. The first implementation was probably optimised for speed of delivery, and may also have met the scaling and performance budget criteria at the time.

I have successfully used Lambda on a task that occurred infrequently (normally once a day) and involved processing files that each contained many (tens to hundreds of) independent data streams. It was possible to split (via a Lambda) the original files into a smaller file per data stream, and to cascade the processing of each stream by a Lambda execution triggered by the smaller file's creation. Structured naming conventions allowed the final results to be served directly to a web browser with no further processing. The solution was quick, self-scaling, fault-tolerant and inexpensive to run. The equivalent monolith would have required thread orchestration and a permanently running process checking for new work.
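The fan-out pattern above can be sketched locally. In this sketch the S3 triggers are simulated with plain function calls: a "splitter" function groups a multi-stream file into one batch per stream, and a "processor" function runs once per batch, emitting a structured result key. All names and the comma-separated file format are my own illustration, not the original project's.

```python
def split_streams(raw: str) -> dict[str, list[str]]:
    """Splitter Lambda body: group lines by their stream-id prefix."""
    streams: dict[str, list[str]] = {}
    for line in raw.strip().splitlines():
        stream_id, _, payload = line.partition(",")
        streams.setdefault(stream_id, []).append(payload)
    return streams

def process_stream(stream_id: str, payload: list[str]) -> str:
    """Processor Lambda body: runs once per per-stream file.

    Returning a predictable result key mirrors the structured naming
    convention that let results be served directly to a browser.
    """
    return f"results/{stream_id}.json"  # illustrative output key

def on_file_created(raw: str) -> list[str]:
    """Simulates the trigger chain: split once, then one processor per stream."""
    streams = split_streams(raw)
    return [process_stream(sid, lines) for sid, lines in streams.items()]

result_keys = on_file_created("a,1\nb,2\na,3\nc,4")
print(sorted(result_keys))  # ['results/a.json', 'results/b.json', 'results/c.json']
```

In the real deployment each per-stream file creation is an independent S3 event, so the processors scale out automatically and a failure in one stream does not block the others.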

I have also used Lambda to handle API calls from a business web application. The benefits here are scalability in the case of high usage, and zero run cost when the application is not in use. The data ingestion in this application is low bandwidth so there is little need for long running application processes.
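An API-handling Lambda of this kind can be as small as the sketch below, which follows the documented API Gateway proxy-integration contract (an event carrying the HTTP method and path, a response carrying a status code and body). The route and payload are illustrative, not from the application described above.

```python
import json

def api_handler(event, context):
    """Minimal Lambda handler behind an API Gateway proxy integration."""
    if event.get("httpMethod") == "GET" and event.get("path") == "/health":
        return {"statusCode": 200, "body": json.dumps({"status": "ok"})}
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

resp = api_handler({"httpMethod": "GET", "path": "/health"}, None)
print(resp["statusCode"])  # 200
```

Because each request is an independent invocation, there is no process to keep warm when the application is idle, which is exactly where the zero-run-cost benefit comes from.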

In conclusion, the 90% cost saving cited by the Amazon Prime Video team is based on their workload and cannot be applied generally. There are use cases where serverless constructs are the right tool for the job. Plan for what you know, and identify what you are optimising for. Robust test automation will allow units of code to be reused across a variety of target models. Deliver something end-to-end, monitor it, and use the feedback to plan and influence future iterations. Total cost of ownership should also be considered: serverless has advantages in deployment speed, scalability, security, patching, resilience and support, all of which many of our clients cite as concerns.

  • To find out more about how CGI can help you identify opportunities for digital transformation optimisation, contact Rich Buckley.

About this author

Rich Buckley

Vice President Consulting Expert

Rich joined CGI in 2020 and worked initially as the solution architect and technical lead on the NERIMNET project for BEIS. This was a transformation programme relating to the monitoring of nuclear incidents and response management. More recently, Rich has been working with the Office ...