In a previous blog post, we discussed the benefits of the Serverless computing model. AWS Lambda is a platform built on this model: we just write the business logic and don't have to worry about provisioning or managing servers.
In this article, we will discuss the benefits of using AWS Lambda, how we use it at Clappia, and some of the challenges with Lambda and how we overcame them.
Cost is one of the major benefits of the Serverless model. With AWS Lambda, we pay only for the resources consumed while serving our requests. Since there are no dedicated servers, there are no upfront costs. If our application doesn't receive a single request in a day, we incur no Lambda charges for that day.
In addition, AWS Lambda provides a free tier in which the first one million requests every month are free. This helped us save a lot of cost in the early days of our setup.
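As a rough illustration of the pay-per-use model, here is a sketch of a monthly cost estimate. The rates and free-tier limits below are assumptions based on AWS's published x86 pricing at the time of writing; check the AWS pricing page for current values.

```python
# Illustrative Lambda cost estimate. Rates and free-tier numbers are
# assumptions based on AWS's published pricing; verify before relying on them.
PRICE_PER_REQUEST = 0.20 / 1_000_000      # USD per request
PRICE_PER_GB_SECOND = 0.0000166667        # USD per GB-second
FREE_REQUESTS = 1_000_000                 # monthly free tier (requests)
FREE_GB_SECONDS = 400_000                 # monthly free tier (compute)

def monthly_cost(requests, avg_duration_ms, memory_mb):
    """Estimate the monthly Lambda bill for a given workload."""
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

# A low-traffic app can run entirely within the free tier:
print(monthly_cost(500_000, 100, 512))   # 0.0
```

Note how the bill is zero until the workload exceeds the free tier, which matches our experience in the early days.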
With AWS Lambda, a lot of maintenance hassles simply go away. We don't have to deal with issues like excessive CPU utilisation, out-of-memory errors, or server and process restarts. Developers can focus solely on solving business problems, and Lambda takes care of the rest.
As with any other Serverless platform, AWS Lambda is inherently scalable. Once we deploy our Lambda functions, we won't have to revisit them even if our traffic increases a thousandfold.
The beauty of AWS Lambda lies in its ability to integrate with a lot of other AWS Services. At Clappia, we use AWS Lambda with services like AWS API Gateway, Step Functions, SNS, S3, DynamoDB Streams etc.
API Gateway is a service provided by AWS that lets us deploy secure APIs. It acts as a gateway between client-facing applications and the backend logic implemented in AWS Lambda.
At Clappia, we follow a Microservices architecture with separate services for managing different entities - Users, Apps, Workplaces, Workflows, Notifications etc. Each of these services is implemented as a set of AWS Lambda functions and exposed to the front-end clients through API Gateway.
All the APIs hosted on API Gateway are secured using Lambda-based Authorizers that allow only authenticated users to access these APIs according to their permissions in the system.
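A minimal sketch of such a token-based Lambda authorizer, returning the IAM policy document that API Gateway expects. The `verify_token` helper is hypothetical, standing in for real token validation (e.g. checking a JWT signature):

```python
def verify_token(token):
    # Hypothetical placeholder: a real implementation would validate a
    # JWT or session token and return the caller's identity, or None.
    return {"user_id": "user-123"} if token == "valid-token" else None

def handler(event, context):
    """Authorizer Lambda invoked by API Gateway before the backing API."""
    user = verify_token(event.get("authorizationToken", ""))
    effect = "Allow" if user else "Deny"
    # API Gateway expects a principal ID plus an IAM policy document.
    return {
        "principalId": user["user_id"] if user else "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```

API Gateway caches the returned policy for a configurable period, so the authorizer doesn't have to run on every single request.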
AWS Step Functions is used as an orchestrator to combine multiple Lambda functions to achieve a complex use case. It supports functionalities like condition-based branching, waiting, error handling and parallel execution of functions.
We use Step Functions to power the Clappia Workflows. All Clappia Apps can have multiple complex workflows which can involve actions like sending emails, mobile notifications, SMS, WhatsApp messages, integrating with external APIs and databases, sending data to other Clappia Apps and IF/ELSE logic. We translate these user-defined Workflows and generate a state machine in Step Functions. Know more about Clappia Workflows here.
Step Functions gives us the capability to add many more similar Workflow actions in no time. We just need to implement their corresponding Lambda functions and Step Functions will take care of including them in the orchestration.
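As an illustration of what such a translation might produce, here is a sketch of an Amazon States Language definition for a tiny workflow with one IF/ELSE branch and one email action. The state names, field names and Lambda ARN are hypothetical, not Clappia's actual generated output:

```python
import json

# Hypothetical state machine: branch on a submission field, then invoke
# a dedicated "send email" Lambda. Names and ARN are illustrative only.
definition = {
    "StartAt": "CheckStatus",
    "States": {
        "CheckStatus": {
            "Type": "Choice",
            "Choices": [{
                "Variable": "$.status",
                "StringEquals": "Approved",
                "Next": "SendEmail",
            }],
            "Default": "Done",
        },
        "SendEmail": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:send-email",
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},
    },
}

print(json.dumps(definition, indent=2))
```

Adding a new workflow action then amounts to implementing its Lambda function and emitting one more `Task` state in the generated definition.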
We have some use cases involving tasks that can run asynchronously, without making the end user wait for them to complete. For example, when an App admin assigns a Clappia App to another user, there is a non-critical task of sending an email to that user. When a user makes a submission in a Clappia App, we need to execute the Workflow associated with that App, but this execution can happen asynchronously.
For such use cases, we use SNS. SNS uses the Pub-Sub messaging mechanism to send messages between different Lambda functions. So when an App is assigned to a user, the first Lambda function updates some entries in a database to reflect the permission changes, publishes an SNS message, and returns a response to the user. The SNS message gets consumed by a second Lambda function that sends out the email. With this approach, we reduce the latencies of user-facing business logic by off-loading portions of the logic to other Lambda functions.
After serving a request, a Lambda execution container becomes inactive if it doesn't receive more requests for some time. A request arriving after that hits the Cold Start problem: the container has to be provisioned again and the deployment package loaded into it before the function can execute. As a result, we notice latency spikes for requests that arrive after a period of inactivity.
We followed a couple of approaches to mitigate this problem.
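One common mitigation (a sketch of the general technique, not necessarily our exact implementation) is to invoke the function on a schedule, e.g. via a CloudWatch scheduled event carrying a marker field, so that a warm container stays available. The `warmup` field name here is our own convention, not an AWS one:

```python
def handler(event, context):
    # Scheduled "warmer" invocations carry a marker field and return
    # immediately, keeping a container warm without running real logic.
    if event.get("warmup"):
        return {"warmed": True}
    # ... real business logic for genuine requests ...
    return {"statusCode": 200}
```

The early return keeps warm-up invocations cheap, since billing is by execution duration.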
AWS Lambda provides no built-in way to cache the responses of API calls we make from our functions. So if a Lambda function calls an API to look up a user's name by email address, that call is made on every invocation, even though the user's name rarely changes. This increases both the latency of the Lambda function and the cost from the extra invocations of the dependency API.
We mitigate this problem in two ways -
Once we implement our business logic in AWS Lambda and integrate it with other AWS services, there is a risk of being locked into AWS indefinitely, unable to try out products from other cloud providers that might suit us better.
We address this concern by keeping the business logic decoupled from the Lambda handler. That way, if we ever decide to move away from Lambda, the effort will be smaller because the business logic itself won't need to change.
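A sketch of this separation (the function names are our own illustration): the business logic is plain code with no Lambda-specific types, and the handler is a thin adapter that only translates the event shape.

```python
# Business logic: plain Python, no Lambda-specific imports or types.
def assign_app(app_id, user_email):
    """Core logic that could run on any platform."""
    # ... update permissions, queue the notification, etc. ...
    return {"app_id": app_id, "assigned_to": user_email}

# Thin adapter: the only code that knows about Lambda's event shape.
# Porting to another platform means rewriting only this function.
def lambda_handler(event, context):
    result = assign_app(event["app_id"], event["user_email"])
    return {"statusCode": 200, "body": result}
```

The same `assign_app` function can also be unit-tested directly, without constructing Lambda events.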
We also use the Serverless framework, which allows us to keep separate configuration files for different cloud providers while sharing common handlers for the business logic.
Sarthak Jain, Co-founder & CTO of Clappia
He can be reached at firstname.lastname@example.org.
We are building a revolutionary No Code platform on which creating complex business process apps is as easy as working with Excel sheets. Visit www.clappia.com to give it a try.