There was an era in which we all fussed about cloud computing; right now, however, the hype is mainly about serverless computing. In this article, I will give you a brief introduction to serverless computing and share my experience working with some of the serverless technologies my team and I used to develop enterprise solutions.
My list of topics is as follows; each will include a quick introduction to the technology used, along with some web links we referred to when integrating these services into our final solution.
- Serverless Computing
- Architecture & AWS Services
- Lambda Functions for Microservices and BFF
- API Gateway
- Cognito for User Federation
- ECS Fargate for Long Running Tasks
- AWS CodePipeline & CodeBuild for CI/CD
- Other Services
Before diving into the concept of serverless computing itself, let’s move back in time and investigate how serverless evolved.
As illustrated in the above diagram, over the years we have evolved from physical servers to VMs to the cloud, each with its own pros and cons. In the case of serverless computing, some of its key benefits are as follows:
- Fully Managed by the service provider: no provisioning, zero administration, high availability
- Developer productivity: code focused, reduces time to market
- Continuous scaling: automatic scaling, ability to scale up and down
If you're a developer, you will see a huge benefit in switching to serverless: you no longer have to manage the servers your applications are deployed on, since the service provider manages them for you (no server administration, no patching).
However, like all technologies, a serverless stack comes with its own concerns. To overcome these drawbacks, you need to choose your serverless tech stack very wisely to suit your application design; if not, you may end up with an unusable solution.
My experience comes from implementing a 100% serverless solution in the AWS ecosystem; there are many service providers offering serverless stacks, but this article speaks purely about AWS services.
Before elaborating on the technical aspects of the solution, I will first give you a brief introduction to the business requirements this solution was expected to meet.
After analyzing the requirements, we identified the following components that the solution would need:
- A web frontend for end users to be able to interact with and manage the rebate program
- A rebate agreement management service (API) to track the rebate details between the customers and the service provider
- A rebate calculation service (API) to view calculated rebates
- Daily jobs that should run to calculate rebates
With the above requirements in mind, we researched AWS services we could utilize to satisfy the requirements and match each of the components listed above. Based on this, we came up with the architecture below.
Architecture & AWS Services
We also considered the following non-functional requirements when selecting AWS services:
- This application is not a mission-critical system
- The expected transactions per second are limited
- The daily rebate calculation job runs for only a few hours
In this article I will concentrate on a few key AWS services out of those listed in the diagram. Let's start with Lambda functions.
Lambda Functions for Microservices and BFF
Considering the above requirements, we planned to use Lambda functions for the two services (the Agreement and Rebate Calculation APIs) and the BFF (Backend for Frontend), which acts as a proxy between the static web pages and the backend APIs.
When implementing the APIs as Lambda functions, we chose JavaScript as the development language, mainly to mitigate the cold-start issue in Lambda functions. When comparing the cold-start times of JavaScript and Java, there is a very noticeable difference.
In addition, we used Node.js for the API implementation and the Serverless Framework for build and deployment automation. A key benefit of the Serverless Framework is that you have the luxury of local invocation, which many other deployment tools lack. Implementing the Node.js services as Express applications is a good strategy to consider, because at any given time you can deploy the same application on a managed server with minimal code changes. Considering the usage pattern and load, running the two services and the BFF as Lambda functions also contributes a huge cost benefit. Since we are using Lambda functions to deploy APIs, it is mandatory to expose these APIs through the AWS API Gateway. We can now jump in and examine the AWS API Gateway.
The reason it's mandatory to expose our APIs through the API Gateway is that it functions as the event source for our Lambda functions. A Lambda function does not run continuously when not in use, so something has to listen for incoming requests to our API and invoke the function to serve them. One point to note is that, by default, the API Gateway exposes its APIs as public endpoints. You can configure a private API Gateway instead at any time, but keep in mind that doing so involves some additional implementation work.
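With the Serverless Framework, this Lambda-to-API-Gateway wiring is declared in `serverless.yml`. The fragment below is a hypothetical sketch (service, handler, and region names are placeholders), showing how an `http` event creates the API Gateway endpoint that triggers the function:

```yaml
# Hypothetical serverless.yml fragment (names are illustrative).
service: agreement-api

provider:
  name: aws
  runtime: nodejs14.x
  region: us-east-1

functions:
  agreements:
    handler: handler.handler
    events:
      - http:            # creates the API Gateway endpoint that invokes the function
          path: agreements
          method: get
```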
Cognito for User Federation
Since most enterprise applications are bound to an identity provider (e.g., Azure AD) for user sign-in, AWS provides a ready-to-use service to handle this. Using Cognito in our application saved us a lot of coding time that we would otherwise have spent implementing a user federation module. You only need to implement the authentication part in your frontend (BFF) and configure the API Gateway to validate the token with Cognito for each incoming request. The Cognito user pool carries out the validation with the IdP, so we didn't have to be concerned about it.
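In the Serverless Framework, pointing an API Gateway endpoint at a Cognito user pool authorizer is a small configuration change. The fragment below is a hedged sketch; the function name, path, and user pool ARN are placeholders:

```yaml
# Hypothetical fragment: protecting an endpoint with a Cognito user pool
# authorizer. The ARN is a placeholder, not a real user pool.
functions:
  bff:
    handler: bff.handler
    events:
      - http:
          path: api/{proxy+}
          method: any
          authorizer:
            arn: arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_XXXXXXXXX
```

With this in place, API Gateway rejects requests that lack a valid token before they ever reach the Lambda function.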
ECS Fargate for Long Running Tasks
AWS Fargate is a technology for deploying and managing containers, which frees you from having to manage any of the underlying infrastructure. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers.
When a task is started on a container instance, either manually or as part of a service, it can pass through several states before it finishes on its own or is stopped manually. Some tasks are meant to run as batch jobs that naturally progress from PENDING, to RUNNING, to STOPPED. Other tasks, which can be part of a service, are meant to continue running indefinitely, or to be scaled up and down as needed.
Points to note:
- Any number of tasks can run in parallel, and you do not need to worry about provisioning resources for them.
- Each task has its own run-time environment.
- In our use case, we used Fargate to run the daily rebate calculations, which require processing millions of records. By running multiple Fargate tasks, we were able to complete these computation-heavy rebate calculations in around 20 minutes, which in turn results in significant infrastructure cost savings. The diagram below shows the process of ECS Fargate task execution.
However, keep in mind that Fargate is not cost effective if you run your tasks for long hours. It is most cost efficient for high-computation tasks that run for less than an hour.
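As a sketch of how the parallel daily job could be kicked off, the function below builds the parameters for an ECS `runTask` call. The cluster, task definition, and subnet names are placeholder assumptions; the actual SDK call is shown only in a comment, since it requires AWS credentials.

```javascript
// Sketch of launching parallel Fargate tasks for the daily rebate
// calculation. Cluster, task definition, and subnet IDs are placeholders.
function buildRunTaskParams(taskCount) {
  return {
    cluster: 'rebate-cluster',
    taskDefinition: 'rebate-calculation:1',
    count: taskCount,        // run several identical tasks in parallel
    launchType: 'FARGATE',   // no EC2 instances to provision
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ['subnet-0abc1234'],
        assignPublicIp: 'DISABLED',
      },
    },
  };
}

// With the AWS SDK for JavaScript (v2), this would be submitted as:
//   const AWS = require('aws-sdk');
//   new AWS.ECS().runTask(buildRunTaskParams(4)).promise();
```

Because each task gets its own runtime environment, scaling the calculation out is mostly a matter of raising `count` and partitioning the records among tasks.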
Next, let's look at AWS CodePipeline and how it covers the CI/CD flow of your application.
AWS CodePipeline & CodeBuild for CI/CD
Any enterprise solution requires a proper pipeline for the application delivery process, and AWS CodePipeline and CodeBuild fit brilliantly with serverless applications deployed on the AWS stack as Lambda functions. Since we used the Serverless Framework, we were able to configure our pipeline to deploy the application to multiple stages within a single pipeline without much effort. Some of the key aspects we considered when configuring the pipeline are:
- During the pre-build phase, ensure unit tests were run and nothing failed
- After deploying the source to any stage, ensure integration tests have passed
- Code quality checks are met before deploying to any stage
- Run the regression tests after the QA release
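The first two of those gates naturally live in the CodeBuild stage. The `buildspec.yml` sketch below is a hypothetical example (script names and the `$STAGE` variable are assumptions) showing how a failing test or lint command stops the build before any deployment happens:

```yaml
# Hypothetical buildspec.yml: any failing command aborts the build,
# so nothing deploys unless tests and quality checks pass.
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 14
  pre_build:
    commands:
      - npm ci
      - npm test        # unit tests must pass before the build proceeds
      - npm run lint    # code quality gate
  build:
    commands:
      - npx serverless deploy --stage $STAGE
```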
When configuring the pipeline, we followed this detailed step-by-step article. If you're using the same stack, feel free to follow it; it works like a charm.
Other Services

| Service | Description |
| --- | --- |
| S3 | Hosting the static web pages (React.js) |
| CloudFront | CDN service which binds with S3 and Certificate Manager to enable SSL for your application |
| Route 53 | DNS service which binds with CloudFront to expose the application in the cloud |
| KMS & Parameter Store | Key store and application parameter store to keep your parameters secure |
| CloudWatch / Dashboard | Service that tracks the logs and events of your services |
| Aurora DB | RDS service provided by AWS as a managed service |
We hope that through this article you were able to gain some insight into the 100% serverless approach and that you now feel confident about how and when to use these services. If you're thinking about moving to a serverless stack, feel free to browse the links provided in each section in detail and integrate those services into your unique use case.
Article by: Udara Wijeratne — Senior Technical Lead at Sysco LABS