This method of software development has taken off over the last decade; delivering working prototypes and incremental releases is now the norm. The saying “You snooze, you lose” has never been truer in the IT industry. Look at most high-performing IT companies and you’ll notice that they deploy releases more frequently and with shorter lead times.
Lean Management and Continuous Delivery practices are what create the conditions for engineers to deliver software faster and more sustainably. Today, we’ll look at what that means and how AWS helps.
Continuous Integration is a software engineering practice in which all developers’ working copies are merged to a shared mainline. It is the process of automatically building and testing the code every time a developer commits a change to the version control service (e.g. GitHub). Such a system encourages developers to share their code after completing each task; once that code is pushed to the revision control system, it triggers an automated build system which grabs the latest code from the revision control system and builds and tests it.
Continuous Delivery is the ability to get changes of all types (new features, configuration changes, bug fixes, etc.) into the hands of users safely and in a sustainable way.
Continuous Integration / Continuous Delivery Architecture
When using AWS for CI / CD, some of the resources that you are likely to use are:
- AWS CodePipeline – A continuous delivery service you can use to model, visualize, and automate the steps required to release your software.
- AWS CodeBuild – A fully managed build service that compiles source code, runs unit test cases, and can be used for SonarQube analysis as well.
- AWS CloudFormation – A service that lets you model and provision your infrastructure resources from template files.
- AWS Elastic Container Service (ECS) and Elastic Container Registry (ECR) – Amazon ECR is a managed AWS Docker registry service that customers can use to push, pull, and manage Docker images; it provides a secure, scalable, and reliable registry. Amazon ECS is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS.
In this architecture (steps are numbered in the diagram):
- Once AWS CodePipeline detects a change in the GitHub repository, it grabs the change and sends it to CodeBuild.
- AWS CodeBuild builds a Docker image from the latest code it received from CodePipeline. The image is tagged and pushed to Amazon ECR.
- AWS CodePipeline initiates an update of the Docker image through AWS CloudFormation. AWS CloudFormation in turn defines the task definition and the service.
- CloudFormation then revises the task definition in ECS to reference the new Docker image in ECR, and updates the ECS service.
- ECS fetches the new container image and replaces the old containers.
AWS CodePipeline is the infrastructure that creates the environment for all the other services to work together and carry out continuous deployment. It automates fetching changes from the code base, triggering builds, and the testing and deployment process.
An action is a single unit of execution: a code build, a test-suite run, or a deployment step. Other AWS services can be added as actions; in this architecture, we added AWS CodeBuild and the AWS CloudFormation templates as actions. Actions can be executed sequentially (one after the other) or in parallel.
A stage is a collection of actions, and can be considered one level of the continuous deployment process. In our simple pipeline, the Source stage is responsible for triggering the pipeline when there is a source repository change and for getting all the resources the Build stage needs. The Build stage is responsible for building the project, running tests, verifying the code, and producing a deployable artifact which can be used in the deployment stage.
The Deploy stage is responsible for deploying the artifact produced by the Build stage to the different environments. The transition between stages takes place only when all the actions in the preceding stage have succeeded.
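The stage-and-action structure above can itself be expressed as a CloudFormation fragment. The following is an illustrative sketch of a three-stage pipeline; the names (my-org, my-repo, the stack name) and the referenced IAM roles, S3 bucket, and CodeBuild project are placeholders assumed to be defined elsewhere in the template.

```yaml
Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn      # IAM role defined elsewhere
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket        # S3 bucket defined elsewhere
      Stages:
        - Name: Source
          Actions:
            - Name: GitHubSource
              ActionTypeId: {Category: Source, Owner: ThirdParty, Provider: GitHub, Version: '1'}
              OutputArtifacts: [{Name: SourceOutput}]
              Configuration:
                Owner: my-org                # hypothetical GitHub organization
                Repo: my-repo                # hypothetical repository
                Branch: main
                OAuthToken: '{{resolve:secretsmanager:github-token}}'
        - Name: Build
          Actions:
            - Name: DockerBuild
              ActionTypeId: {Category: Build, Owner: AWS, Provider: CodeBuild, Version: '1'}
              InputArtifacts: [{Name: SourceOutput}]
              OutputArtifacts: [{Name: BuildOutput}]
              Configuration:
                ProjectName: !Ref BuildProject   # CodeBuild project defined elsewhere
        - Name: Deploy
          Actions:
            - Name: UpdateStack
              ActionTypeId: {Category: Deploy, Owner: AWS, Provider: CloudFormation, Version: '1'}
              InputArtifacts: [{Name: BuildOutput}]
              Configuration:
                ActionMode: CREATE_UPDATE
                StackName: my-app-ecs-stack      # hypothetical stack name
                TemplatePath: BuildOutput::service.yml
                RoleArn: !GetAtt CfnDeployRole.Arn   # IAM role defined elsewhere
```

Each item under `Stages` is one stage, and each item under a stage's `Actions` is one action; actions listed in the same stage with the same run order execute in parallel.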
AWS CodeBuild is an on-demand service (you only pay for what you use). It comes with a number of pre-built build environments, such as Docker, Java, Python, .NET Core, and Node.js, and the user can select the build environment they want.
The service scales as the number of running builds increases, meaning that engineers do not have to worry about CD-server overload or scaling the server as the number of projects grows.
In this architecture, this step builds the Docker image of the latest version and pushes it to the Elastic Container Registry (ECR). In addition, we can use this step to run unit tests and regression tests, check code quality, verify application security, and much more.
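CodeBuild reads its build commands from a buildspec file in the repository. A minimal sketch of a buildspec for this step might look like the following, assuming a placeholder account ID, region, and repository name:

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate the Docker client against ECR
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      # Build and tag the image; CODEBUILD_RESOLVED_SOURCE_VERSION is the commit ID
      - docker build -t my-app:$CODEBUILD_RESOLVED_SOURCE_VERSION .
      - docker tag my-app:$CODEBUILD_RESOLVED_SOURCE_VERSION 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
  post_build:
    commands:
      # Push the freshly built image to the registry
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```

Tagging the image with the commit ID as well as `latest` makes it easy to trace any running container back to the exact source revision that produced it.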
AWS CloudFormation provides a common language to describe and provision all your infrastructure resources in an automated and secure manner. It allows us to define the project's deployment infrastructure in a JSON or YAML file called a CloudFormation template (CFT).
This makes it easy to configure the infrastructure, troubleshoot issues, and recreate the same deployment setup. Almost all AWS services can be configured using CFTs. You just write the CFT and upload it to the CloudFormation service, and it automatically acquires the necessary resources and builds the deployment infrastructure.
In this architecture, we use the CloudFormation service mainly to generate and update the stack and the task definition associated with the AWS Elastic Container Service (ECS). We can use CFTs to define the CodePipeline and CodeBuild projects as well.
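To make the idea concrete, here is a minimal complete CFT; it declares a single ECR repository for the pipeline's images. The repository name is a hypothetical placeholder:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: ECR repository for the pipeline's Docker images
Resources:
  AppRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: my-app       # hypothetical name
Outputs:
  RepositoryUri:
    # The URI that docker tag/push commands and task definitions reference
    Value: !GetAtt AppRepository.RepositoryUri
```

Uploading this template creates the repository, and deleting the stack tears it down again, which is exactly what makes a deployment setup reproducible.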
AWS Elastic Container Service (ECS) and Elastic Container Registry (ECR)
ECS is a highly scalable, high-performance container orchestration service that supports Docker containers and allows a user to easily run and scale containerized applications on AWS. The service takes the Docker image as input and takes care of the entire deployment (according to the deployment process we define), enabling zero-downtime releases.
The service is also responsible for the high availability of the application, efficient resource allocation and making sure everything runs smoothly.
Elastic Container Registry (ECR) is a Docker registry hosted by AWS. We build and push a Docker image in the CodeBuild step, after which we define the task definition in the form of a CFT, providing the image tag in the task definition. A second CFT defines the service, and in its configuration we reference the task definitions to run.
A service is the unit used to configure the deployment architecture. Within a service, we can define deployment-related parameters such as the number of instances, placement strategies, and load balancers.
When we run the task-definition and service CFTs through the CloudFormation service, it generates tasks from the task definition and the service from the service CFT. The service is deployed into a cluster, which is a collection of compute resources; the service uses the cluster's resources to run the application.
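A sketch of the task-definition and service pieces of that deploy template might look as follows. The image URI, family, and cluster name are placeholders; the deployment-configuration values show one common way to get the zero-downtime behavior described above:

```yaml
Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-app                 # hypothetical family name
      ContainerDefinitions:
        - Name: app
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
          Memory: 512
          PortMappings:
            - ContainerPort: 80
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: my-cluster            # hypothetical existing cluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 2
      DeploymentConfiguration:
        MinimumHealthyPercent: 100   # keep all old tasks running
        MaximumPercent: 200          # while the new tasks start up
```

With these settings, ECS starts the new tasks alongside the old ones and only drains the old containers once their replacements are healthy.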
These are only a few of the ways in which AWS helps teams maintain fast delivery in a sustainable way.