CI/CD Processes and Tools for AWS Elastic Beanstalk

May 17, 2023 / Ibexlabs

CI/CD processes allow developers to worry less about infrastructure and deployment, and more about developing robust and reliable apps. Developers can be more involved in monitoring how a production environment behaves when new code is introduced. Quality assurance becomes a part of the development process too.

The introduction of a CI/CD pipeline to a development workflow also means updates are no longer stressful. There is no need to bring the entire app down just to update a small portion of it. Services like AWS Elastic Beanstalk make integrating pipelines with services like EC2 and S3 even easier, since Elastic Beanstalk handles most of the heavy lifting for you.

Designing Good CI/CD Processes

Before we get to automated deployment to Elastic Beanstalk, however, it is important to refine the CI/CD pipeline for the cloud environment. Elastic Beanstalk is designed from the ground up to be robust and scalable, so you have to adapt your pipeline to fully leverage the advantages offered by the cloud platform.

For starters, standardize the development and deployment environments. A pipeline run must not be allowed to modify the environment it runs in; otherwise, subsequent runs inherit those changes and behave unpredictably. Instead, each run should start from a clean, reproducible environment.

Quality assurance and security checks need to be part of the process. For the pipeline to have shorter cycles while remaining reliable, these tasks need to be embedded into the workflow. You can keep your cloud environment safe by integrating security into development.

Last but certainly not least, make sure reviews and checks are performed before deployment. Code needs to be checked against known vulnerability and malware databases to make sure that bad lines of code never reach the production environment.

That last part is important. You can configure Elastic Beanstalk to expect certain flags or configuration entries before deploying a new update. As for integrating the pipeline itself, the process is fairly easy with the tools that are now available.
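For example, such configuration entries typically live in an .ebextensions folder inside the application source bundle. Below is a minimal sketch; the file name, environment variable, and deployment settings are illustrative assumptions, not required values:

# .ebextensions/deploy.config -- hypothetical example settings
option_settings:
  # An environment variable the app can check before serving traffic
  aws:elasticbeanstalk:application:environment:
    DEPLOY_STAGE: production
  # Roll the update out to 30% of instances at a time instead of all at once
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Rolling
    BatchSizeType: Percentage
    BatchSize: "30"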

CircleCI as a Bridge

CircleCI is a very popular tool for managing CI/CD pipelines. The tool is designed to work seamlessly with various Amazon services, including AWS Elastic Beanstalk. Integrating Elastic Beanstalk is easy when done through CircleCI.

You start by creating the environment needed for your CI/CD pipeline. You can separate staging and production, have an extra cloud instance for development, and configure the pipeline in any way you like. Make sure you create config files that match your development and deployment cycles.

You then need to create a new role in the IAM console, specifically for CircleCI to access your cloud environment. Add the corresponding AWS credentials to your CircleCI project, and you are all set.

Deploying to Elastic Beanstalk (considering you already have a config file ready to go) is as simple as running the eb deploy command. Everything else happens automatically. You can use the --profile flag to specify a specific credentials profile.
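From a shell, that looks something like the sketch below (the profile name circleci is a hypothetical entry in your AWS credentials file):

# Deploy the current application version to the environment named in
# .elasticbeanstalk/config.yml
eb deploy

# The same deployment, authenticating with a named credentials profile
eb deploy --profile circleci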

Leveraging AWS CodePipeline for CI/CD

AWS CodePipeline is an Amazon Web Services tool that helps you build, test, and deploy code in continuous iterations every time there is a code change, based on the release process models that you define. Using CodePipeline allows you to orchestrate each step in your deployment release process. As part of your setup, you can also plug many other AWS services into CodePipeline to complete your software delivery pipeline, including AWS Elastic Beanstalk.

This involves first creating the pipeline using AWS CodePipeline, then leveraging a GitHub account, an Amazon Simple Storage Service (S3) bucket, or an AWS CodeCommit repository as the source location for the sample app’s code. AWS Elastic Beanstalk acts as the deployment target for the sample app. Your fully completed pipeline, using these tools in sync, is able to detect any changes made to the source repository containing the app code and then automatically update your live app to match.

Rapid Deployment with Jenkins

Another popular tool among developers is Jenkins. Jenkins is known for its support for even the most complex architectures, plus it works really well with AWS services through plugins and add-ons.

In the case of Elastic Beanstalk, you don’t have to worry about compatibility and reliability issues. The AWS Elastic Beanstalk Deployment plugin, which works with Jenkins 2.6x and up, simplifies the packaging and deployment of new applications to the Elastic Beanstalk environment.

The plugin lets you start the process by clicking on ‘Deploy into AWS Elastic Beanstalk’; yes, it’s THAT simple. You can still configure how you want the deployment job to be performed, but everything else is completely automated.

Both new jobs and updates are handled with simple commands. You can build an archive and customize your build, then let the plugin handle the rest. The same is true for environment updates, since you only need to call the CreateConfigurationTemplate API to capture the existing configuration.
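From the AWS CLI, capturing and reusing an environment’s configuration might look like the following sketch (the application, template, and environment names are hypothetical):

# Capture a running environment's settings as a reusable configuration template
aws elasticbeanstalk create-configuration-template \
    --application-name my-app \
    --template-name prod-config \
    --environment-id e-abcd1234

# Apply the captured template during a later environment update
aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --template-name prod-config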

Now let’s see these processes in action.

Deployment Processes for Elastic Beanstalk

Start by Setting Up a CircleCI Pipeline

To begin with a CircleCI pipeline, head over to the CircleCI website and choose the ‘Log in with GitHub’ option (you will need to authorize CircleCI through GitHub).

Now, click ‘Settings and Projects’ to locate your GitHub repositories.

Choose ‘Environment Variables’ and insert your AWS IAM access keys (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). These will be necessary later on.

Now, choose ‘Add Projects’ on the sidebar, locate your GitHub repository, and click ‘Set Up Project’.

A screen showing you how to set your project up will then pop up. The most critical item on the page is “Create a folder named .circleci and add a file config.yml”.

If you look in the GitHub repository, you can see that I have already created this file:

version: 2
jobs:
  test:
    working_directory: ~/app
    docker:
      - image: circleci/node:latest
    steps:
      - checkout
      - run:
          name: Update npm
          command: 'sudo npm install -g npm@latest'
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install npm dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules
      - run:
          name: Run tests
          command: 'npm run test'
  deploy-aws:
    working_directory: ~/app
    docker:
      - image: circleci/python:latest
    steps:
      - checkout
      - run:
          name: Installing deployment dependencies
          working_directory: /
          command: 'sudo pip install awsebcli --upgrade'
      - run:
          name: Deploying application to Elastic Beanstalk
          command: eb deploy
workflows:
  version: 2
  build-test-and-deploy:
    jobs:
      - test
      - deploy-aws:
          requires:
            - test

The process of this build’s execution is as follows:

  1. A Node Docker container runs the initial task, ‘test’.
  2. If they exist in the cache, the node_modules are restored; otherwise, they are installed.
  3. Tests are run.
  4. At this point, Docker layer caching is leveraged to speed up the performance of the image building.
  5. The Elastic Beanstalk CLI tool is installed.
  6. The app is deployed to Elastic Beanstalk with eb deploy. This command works because our policy has authenticated us in CircleCI with the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

So, how does eb deploy work out where to deploy? To choose a location, you will need to configure the .elasticbeanstalk/config.yml file in your GitHub repository to match the Elastic Beanstalk application you have created:

branch-defaults:
  master:
    environment: ebdev-env
environment-defaults:
  e:
    branch: null
    repository: null
global:
  application_name: CircleCI
  default_ec2_keyname: null
  default_platform: arn:aws:elasticbeanstalk:us-west-2::platform/Docker running on 64bit Amazon Linux/2.14.0
  default_region: us-west-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application


With everything now set up, click ‘Start Building’ in the CircleCI dashboard, and your project should test, build, and deploy successfully.

Using AWS CodePipeline

Select the CodePipeline service from the AWS console; it is listed under Developer Tools.

Click on the Create Pipeline button.

You’ll have to give a name to the pipeline and attach a service role.

In the advanced settings, you can choose where to store the artifacts associated with the project, how the data should be encrypted at rest, and with which keys.

The next step is where you choose the source provider from the options that CodePipeline supports. This will be the location from which we import our code for the project.

If you choose GitHub, you will have to connect to it and provide details. Identify the repository and branch from which you want to pull code.

You can then choose how changes in the code should be detected, either through GitHub webhooks or by having AWS CodePipeline periodically check for changes.

The next step is to build the code. This step is optional, depending on the language the code is written in.

For example, if the code is in PHP, this step can be skipped, whereas if the code is in Java or Node.js, we will have to use this step.

The last step is to deploy; here we have to choose a deployment provider.

If AWS Elastic Beanstalk is chosen, the region, application name, and environment name should be specified.

You can then review all the details before creating the pipeline.
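Once the pipeline is created, you can also inspect it from the AWS CLI, as in the sketch below (the pipeline name my-eb-pipeline is a placeholder):

# Print the pipeline's stage and action definitions
aws codepipeline get-pipeline --name my-eb-pipeline

# Check the state of each stage as a revision moves through the pipeline
aws codepipeline get-pipeline-state --name my-eb-pipeline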

Jenkins

Establish a connection between Jenkins and the SCM (e.g., Git/Bitbucket). The connection can be achieved with password authentication or through SSH keys. Here we establish a connection through SSH keys by generating an SSH key pair on the Jenkins server, which provides both a public and a private key. Copy the public SSH key into the SCM, and use the private key on the Jenkins server, as shown below.

Log in to the Jenkins server and execute the ssh-keygen command, pressing Enter at each prompt, to generate the private and public SSH keys (id_rsa and id_rsa.pub respectively) in the .ssh folder of the home directory.
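The commands themselves look like this, assuming the default key locations:

# Generate an RSA key pair; press Enter at each prompt to accept the defaults
ssh-keygen -t rsa

# Print the public key so it can be pasted into GitHub's Deploy keys page
cat ~/.ssh/id_rsa.pub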

Copy the content of the id_rsa.pub key and add it in the GitHub repository under Settings > Deploy keys > Add deploy key, with an arbitrary title for the key.

Click on ‘Add deploy key’, enter a title, paste the content of the id_rsa.pub key into the key section, check the ‘Allow write access’ option, and click ‘Add’.

Now copy the content of the id_rsa (private key) file and add it to the Jenkins credentials from the dashboard at Credentials > System > Global credentials > Add Credentials.

After clicking on ‘Add Credentials’, select the ‘SSH Username with private key’ option, enter a username, paste the private key content copied from the id_rsa file, and save.

Then give a name as the ID, add a description and username, check the ‘Enter directly’ option, and paste the private key.

Optimizing the Jenkins Pipeline

The Jenkins Pipeline suite of plugins supports the full implementation and integration of a continuous delivery pipeline into Jenkins. Here, we will write the Jenkins pipeline script in Groovy so that the job executes all the stages, including fetching the code from SCM, build, test, and deploy.

Create a Jenkins pipeline job and configure it to poll the SCM periodically to check whether changes were made (i.e., new commits). If new commits were pushed since the last build, it builds the project and triggers any downstream jobs, as shown below.

Click on ‘New Item’ on the Jenkins dashboard and select the pipeline job type.

Select the ‘GitHub project’ option and enter your GitHub or Bitbucket repository URL.

Select ‘Poll SCM’ under the build triggers and update the cron expression (e.g., H/5 * * * *, which polls GitHub every five minutes for recent commits).

The Pipeline gives us two options: keep the Jenkins pipeline (Groovy) script in the job itself, or fetch the pipeline script from SCM.

The Pipeline script option allows us to keep the script in the Jenkins pipeline job. Pipelines are Jenkins jobs enabled by the Pipeline plugin and built with simple text scripts that use a Pipeline DSL (domain-specific language) based on the Groovy programming language.

Select ‘Pipeline script’ and input a script.
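For example, a minimal declarative script might look like the sketch below; the repository URL, npm commands, and the ebdev-env environment name are illustrative assumptions, not a prescribed job:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Placeholder repository URL; with 'Pipeline script from SCM'
                // this stage can simply be: checkout scm
                git url: 'https://github.com/your-org/your-app.git', branch: 'master'
            }
        }
        stage('Build & Test') {
            steps {
                sh 'npm install'
                sh 'npm run test'
            }
        }
        stage('Deploy') {
            steps {
                // Assumes the EB CLI and AWS credentials are available on the agent
                sh 'eb deploy ebdev-env'
            }
        }
    }
}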

Then click on save.

The ‘Pipeline script from SCM’ option instead fetches the DSL script from the SCM. This file is typically called ‘Jenkinsfile’ and is located in the root of the project.

After selecting Git, we may have to update:

  • The GitHub URL with credentials to authenticate to GitHub (the global credentials we configured above)
  • The branch on which we are keeping the Jenkinsfile
  • The script path where we keep the Jenkinsfile on that specific branch

Then click on ‘Save’.

As per the Jenkinsfile, we will see the different stages of the job execution in the Jenkins console.

The Right Tool for Elastic Beanstalk

So, which tool is best to use? All are capable in their own ways. Choosing between them means taking a close look at the kind of pipeline you want to set up and how you can best benefit from automation. They all integrate well with other Amazon services too, so you will have no trouble utilizing other services while automating your CI/CD pipeline.

Ibexlabs is an experienced DevOps & Managed Services provider and an AWS consulting partner. Our AWS Certified DevOps consultancy team evaluates your infrastructure and makes recommendations based on your individual business or personal requirements. Contact us today and set up a free consultation to discuss a custom-built solution tailored just for you.
