Setting up a Rails app for CodeBuild, CodeDeploy, and CodePipeline on AWS

March 15 · By Derek

If you’ve followed along with our previous episodes, we’ve covered many different aspects of setting up a production service. We’ve used many different products to simplify the day-to-day operations of running and maintaining an application.

We’ve used Scout for monitoring our application, LogDNA for aggregating our logs, HoneyBadger for our exception handling, and a host of AWS services for running our services, managing our SSL certs, hosting our Docker images, etc.

But one thing we haven’t focused on tidying up yet is one of the places we spend most of our time. Building features, merging those features, running tests, and deploying that code.

In today’s episode, we’ll be talking about how to use a few AWS services, including CodeBuild, CodeDeploy, and CodePipeline, to streamline getting features in front of our customers.

Getting Started

Before we get started, let’s discuss the concepts of Continuous Integration (CI) and Continuous Delivery/Continuous Deployment (CD).

In simplest terms, Continuous Integration refers to the practice of merging a developer’s work back into the mainline branch as often as possible. The typical process for this is creating a feature branch off of master, completing the feature, then merging the code back to master.

The reason for keeping this feedback loop very short is so that you don’t run into integration issues when trying to merge your code back into the main codebase. Having to spend hours or days trying to dig through numerous conflicts from multiple developers working on the same code is an inefficient and frustrating waste of time.

However, Continuous Integration doesn’t just begin at the point of a developer creating a branch to start feature work. Since the whole idea is based on getting the code merged back into the mainline branch sooner, it has to begin at the planning stage. It makes sense for the team as a whole to break down deliverable functionality into the smallest possible subsets. If you’re trying to quickly merge features back into your mainline branch, but your features take weeks to build, then you’re going to have a bad time.

Continuous Delivery/Deployment builds on Continuous Integration, using automation to deliver features and functionality to your users at that same fast pace.

There are important differences between Continuous Delivery and Continuous Deployments, but we’re not going to dive into that in this video.

While a simple continuous delivery setup is not difficult, maintaining a delivery pipeline that you can trust requires process changes and ongoing effort. Developers must think about performance implications, write meaningful tests, and understand the implications of data migrations. Developers and QA have to make sure that acceptance tests capture all scenarios and are rock solid.

Now that we have some basic understanding of Continuous Integration and Continuous Delivery, let’s work on automating the build process for our production app.

To implement our delivery pipeline, we’ll be using three AWS services:

  • CodeBuild: the service that will be responsible for running our tests and building our Docker image.
  • CodeDeploy: the service that will be responsible for updating our task definitions to use our new Docker images and rolling the deployment out to our services.
  • CodePipeline: the service that will create our pipeline by tying our code pushes (via webhooks) together with CodeBuild and CodeDeploy.

To begin the process for automated deploys, we need to first make sure all of our tests are passing.

  ➜  rspec
  F....FF..................
  Finished in 1.18 seconds (files took 6.59 seconds to load)
  28 examples, 3 failures

  Failed examples:

  rspec ./spec/features/checklists_spec.rb:40 # Checklists sharing a checklist with another user
  rspec ./spec/mailers/share_checklist_spec.rb:10 # ShareChecklistMailer email renders the headers
  rspec ./spec/mailers/share_checklist_spec.rb:16 # ShareChecklistMailer email renders the body

Oops; we have a few failing tests because I haven’t updated the specs after changing the mailer to use a background service. For now, we’ll skip those tests and address them later, which also gives us a good way to verify that broken tests halt our automated deploy.
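In RSpec, the quickest way to skip an example is to change `it` to `xit` (or call `skip` inside the block); the spec body below is purely illustrative:

```ruby
# spec/mailers/share_checklist_spec.rb
# Changing `it` to `xit` marks the example pending, so it shows up
# as a `*` in the run output instead of failing the suite
xit "renders the headers" do
  expect(mail.subject).to eq("Checklist shared")  # illustrative expectation
end
```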

➜  rspec
...*....**..................

Finished in 0.88064 seconds (files took 7.4 seconds to load)
28 examples, 0 failures, 3 pending

Alright; next we need to change out the database we use for our test runner. Right now, our tests run against our Postgres database, but I don’t want to set up a Postgres database as part of the build. Instead, we’ll use SQLite.

To set up SQLite for our tests, we need to make a few changes. First, we need to add SQLite to our Gemfile.

# Gemfile
gem 'sqlite3', group: :test

Next, we need to update our database.yml

# config/database.yml
test:
  adapter: sqlite3
  database: ":memory:"

Last, we need to spin up our test database with the correct schema, ready to run tests. To do that, we’ll set up our spec/rails_helper.rb to load db/schema.rb directly.

Note: This method will not work well if you are expecting migrations to add data to the database.

# spec/rails_helper.rb

# Load the schema directly so the in-memory database is ready before tests run
ActiveRecord::Schema.verbose = false
load Rails.root.join("db/schema.rb").to_s

Now, if we bundle and run our tests again, we can verify that they still pass.

Once that’s done, we can move on to the last piece of setting up our codebase, before switching over to the AWS console.

For this, we need to create a buildspec.yml file in the root of our project. It should look like this:
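A representative buildspec.yml along the lines described below might look like this sketch; the account ID, region, repository URI, and container name are all placeholders you’d replace with your own:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Fetch a login token for our ECR registry (placeholder region)
      - $(aws ecr get-login --no-include-email --region us-east-1)
  build:
    commands:
      # Build and tag the image (placeholder repository URI)
      - docker build -t myapp .
      - docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
  post_build:
    commands:
      # Push the image, then emit the artifact the deploy step will consume
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
      - printf '[{"name":"myapp","imageUri":"%s"}]' 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```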

If we look at this file, we notice that it’s very similar to the commands we’ve been running throughout our videos. We’re simply fetching a login token for our Docker registry, building our Docker image, and pushing that image to our repository.

We also notice an artifacts section. This imagedefinitions.json file is created so that we can use this information to update our task definition. Specifying the file in the artifacts section means that it will be pushed to S3 and we can reference this file later in the pipeline. In our case, the CodeDeploy step will use this artifact. More information about the buildspec.yml can be found in the build spec reference.
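The imagedefinitions.json artifact itself is just a small JSON array mapping each container name in the task definition to the image URI it should run. A quick Ruby sketch of generating one (the container name and image URI are made-up placeholders):

```ruby
require "json"

# imagedefinitions.json maps each container name in the task definition
# to the freshly pushed image URI (names below are placeholders)
image_definitions = [
  { name: "myapp", imageUri: "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest" }
]

File.write("imagedefinitions.json", JSON.generate(image_definitions))
puts File.read("imagedefinitions.json")
```

In the buildspec this same file is usually produced with a one-line `printf`; the Ruby version just makes the expected shape explicit.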

Once we have all of this finished and committed, we can move over to the AWS console.

Summary

In today’s video we’ve briefly discussed CI and CD. We also introduced CodeBuild, CodeDeploy, and CodePipeline. Lastly, we prepped our database and codebase to switch over to the AWS console.

Resources

Our full series on deploying to AWS
