Deploying Faktory to AWS Fargate

February 13, 2018, by Bradley Price

Looking for a fresh, 2018 approach to deploying a Rails app to AWS? We've partnered with DailyDrip on a series of videos to guide you through the process. We're covering how to Dockerize a Rails app, AWS Fargate, logging, monitoring, setting up load balancing, SSL, CDN, and more.

In today's video, we're setting up a background job server for our Rails app. There are several implementations we can choose from, including delayed_job, resque, and Sidekiq, to name a few.

However, today we'll be using Faktory, which was created by the father of Sidekiq, Mike Perham.

Getting Started

Before we jump into configuring Faktory, let's take a brief moment to talk about its implementation and some key components.

  • Faktory daemon: The Faktory daemon is where jobs are stored, scheduled, etc.
  • Client: The client is a resource that creates a job. In our case, it will be our Rails app.
  • Worker: The worker is a resource that fetches a job. In our case, this will be a process we start with bundle exec faktory-worker.

If you've used Sidekiq, the process will be very familiar. The biggest difference is the separation of the job storage, which is no longer handled with Redis.

Aside - why not just use Sidekiq?

For our demo app, Sidekiq would work fine. However, there are a couple of reasons why Faktory gets the nod:

  • Faktory comes in a self-contained box, which includes all of the major features (the datastore, web UI, etc.) within the server itself.
  • Faktory handles some of the Sidekiq enterprise-only extras for free.

Finally, while we're using Faktory in a Ruby on Rails app, it is language-agnostic, so we could use Faktory across multiple languages at a later date.

Now that we have a better idea of how Faktory works, let's get started.

Set up the Faktory service

First, we need to get Faktory running locally. There are two options: Homebrew or a Docker container. Since our app is Dockerized and will be running in ECS, it makes sense to use the Docker version.

➜ docker pull contribsys/faktory:latest
Status: Downloaded newer image for contribsys/faktory:latest
➜  docker run --rm -it -v faktory-data:/var/lib/faktory -p 7419:7419 -p 7420:7420 contribsys/faktory:latest /faktory -b 0.0.0.0:7419 -e development
Faktory 0.7.0
Copyright © 2018 Contributed Systems LLC
Licensed under the GNU Public License 3.0
I 2018-01-22T01:20:00.451Z Initializing storage at /root/.faktory/db
I 2018-01-22T01:20:00.550Z PID 1 listening at 0.0.0.0:7419, press Ctrl-C to stop
I 2018-01-22T01:20:00.550Z Web server now listening on port 7420

Now, we should be able to hit localhost:7420 and see the Web UI.

Faktory Web UI

Manually process our first job

To interact with the Faktory service, we need to set up the faktory_worker_ruby gem.

Add this to our Gemfile and bundle.

  gem 'faktory_worker_ruby'

Jump into a Rails console and process our first job.

➜  rails c
2.4.1 :001 > client = Faktory::Client.new
 => #<Faktory::Client:0x007f... @sock=#<TCPSocket:...>>
2.4.1 :002 > client.info["faktory"]["total_enqueued"]
 => 0
2.4.1 :004 > client.push({jid: SecureRandom.hex, jobtype: 'test', args: []})
 => nil
2.4.1 :005 > client.info["faktory"]["total_enqueued"]
 => 1
2.4.1 :007 > client.fetch('default')
 => {"jid"=>"a3d92e499a558eecbb535037f11d85f6", "queue"=>"default", "jobtype"=>"test", "args"=>[], "priority"=>5, "created_at"=>"2018-01-22T02:17:41.502703208Z", "enqueued_at"=>"2018-01-22T02:17:41.502791764Z"}
2.4.1 :008 > client.ack(_["jid"])
 => true
2.4.1 :009 > client.info["faktory"]["total_enqueued"]
 => 0
2.4.1 :010 > client.info["faktory"]["total_processed"]
 => 1

That seems pretty simple! Notice that we didn't even have to set any environment variables or configuration options. By default, the client connects to tcp://localhost:7419 if no options are configured.
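Under the hood, each of those pushes is just a small JSON document sent over the socket. Here's a minimal plain-Ruby sketch of the payload we built in the console (the field values mirror the session above and are illustrative):

```ruby
require 'securerandom'
require 'json'

# A Faktory job is just a small JSON document. "jid" must be unique,
# "jobtype" names the handler a worker will run, and "args" must be
# JSON-serializable.
job = {
  jid: SecureRandom.hex,
  jobtype: 'test',
  queue: 'default',
  args: []
}

puts JSON.generate(job)
```

Because the protocol is plain JSON over TCP, any language with a Faktory client library can produce or consume these jobs.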

Now, let's move on to setting up a background job in our code.

Creating a background job

Our demo Rails app is very small and has limited functionality, but there's one feature that allows you to share a checklist with another person. If that user doesn't exist in the system, it will create a user account and send them an email. We'll refactor that to use a background worker to send those emails.

Let's start by creating a new job.
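A minimal sketch of the job class, assuming faktory_worker_ruby's Faktory::Job mixin; the argument names and the ChecklistMailer call are stand-ins for our app's actual mailer:

```ruby
# app/jobs/mailer_job.rb
class MailerJob
  include Faktory::Job
  faktory_options queue: 'default'

  # Job arguments must be JSON-serializable, so we pass plain values
  # (ids and strings) rather than ActiveRecord objects.
  def perform(email, checklist_id, user_id)
    ChecklistMailer.share_checklist(email, checklist_id, user_id).deliver_now
  end
end
```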

Now, let's go to our Checklist model and set it up to use our new mailer job.
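That change might look something like this (the share_with method, the find_or_create_by call, and the argument order are assumptions about the demo app):

```ruby
# app/models/checklist.rb (sketch)
class Checklist < ApplicationRecord
  def share_with(email)
    user = User.find_or_create_by(email: email)
    # Enqueue the mail instead of sending it inline; perform_async
    # pushes a job to Faktory and returns immediately.
    MailerJob.perform_async(email, id, user.id)
  end
end
```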

As you can see, the last three arguments we pass are exactly the arguments that our mailer is expecting.

Configuring mail settings

The last thing we need to do before testing our application is configure our email settings.

For testing locally, I'm going to use the letter_opener gem.

All we need to do to configure letter_opener is add the gem.

gem "letter_opener", :group => :development

Then, update our config/environments/development.rb.

# config/environments/development.rb

config.action_mailer.delivery_method = :letter_opener

For our production environment, I am going to use my Google account for testing purposes.

Note: for actual production usage, we'd want to swap this out for Sendgrid, Amazon SES, or another mail provider.
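A sketch of that production mail configuration, assuming Gmail SMTP with credentials pulled from environment variables (the variable names are illustrative):

```ruby
# config/environments/production.rb
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
  address:              'smtp.gmail.com',
  port:                 587,
  user_name:            ENV['GMAIL_USERNAME'],
  password:             ENV['GMAIL_PASSWORD'],
  authentication:       :plain,
  enable_starttls_auto: true
}
```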

Testing background jobs

The first thing we need to do is make sure our Rails app and Faktory Docker container are running.

Let's share our checklist.

checklist shared

Ok, now let's take a look at our Faktory dashboard and see if the job got queued.

Faktory queued mailer job

Ok. It looks like the Rails app has sent the job to Faktory and it's just hanging out, waiting for a worker to pick it up.

So, let's jump back over to our terminal and start up a worker.

➜  bundle exec faktory-worker

2018-01-22T03:19:19.093Z 78631 TID-oufvyinxc INFO: Running in ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-darwin16]
2018-01-22T03:19:19.093Z 78631 TID-oufvyinxc INFO: See LICENSE and the LGPL-3.0 for licensing details.
2018-01-22T03:19:19.093Z 78631 TID-oufvyinxc INFO: Starting processing, hit Ctrl-C to stop
2018-01-22T03:19:19.154Z 78631 TID-oufw5igz0 MailerJob JID-f46e96a972183ddbb24696c3 INFO: start
2018-01-22T03:19:19.927Z 78631 TID-oufw5igz0 MailerJob JID-f46e96a972183ddbb24696c3 INFO: done: 0.773 sec

As we can see, it has processed our job.

So, let's move on and set up Faktory in production.

Setting up the Faktory Service on AWS Fargate

First, we need to push a Faktory image to our repository.

For this, we need to create a new repository, tag the Docker instance, and push it up.

➜  $(aws ecr get-login --no-include-email)
➜  aws ecr create-repository --repository-name=contribsys/faktory
{
    "repository": {
        "registryId": "154477107666",
        "repositoryName": "contribsys/faktory",
        "repositoryArn": "arn:aws:ecr:us-east-1:154477107666:repository/contribsys/faktory",
        "createdAt": 1516593012.0,
        "repositoryUri": "154477107666.dkr.ecr.us-east-1.amazonaws.com/contribsys/faktory"
    }
}
➜  docker tag contribsys/faktory 154477107666.dkr.ecr.us-east-1.amazonaws.com/contribsys/faktory
➜  docker push 154477107666.dkr.ecr.us-east-1.amazonaws.com/contribsys/faktory

Next, we need to jump into the AWS console and set up a new task definition.

For my task definition, I'm going to use Fargate. I'm going to make these changes:

  • Name: faktory-task
  • Task Role: ecsTaskExecutionRole
  • Task Memory: .5GB
  • Task CPU: .25 vCPU

Scroll down a bit and click Add Container. Here I'm going to make these changes:

  • Name: faktory
  • Image: 154477107666.dkr.ecr.us-east-1.amazonaws.com/contribsys/faktory (yours will differ)
  • Entrypoint: sh,-c,/faktory -b 0.0.0.0:7419 -e production
  • Working directory: /
  • Env Variables: FAKTORY_PASSWORD set to a password of your choosing

You should have something similar to this:

Faktory service setup

Scroll down and click Add, then click Create.

Now, we need to set up the Faktory service. So, let's head over to Clusters > Production and click Create in the Services tab.

From here I'm going to make these changes:

  • Launch type: Fargate
  • Task Definition: faktory-task:1
  • Platform version: LATEST
  • Cluster: Production
  • Service name: faktory
  • Number of tasks: 1

You should have something similar to this:

Faktory service form step 1

Now, let's go to the next step and set up our VPC and security group settings. For our Cluster VPC, we want to choose the same VPC that our production web app is in (you might only have one available) and choose the us-east-1a subnet. Also, we do want to assign a public IP, since we'll use that to access the Faktory dashboard.

Don't worry about setting anything up for a load balancer; we won't need to configure that for our service.

You should have something similar to this: Faktory service form step 2

Once you've verified those changes, click Next Step. This should take us to the Auto Scaling page, but we don't want to scale up, since we only need one Faktory container running. So, let's click Next Step and verify our configuration. Once we've verified everything, let's click Create Service.

While we wait for our Faktory service to spin up, we can take care of another problem. We need to set some inbound rules for our new service.

Let's jump back over to our services page by clicking Clusters in the left nav and then on our Production cluster. We need to find our load-balanced production service and click on it.

Take note of its security group, then move back to the Production cluster and click on our new Faktory service.

Once there, we need to click on our security group. Now, we should be in the VPC dashboard, where we can change the inbound rules for our security group.

So, I'm going to select the security group for my Faktory service, and click on the Inbound Rules at the bottom.

We need to remove the existing rule and add two new rules:

  • Allow port 7419 access from our Rails app.
  • Allow port 7420 access from our IP address (for the dashboard).

Faktory security group updates

Once we've added those rules, we can click Save and go back to our Faktory service.

Once we're in the ECS dashboard and looking at our Faktory service, we can select our running task, grab its public IP address, and make sure we can access the dashboard.

It will ask you for a username and password. You don't have to enter a username, but you do need to enter the same password you used to set up your task definition.

If we've set everything up correctly, we should be able to see our dashboard.

Faktory dashboard


Summary

As in most of our other videos in this series, we've touched on quite a few topics.

  • We updated our Rails app to send mail via a background job.
  • We pulled down the Faktory Docker image, pushed it up to our own ECR repository, then configured a task definition and set up a Faktory service.

In the next episode, we'll start setting up our worker service.


Our full series on deploying to AWS
