4 ways to get the most out of your Rails logs

December 09 | By Derek

Logging is the lowest common denominator of monitoring - it's low effort and low overhead. You can put ANYTHING into a log file (sans animated GIF memes - save those for Slack).

The downside of being a logging addict: logs can quickly become a noisy mess. Digging through logs while debugging a critical issue can become a hair-pulling experience.

Don't fret - there are some terrific tools available today to turn your log streams into beautiful, useful data sources.

Here are 4 ways to get the most out of your Rails logs:

1. Use Lograge to clean up your logs

Rails logs are noisy. A production Rails app running multiple processes across multiple hosts will suffer from this common problem:

[Screenshot: interleaved Rails request logs across multiple processes]

The issue? Rails logs one entry when a request is received and another when it completes. If more than one process is serving your Rails app, you can't correlate the "completed" line with the initial request.

The Lograge gem solves this problem by generating a single line per request:

method=GET path=/jobs/833552.json format=json controller=jobs action=show status=200 duration=58.33 view=40.43 db=15.26
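If you're not using Lograge yet, enabling it is a small change. A minimal sketch for a standard Rails app (add `gem "lograge"` to your Gemfile first):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Collapse each request's log output into a single line per request
  config.lograge.enabled = true
end
```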

Pro Tip: by default, Lograge doesn't log request parameters, which makes the logs useless for a lot of troubleshooting. Add request parameters to your logs like so:

config.lograge.custom_options = lambda do |event|
  params = event.payload[:params].reject { |k| %w(controller action).include?(k) }
  { "params" => params }
end
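A side benefit of the key=value format: it's trivial to parse back into a hash for ad-hoc analysis in a console or script. A minimal sketch (`parse_lograge_line` is our own hypothetical helper, not part of Lograge):

```ruby
# Parse a Lograge-style key=value line into a hash of string values.
# Note: parse_lograge_line is a hypothetical helper, not part of Lograge.
def parse_lograge_line(line)
  line.split(" ").each_with_object({}) do |pair, entry|
    key, value = pair.split("=", 2)  # limit 2: values may contain "="
    entry[key] = value
  end
end

entry = parse_lograge_line(
  "method=GET path=/jobs/833552.json status=200 duration=58.33"
)
entry["method"]   # => "GET"
entry["duration"] # => "58.33"
```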

2. Enable dynamic log tuning to debug issues

Sometimes we'll run into an issue processing metrics for a particular monitored app. Rather than doing a full deploy to amp up the logging, we use Redis and a Cap task to selectively enable more verbose logging for specific apps:

if REDIS.sismember("extra_logging_app_ids", app.id)
  Rails.logger.info "App ID #{app.id} Checkin: #{payload.inspect}"
end

Here's our Capistrano code for enabling verbose logging:

namespace :extra_logging do
  def app_id
    unless id = ENV["APP_ID"]
      puts "\n\n*** set ENV variable APP_ID to use this task ***\n\n"
      exit(1)
    end
    id
  end

  desc "Start collecting detailed ENV['APP_ID'] checkins"
  task :enable do
    on roles(:app), in: :parallel do
      execute "redis-cli sadd extra_logging_app_ids #{app_id}"
    end
  end
end
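Turning verbose logging back off is the mirror image. A sketch of a matching `:disable` task inside the same namespace (our assumption - the post only shows `:enable`; `srem` removes the app from the Redis set, so the `sismember` check above stops matching):

```ruby
# Sketch: the inverse of :enable, inside the same extra_logging namespace.
desc "Stop collecting detailed ENV['APP_ID'] checkins"
task :disable do
  on roles(:app), in: :parallel do
    # SREM removes the app id from the set that gates the verbose logging
    execute "redis-cli srem extra_logging_app_ids #{app_id}"
  end
end
```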

3. Aggregate your logs

If you have multiple application servers behind a load balancer, experience an issue, and need to dive into logs, which application server do you SSH into?

That's right - you don't know!

This is where log aggregation comes in. Send your logs to a central logging server, timestamping each line and adding the hostname.
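If you're shipping plain Rails logs (without Lograge), Rails can prepend those tags to every line itself. A sketch using `config.log_tags` - note the proc is needed because the symbol form (`:host`) would log the request's Host header, not the server's hostname:

```ruby
# config/environments/production.rb
require "socket"

Rails.application.configure do
  # Tag every log line so aggregated lines can be traced back to
  # a specific app server and a specific request.
  config.log_tags = [
    proc { |_request| Socket.gethostname },  # which box wrote this line
    :request_id                              # correlate lines from one request
  ]
end
```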

We covered this in a previous post. To recap, you have two options:

  1. Use a paid service like Papertrail or Loggly.
  2. Maintain your own open source solution. The ELK stack is a common choice.

We use the ELK stack (Elasticsearch, Logstash, and Kibana).

Here's how our logs look in the Kibana UI:

[Screenshot: logs in the Kibana UI]

4. Use Structured Logs

A little bit of structure can go a long way - rather than an unorganized stream of consciousness, your structured logs can behave like a well-maintained wiki. This really only makes sense if you have large, complex logs and are implementing log aggregation with a tool like the ELK stack. Otherwise your raw logs will actually become LESS readable.

Here's how our JSON-formatted Logstash logs look:

{"method":"GET","path":"/apps/10001213/data.json","format":"json","controller":"apps","action":"data","status":200,"duration":1731.99,"view":1718.51,"db":1.89,"params":{"id":"165","format":"json"},"@timestamp":"2015-12-04T17:37:36.906+00:00","@version":"1","message":"[200] GET /apps/165/data.json (apps#data)","severity":"INFO","host":"ip-10-1-0-402"}
{"message":"Enqueued UpdateEsAttrs (Job ID: ccf8e877-4bd3-445c-8682-5e41f5ab64d3) to Sidekiq(default) with arguments: 204","@timestamp":"2015-12-04T17:37:36.913+00:00","@version":"1","severity":"INFO","host":"ip-10-1-0-402","tags":["ActiveJob"]}
{"message":"[2015-12-04 17:37:36 +0000] app reported 0 slow requests.","@timestamp":"2015-12-04T17:37:36.914+00:00","@version":"1","severity":"INFO","host":"ip-10-1-0-402"}

As a raw log, it's not useful. However, put this into the ELK stack and the magic happens:

[Screenshot: structured logs filtered in Kibana]

I'm now able to dig through my logs like never before:

  • Easily filter by field values (HTTP method, path, format, controller, etc.) without complex grep regular expressions
  • Choose which fields to display in the results to reduce noise
  • Save common searches
  • View a chart timeline of activity
  • View histograms of field values (ex: 10% of response codes are 500 errors)

Also See

Once your logs are in JSON format, you can also visualize and data-science them with Jupyter Notebook. See our post on Visualizing Rails logs With Jupyter Notebook.

Your suggestions?

Share your logging secrets in the comments. I'll update the post with any golden nuggets.

TL;DR

Make it easy to pull out actionable information from your Rails logs with the following 4 steps:

  1. Use Lograge to collapse Rails requests into a single line in your log file.
  2. Add dynamic log tuning to make debugging critical issues easier.
  3. Send your logs to a central aggregation service.
  4. Use structured logs.
