One agent. Minimal overhead. Robust language support. A unified monitoring solution for your servers and metrics.
We've added StatsD support to our monitoring agent. With Scout, you are just minutes away from StatsD-backed charts and alerts. Use StatsD to report code execution times, user signup rates, and more.
StatsD-generated metrics are first-class citizens in Scout, coexisting with every other metric Scout collects.
Don't just take our word for it, Scout customer Martin Kelly had this to say:
"It took about an hour to get all of our prod and preprod custom metrics into Scout and displayed on our dashboard. Both the client and project manager were very happy!"
Why StatsD + Scout?
Prior to today, all metrics in Scout were generated by our agent (monitoring system resource usage) or plugins (monitoring services). This works great for sampled metrics, but it's not a great fit for event-based metrics (e.g., tracking user signups or response times).
StatsD is a great fit for event-based metrics, but rolling your own production-grade StatsD setup is involved. We also want to see all of our metrics (and configure alerting) from a single app.
We tested StatsD+Scout internally first, loved it, and rolled it out to customers during a preview stage. Today, StatsD is battle-tested and ready for your metrics.
Quick tip: replacing metric logging with StatsD
First time with StatsD? Here's a tip: if you are logging a metric, it probably makes more sense to send it to StatsD. Instead of:
logger.warn "Error Occurred!"
...send a counter increment (for example, statsd.increment "errors" with a StatsD client), which gives you ready-to-go charts and alerting (e.g., alert when the error rate exceeds 50 errors/min).
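To see how little is involved, here's a minimal sketch of sending that counter. StatsD speaks a plain-text protocol over UDP ("name:value|c" for counters), so a bare-bones client fits in a few lines. The `TinyStatsd` class and the `app.errors` metric name are our own illustrations, not part of Scout; in production you'd reach for an existing client library such as the statsd-ruby gem.

```ruby
require 'socket'

# Bare-bones StatsD counter client. The wire protocol is plain text over
# UDP: "<metric name>:<value>|c" marks a counter increment.
class TinyStatsd
  def initialize(host = 'localhost', port = 8125)
    @host, @port = host, port
    @socket = UDPSocket.new
  end

  # Fire-and-forget the counter packet; returns the payload sent.
  def increment(metric, count = 1)
    payload = "#{metric}:#{count}|c"
    @socket.send(payload, 0, @host, @port)
    payload
  end
end

statsd = TinyStatsd.new
statsd.increment('app.errors')  # => "app.errors:1|c"
```

Because the transport is UDP, the send never blocks your request path waiting on the monitoring server, which is exactly why StatsD is safe to sprinkle through hot code.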
Scout is an "oops" company. We didn't build our product with the intention of turning it into a company, but we certainly can't imagine life without it today.
Scout was started out of frustration. The prospect of setting up and using a Nagios-like server monitoring solution was so terrifying that we'd rather build our own. We built a simple monitoring agent and an accompanying Rails-backed web interface and used it to monitor our own apps. When debugging performance issues, we started sharing access to the app with our hosting provider, Rails Machine. They loved it and started using it.
We really enjoyed building the product, put a price tag on it, and over a bit of time, it became our full-time thing.
That frustration is back: app monitoring
Application performance monitoring (APM) products have a tendency to evolve into a GoDaddy-like experience.
The tools for monitoring apps are continually becoming more complex and difficult to use. It's the second law of thermodynamics applied to software: an ever-increasing tendency toward disorder.
It's time to reset application monitoring.
From our own frustrations and those our customers have shared with us, it's clear app monitoring needs a craft-brew alternative: a polished, focused take on application monitoring. A product focused on solving performance issues as fast as possible without overwhelming you with clutter.
We're building the craft brew of app monitoring
A while ago, we decided to build an app monitoring product. We've got an awesome team dedicated to it, and it's coming along fast. There are some core beliefs we're starting with:
Support multiple languages and frameworks. We know from experience that we're mixing together more languages and frameworks than ever before. It's key to view their performance from a single interface.
Easy time range diffs. The UI must be built to make it easy to compare deploys, config changes, or general trends as an application ages.
Context. How is performance for our highest-paying customers? Is a performance issue impacting everyone or a subset of customers? Is slowness primarily associated with one database node? Make it easy to apply the context that matters to you.
Aggregate what's slow. We learn a lot from investigating slow requests. Rather than paging through metrics on individual slow requests, aggregate the call stacks of slow requests together. Apply the context from above. Know with certainty that an endpoint is slow because of a specific query for X% of your customers.
Instrumenting our application with StatsD is easy, especially when we stick to counters and gauges: each of those reports just a single value. When you get to timers, however, StatsD steps up its game and returns eight metrics.
So let's explore the curious case of the timing metric. What do all these metrics mean? How can we use them to instrument our application?
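As a preview of where those extra metrics come from: a StatsD server buffers the raw timing samples it receives and, on each flush interval, derives summary statistics from them. The sketch below is our own illustration of that derivation, not Scout's or StatsD's actual code; the metric names (count, lower, upper, mean, plus 90th-percentile variants) follow names StatsD servers commonly emit, though the exact set varies by server and configuration.

```ruby
# Sketch of the per-flush aggregation a StatsD server performs on timer
# samples (values in milliseconds). Rounding the percentile cutoff index
# matches common StatsD server behavior, but is an assumption here.
def aggregate_timings(samples_ms, percentile: 90)
  sorted = samples_ms.sort
  count  = sorted.length
  cutoff = (count * percentile / 100.0).round
  top    = sorted.first(cutoff.zero? ? count : cutoff)
  {
    count: count,                         # how many timings were recorded
    lower: sorted.first,                  # fastest sample
    upper: sorted.last,                   # slowest sample
    mean:  sorted.sum / count.to_f,       # average across all samples
    "upper_#{percentile}": top.last,      # slowest, ignoring top outliers
    "mean_#{percentile}":  top.sum / top.length.to_f
  }
end

aggregate_timings([100, 200, 300, 400, 500, 600, 700, 800, 900, 1000])
```

The percentile variants are the interesting ones: by discarding the slowest 10% of samples before aggregating, they show whether an endpoint is slow in general or only in outlier requests.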
Please join us in welcoming Chris Schneider to the Scout team!
We've had our eye on Chris for a while now, and we couldn't be more excited about him joining the team. Nearly a Fort Collins native (rare around these parts), Chris joins us with over 15 years of development experience - with the majority of his time spent on mission-critical Rails apps. He's a programmer's programmer. Chris has hosted the Coding Hangout in Fort Collins for over a year now. Chris loves to teach and we're all blown away by his mad VIM skills.
Chris will be our lead developer on our new Scout Application Monitoring service. His insight and experience will be invaluable in making this product a world-class application monitoring solution.
In his free time, Chris can normally be found crushing trivia at a local tavern, banging out some Haskell, or playing games with friends.