"App Monitoring" Posts


From a spike in response time to a Git blame: our improved path to slow code

By Derek • Posted in App Monitoring

Before Scout, we used several app monitoring services.

One of my biggest frustrations: I never found a way to select a spike in response time from an overview chart and view what's slow during that period.

Those spikes are begging to be clicked. We wanted to make that happen.

Introducing click-and-drag

See a spike in response time for your app? Just click and drag over the chart. Scout will show how many slow requests occurred. You can then jump to a list of all slow requests, aggregated by endpoint.

Using an iPad? Pinch the spike. It's fun.

Git Integration

[Video: Git integration in action]

We'll be releasing our Git integration before General Availability on Nov 16th. Watch the video above for a preview of this in action.

Pricing

You can pick between per-server pricing ($59/server) and per-request pricing (starting at $20 for the first 1M requests, with automatic volume discounts).

Early Access

Sign up for early access via our homepage.

Questions? Email sales@scoutapp.com.

 

App Monitoring Update

By Mark • Posted in App Monitoring

It's been just over a month since we opened our early access period for Scout App Monitoring.

Whew! A lot has happened:

Read More →

 

StackProf: The Holy Grail of Rails Profiling

By Mark • Posted in App Monitoring

Our StackProf-inspired profiler, ScoutProf, is now in BETA. See our docs to get started.

[Image: StackProf]

The holy grail of performance profiling is finding a tool that's safe to run in production. A tool that identifies slow code as it works "in the wild".

Profiling code locally is never as good as production profiling: the load is small, hardware specs are different, and I'd need to pull down a huge chunk of data to simulate things.

Enter StackProf, the production-safe*, holy grail of Ruby profiling tools.
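If you want to try StackProf itself, a minimal sketch looks something like this (the profiled block and output path are placeholders, not anything Scout-specific):

```ruby
require 'stackprof'

# Sample the call stack while the block runs. :wall samples by wall-clock
# time, :cpu by CPU time; the interval is in microseconds.
StackProf.run(mode: :wall, out: 'tmp/stackprof-wall.dump', interval: 1000) do
  generate_expensive_report # placeholder for the code you want to profile
end
```

You can then inspect the dump with the bundled CLI, e.g. `stackprof tmp/stackprof-wall.dump --text`.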

Read More →

 

Monitoring InfluxDB with Scout

By Derek • Posted in App Monitoring

We're using InfluxDB in our new app monitoring service.

While InfluxDB hasn't reached 1.0 yet, it has loads of potential and has been holding up well during our BETA period. Don't worry, we'll talk more about InfluxDB in coming posts.

So, how are we monitoring InfluxDB performance? Here's how we get an overview of our app performance; InfluxDB is one of the categories we track:

[Screenshot: app performance overview, broken down by category]

When there's a slow request, we can dig into details, including viewing the actual InfluxDB query:

[Screenshot: slow request detail, including the InfluxDB query]
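If you're curious what querying InfluxDB from Ruby looks like, here's a minimal sketch using the influxdb gem; the database, series, and field names are made up for illustration and aren't what Scout uses internally:

```ruby
require 'influxdb'

# Connect to a local InfluxDB instance (database name is hypothetical).
influxdb = InfluxDB::Client.new 'app_metrics', host: 'localhost'

# Average request duration over the last hour, in one-minute buckets.
# Series ("requests") and field ("duration") names are illustrative only.
influxdb.query 'SELECT mean(duration) FROM requests ' \
               'WHERE time > now() - 1h GROUP BY time(1m)'
```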

Need some InfluxDB monitoring action?

Sign up for early access and ping us at apm.support@scoutapp.com to let us know you are using InfluxDB.

 

The making of app monitoring: the health dashboard

By Derek • Posted in App Monitoring

We're battle-scarred devs building the focused app monitoring service we've always wanted. We're blogging about the adventure below.

Customers telling me our app is slow? I'm looking at a response time graph.

On the front page of Hacker News? I'm looking at requests per second and response time on a graph.

Lots of things going wrong? Show me ALL the metrics.

The challenges with building a one-page dashboard of app health?

  • What's important to me today might not be tomorrow
  • I need to see all key metrics at once to ensure I'm not missing a correlation (e.g. a spike in both response time and error rate)
  • I need to be able to magnify a metric on a chart for more detail

The first step is admitting I have a problem

We track eight key health metrics for our applications:

  • Response Time by category (time spent in Ruby, Postgres, Elasticsearch, etc)
  • Throughput
  • Error Rate
  • Apdex (a sketch of the standard calculation follows this list)
  • Capacity % (the utilization of our application worker processes)
  • App Instances (how many processes are serving our app across all of our nodes)
  • CPU Usage % (average CPU usage of the app on each node)
  • Memory Usage (average memory usage of the app on each node)
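
For reference, here's a minimal sketch of the standard Apdex calculation, where T is the target response-time threshold (the sample numbers are made up):

```ruby
# Standard Apdex: satisfied requests finish within T seconds, tolerating
# requests within 4T, and anything slower counts as frustrated.
def apdex(response_times, t)
  satisfied  = response_times.count { |rt| rt <= t }
  tolerating = response_times.count { |rt| rt > t && rt <= 4 * t }
  (satisfied + tolerating / 2.0) / response_times.size
end

apdex([0.2, 0.4, 1.1, 3.5], 0.5) # => 0.625 with a 500 ms target
```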

So, what are some approaches to help me get an at-a-glance view of app health?

Read More →

 

Reversing the GoDaddy-ification of application monitoring

By Derek • Posted in App Monitoring

Scout is an "oops" company. We didn't build our product with the intention of turning it into a company, but we certainly can't imagine life without it today.

Scout was started out of frustration. The prospect of setting up and using a Nagios-like server monitoring solution was so terrifying that we'd rather build our own. We built a simple monitoring agent and an accompanying Rails-backed web interface and used it to monitor our own apps. When debugging performance issues, we started sharing access to the app with our hosting provider, Rails Machine. They loved it and started using it.

We really enjoyed building the product, put a price tag on it, and over time it became our full-time thing.

That frustration is back: app monitoring

Application performance monitoring (APM) products have a tendency to evolve into a GoDaddy-like experience.

The tools for monitoring apps are continually becoming more complex and difficult to use. It's the second law of thermodynamics applied to software: an ever-increasing tendency toward disorder.

It's time to reset application monitoring.

From our own frustrations and those our customers have shared with us, it's clear app monitoring needs a craft-brew alternative: a polished, focused take on application monitoring. A product focused on solving performance issues as fast as possible, without overwhelming you with clutter.

We're building the craft brew of app monitoring

A while ago, we decided to build an app monitoring product. We've got an awesome team dedicated to it, and it's coming along fast. There are some core beliefs we're starting with:

  • Support multiple languages and frameworks. We know from experience that we're mixing together more languages and frameworks than ever before. It's key to view their performance from a single interface.
  • Easy time range diffs. The UI must be built to make it easy to compare deploys, config changes, or general trends as an application ages.
  • Context. How is performance for our highest-paying customers? Is a performance issue impacting everyone or a subset of customers? Is slowness primarily associated with one database node? Make it easy to apply the context that matters to you.
  • Aggregate what's slow. We learn a lot from investigating slow requests. Rather than paging through metrics on individual slow requests, aggregate the call stacks of slow requests together. Apply the context from above. Know with certainty that an endpoint is slow because of a specific query for X% of your customers. (A minimal sketch of the aggregation idea follows this list.)
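
To make that last point concrete, here's a minimal sketch of the aggregation idea (purely illustrative, not Scout's actual implementation): tally the frames that appear across the backtraces of slow requests so the most common culprits rise to the top.

```ruby
# Each trace is a list of "file:line in method" frames from one slow request.
# Count each frame once per request, then sort by how many slow requests it
# appears in. Illustrative only; not Scout's implementation.
def aggregate_slow_frames(slow_request_traces)
  counts = Hash.new(0)
  slow_request_traces.each do |trace|
    trace.uniq.each { |frame| counts[frame] += 1 }
  end
  counts.sort_by { |_frame, count| -count }
end

traces = [
  ['app/models/user.rb:42 in billing_history', 'app/controllers/reports_controller.rb:10 in show'],
  ['app/models/user.rb:42 in billing_history', 'app/models/invoice.rb:7 in totals']
]
aggregate_slow_frames(traces).first
# => ["app/models/user.rb:42 in billing_history", 2]
```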

Sign up for our BETA

Get yourself on our early access list. We'll be inviting folks into APM ahead of our October launch. It's a great time to help shape the direction of Scout APM.

More to come

We'll be blogging about the product dev process right here, starting with the design decisions behind our application health dashboard:

[Screenshot: the Scout APM health dashboard]

Follow us on Twitter for the highlights and sign up for early access.

 

Older posts: 1 2