Why put Rust in our Python Monitoring agent?

By Chris

Prior to adding Python performance monitoring, we'd written monitoring agents for Ruby and Elixir. Our Ruby and Elixir agents had duplicated much of their code between them, and we didn't want to add a third copy of the agent-plumbing code. The overlapping code included things like JSON payload format, SQL statement parsing, temporary data storage and compaction, and a number of internal business logic components.

This plumbing code is about 80% of the agent code! Only 20% is the actual instrumentation of application code.

So, starting with Python, our goal became "how do we prevent more duplication?" To do that, we decided to split the agent into two components: a language agent and a core agent. The language agent is the Python component, and the core agent is a standalone executable that contains most of the shared logic.
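To make the split concrete, here is a minimal sketch of how a language agent might hand measurements off to a separate core agent process. The binary name, port, and wire format below are assumptions for illustration, not Scout's actual protocol:

```python
import json
import socket
import subprocess

# Hypothetical sketch: the language agent launches the standalone core agent
# and ships instrumentation data to it as length-prefixed JSON over a local
# TCP socket. The names, port, and payload shape are made up for illustration.

CORE_AGENT_BIN = "./core-agent"   # standalone executable (e.g. written in Rust)
CORE_AGENT_PORT = 6590            # placeholder local port

def start_core_agent():
    """Launch the core agent as a separate process."""
    return subprocess.Popen([CORE_AGENT_BIN, "--port", str(CORE_AGENT_PORT)])

def send_span(name, duration_ms):
    """Serialize one timing measurement and hand it to the core agent, which
    owns payload formatting, compaction, and delivery to the backend."""
    payload = json.dumps({"span": name, "duration_ms": duration_ms}).encode()
    with socket.create_connection(("127.0.0.1", CORE_AGENT_PORT)) as conn:
        # Length-prefix the message so the core agent can frame it.
        conn.sendall(len(payload).to_bytes(4, "big") + payload)

if __name__ == "__main__":
    start_core_agent()
    send_span("SqlQuery/users.find", 12.4)
```

The language agent stays small: it measures, serializes, and forwards, while everything that used to be duplicated across Ruby, Elixir, and Python lives in the one core agent.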

Read More →

 

Your Rails & Elixir performance metrics 📈 inside Chrome Dev Tools

By Derek

Browser development tools - like Chrome Dev Tools - are vital for debugging client-side performance issues. However, server-side performance metrics have been outside the browser's reach.

That changes with the Server Timing API. Supported by Chrome 65+, Firefox 59+, and more browsers, the Server Timing API defines a spec that enables a server to communicate performance metrics about the request-response cycle to the user agent. When you use our open-source Ruby or Elixir server timing libraries, you'll see a breakdown of server-side database queries, view rendering, and more:

[Screenshot: server-side timing breakdown shown in Chrome Dev Tools]

Combined with the already strong client-side browser performance tools, this paints a full picture of web performance.
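Under the hood, the spec boils down to a single response header. Here is a minimal, hypothetical WSGI sketch (not our Ruby or Elixir libraries) that emits a Server-Timing header with hard-coded values, just to show the format:

```python
# Illustrative only: a bare-bones WSGI middleware that adds a Server-Timing
# header so browser dev tools can display server-side timings. In a real
# agent the durations would be measured per request, not hard-coded.

def server_timing_middleware(app):
    def wrapped(environ, start_response):
        timings = {"db": 28.3, "view": 47.2}   # milliseconds, for demonstration
        header_value = ", ".join(
            f"{name};dur={dur}" for name, dur in timings.items()
        )

        def start_response_with_timing(status, headers, exc_info=None):
            headers.append(("Server-Timing", header_value))
            return start_response(status, headers, exc_info)

        return app(environ, start_response_with_timing)
    return wrapped
```

The resulting header (for example, `Server-Timing: db;dur=28.3, view;dur=47.2`) is what Chrome and Firefox render alongside their client-side waterfall.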

Get started with Scout's open-source server timing libraries for Ruby and Elixir.

A Scout account isn't required, but it does make investigating slow response times more fun.

 

Scout's top-secret 4-point observability plan

By Derek

Observability: the degree to which you can ask new questions of your system without having to ship new code or gather new data.

Above is my slightly modified definition of observability, mostly stolen from Charity Majors in Observability: A Manifesto.

Observability is increasingly important. Modern apps and services are more resilient and fail in soft, unpredictable ways. These failures sit too far out on the edges to appear in charts. For example, an app may perform dramatically worse for one specific user who happens to have a lot of associated database records. That would be hard to spot on a response time chart for an app with reasonable throughput.

However, understanding observability is increasingly confusing. Sometimes observability is presented as an equation: observability = metrics + logging + tracing. The implication is that if a vendor does those three things in a single product, they've made your system observable.

If observability is just metrics, logging, and tracing, that's like saying usability for a modern app is composed of a mobile app, a responsive web app, and an API. Authorize.net has those things. So does Stripe. One is clearly more usable than the other.

I think it's more valuable to think about how your existing monitoring tools can be adapted to ask more questions. There's significant room for this in standalone metrics, logging, and tracing tools.

At Scout, we've been thinking about how we can help folks ask more performance-related questions about their apps. We're not building a custom metrics ingestion system. We're not adding a structured logging service. We're focusing on our slice of the world.

Below I'll share our top-secret observability plan.

Read More →

 

Introducing Python Performance Monitoring

By Derek

GitHub's State of the Octoverse 2017 revealed that Python is now the second-most popular language on GitHub, with 40 percent more pull requests opened in 2017. We couldn't help but notice. Today, we're excited to add Python to our existing Rails Monitoring and Elixir Monitoring agents.


Our Python support is currently in tech preview: this means it is free to use, but also brand new and not yet feature-equivalent to our Ruby and Elixir monitoring agents. To start, we're monitoring Django applications (update: we've added support for Flask) and their SQL queries, views, and templates, but our library coverage will increase as we near general availability. You can follow along and suggest what you'd like to see next on GitHub.
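For Django, the tech-preview setup is roughly a pip install plus a few settings. Treat the sketch below as a hedged example: setting names may change before general availability, and the key and app name are placeholders, so check the docs for specifics.

```python
# pip install scout-apm

# settings.py (Django): illustrative configuration; the key and name below
# are placeholders, and the exact settings may differ in the current docs.
INSTALLED_APPS = [
    "scout_apm.django",        # instruments views, templates, and SQL queries
    # ... your other apps ...
]

SCOUT_MONITOR = True
SCOUT_KEY = "YOUR_AGENT_KEY"   # placeholder: found in your Scout account
SCOUT_NAME = "My Django App"   # placeholder: how the app appears in the Scout UI
```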

Scout isn't the first company to monitor Python applications. What's special about Scout is the focus: we've put an incredible effort into surfacing the types of time-intensive soft failures that impact today's applications. Each of your customers has a unique experience with your application. Scout makes it easier to identify and understand why issues are impacting one subset of customers versus another.

Relevant links

 

Rollbar+Scout: a legit New Relic alternative

By Jason

The New Relic price tag goes up dramatically as your server footprint grows. This might not be an issue if you are utilizing New Relic’s full product suite, but what if you just care about error and performance monitoring?

In that case, there's an alternative with richer features for exactly those needs. When you combine Rollbar (errors) and Scout (performance), you're choosing two best-of-breed, focused products that actually play well together.

First, let’s see what’s special about Rollbar’s error monitoring capabilities. Then, we’ll show how to combine Rollbar and Scout to give a unified app stability experience.

Read More →

 

Setting up a Rails app for CodeBuild, CodeDeploy, and CodePipeline on AWS

By Derek

If you’ve followed along with our previous episodes, we’ve covered many different aspects of setting up a production service. We’ve used many different products to simplify the day-to-day operations of running and maintaining an application.

We’ve used Scout for monitoring our application, LogDNA for aggregating our logs, HoneyBadger for our exception handling, and a host of AWS services for running our services, managing our SSL certs, hosting our Docker images, etc.

But one thing we haven’t tidied up yet is one of the places we spend most of our time: building features, merging those features, running tests, and deploying that code.

In today’s episode, we’ll be talking about how to use a few AWS services, including CodeBuild, CodeDeploy, and CodePipeline, to streamline getting features in front of our customers.

Read More →

 
