2014 was a year of major updates to Scout. Some stats on what’s been a fun year:
- Scout Realtime - In January, we released our open-source standalone realtime monitoring agent. Scout Realtime was the second most popular repository on GitHub during its release week (trailing Popcorn - we'll happily take the runner-up spot behind free movie streaming).
- New Server View UI - In February, we released our new d3-powered server view interface and introduced automatic process monitoring. We think there's no better single-page view of your server's health.
- New Dashboards UI - July brought our new dashboards UI. From quick ad-hoc charts to a persistent display on an external monitor, we think there's no better way to view your key metrics than your new dashboards.
- New API - In September, we debuted our new RESTful API.
- New Realtime Charts BETA - In December, we announced our new realtime charts experience. Viewing every-second-updating charts has never been easier.
Blog Post Highlights
Thanks for all of your support, feedback, and hard-earned money in 2014. Our mission of lightweight, non-enterprisey server monitoring continues next year.
Migrating backend search technologies on a high-throughput production site is no easy task, but Vector Media Group recently faced exactly that. With a popular client site struggling under the load of complex MySQL full-text search queries, they switched to Elasticsearch.
I spoke with Matt Weinberg to learn how the migration went. Was the switch to Elasticsearch worth the effort?
How did you handle search before Elasticsearch?
We built a custom search using MySQL queries and integrated it into ExpressionEngine, the CMS we used for the project.
What were the problems with this approach?
To support full-text search, we needed to use the MySQL MyISAM storage engine. This has major downsides, the primary one being full table locks: while a table is being updated, no other reads or writes on that table can proceed.
Our tables have considerable update activity, so this would result in sometimes-significant performance issues.
When your database server is under heavy load, an application server is running out of memory, or you are rolling out a major deploy, you want instant performance data. In these times, it's about the present, not the past.
We're happy to introduce in-place-on-your-dashboard, every-second-updating realtime charts.
It's a seamless transition from historical to now.
- Our new scoutd monitoring agent. The new agent runs as a daemon (instead of running Scout via cron).
- Ruby 1.9.2+
- Ubuntu 12.04+, CentOS/Red Hat 6+, or Fedora. We'll be adding support for more distros.
Email us for access
We're gradually rolling out the new realtime UI + scoutd to gather feedback. To try our new realtime charts, email firstname.lastname@example.org with your account name.
The Linux kernel is an incredible circus performer, carefully juggling many processes and their resource needs to keep your server humming along. The kernel is also all about equity: when there is competition for resources, the kernel tries to distribute those resources fairly.
However, what if you've got an important process that needs priority? What about a low-priority process? Or what about limiting resources for a group of processes?
The kernel can't tell which CPU-hungry processes are important without your help.
Most processes start at the same priority level, and the Linux kernel schedules time for each task evenly on the processor. Have a CPU-intensive process that can run at a lower priority? Then you need to tell the scheduler about it!
There are at least three ways in which you can control how much CPU time a process gets:
- Use the nice command to manually lower the task's priority.
- Use the cpulimit command to repeatedly pause the process so that it doesn't exceed a certain limit.
- Use Linux's built-in control groups (cgroups), a mechanism which tells the scheduler to limit the amount of resources available to the process.
Let's look at how these work and the pros and cons of each.
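To make the three options concrete, here's a minimal shell sketch, assuming a Linux box. Only the nice example is run directly; cpulimit and the cgroup-tools commands (cgcreate, cgset, cgexec) are separate installs, and the PID, group name, and script path shown are hypothetical:

```shell
# 1) nice: start a job at the lowest scheduling priority (nice value 19).
#    The scheduler gives it CPU time only when nothing more important is waiting.
#    A trivial command stands in for your real CPU-hungry job here:
nice -n 19 sh -c 'echo "low-priority job ran"'

# 2) cpulimit: cap a running process (hypothetical PID 1234) at ~25% of one
#    core by repeatedly pausing and resuming it:
#      cpulimit --pid 1234 --limit 25

# 3) cgroups: give a whole group of processes a smaller share of CPU
#    (cgroups v1 tooling shown; exact commands vary by distro):
#      sudo cgcreate -g cpu:/lowpriority
#      sudo cgset -r cpu.shares=128 lowpriority
#      sudo cgexec -g cpu:/lowpriority ./batch-job.sh
```

Note the difference in kind: nice and cpu.shares are relative weights that only matter when there's contention for the CPU, while cpulimit enforces a hard ceiling even on an otherwise idle machine.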
We're happy to announce our first enhancement to v2 of our API: chart markers.
Use chart markers to note significant events like production deploys, infrastructure upgrades, performance enhancements, etc.
Creating a marker is simple:
curl -X POST --data "notes=deployed production" https://scoutapp.com/api/v2/KEY/markers
Markers are applied to all charts on a dashboard.
Adding a marker after a Capistrano Deploy
Here's an example of a simple hook to create a marker after a deploy if you are using Capistrano:
after "deploy:restart", "deploy:mark_release_via_api"

task :mark_release_via_api, hosts: "app1.acme.com" do
  run_locally %Q(curl --data "notes=deployed production" https://scoutapp.com/api/v2/API_KEY/markers)
end
Many of you come to Scout from Nagios. We'd like to make the transition to Scout easier.
How about having our agent run your Nagios Plugins? To try it, SSH onto your server and run:
gem install scout --pre
Then, in your crontab entry for Scout, add the --nagios option:
* * * * * scout KEY --nagios
This will run the commands defined in your /etc/nagios/nrpe.cfg file. If your config file is somewhere else (or you only want to run a subset of commands), you can provide the path to that file:
* * * * * scout KEY --nagios /.scout/nagios.cfg
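For reference, each entry in an NRPE config file maps a command name to a plugin invocation. A typical check_load line looks like the following (illustrative only -- plugin paths and thresholds vary by distro and setup):

```
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
```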
Data from Nagios plugins will show up with each of your servers and can be placed on charts, just like any other metric in Scout.
Send your feedback to email@example.com. We'll give you a vintage Scout T-Shirt for your thoughts.
You try creating a file on a server and see this error message:
No space left on device
...but you've got plenty of space:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 10321208 3159012 6637908 33% /
Who is the invisible monster chewing up all of your space?
Why, the inode monster of course!
What are inodes?
An index node (or inode) stores metadata (file size, file type, etc.) for a file system object (like a file or a directory). There is one inode per file system object.
An inode doesn't store the file contents or the name: it simply points to a specific file or directory.
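To see whether it's inodes (rather than bytes) that are exhausted, df has an inode mode, and find can help track down directories bloated with small files. A quick sketch, using /tmp as a stand-in for whatever path you suspect:

```shell
# Inode usage per filesystem -- IUse% at 100% means "No space left on device"
# even when plain df shows plenty of free bytes:
df -i

# Hunt for the culprit: count files under a suspect directory.
# Millions of tiny files (sessions, cache entries, mail queues) are typical.
find /tmp -xdev -type f | wc -l
```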
During a team camp among the lofty peaks of Breckenridge, Colorado, we talked a lot about the future of Scout and monitoring in general. Big mountains and nature have a way of doing that.
One thing that was getting our nerd juices flowing: Go.
At Monitorama in May, it was clear that Go was becoming the language of choice for performant yet fun-to-develop daemons.
After our morning hike fueled us with crisp mountain air, we said: why not build a light Scout daemon in Go? As in, right this afternoon?
Our first-generation API was feeling pretty dated. Time for an update!
Skip to the new API documentation: https://scoutapp.com/info/api
A Modern RESTful API with Token-based Authentication
To get started, your account key -- the same one you use in setting up the agent -- is also an API key. You can create and revoke additional API keys at any time through the web interface.
- We use HTTP verbs and return codes with respect.
- Everything in the new API can be performed via cURL.
- All data is returned in JSON format.
The Quickest of Quick-starts
To get recent alerts:
To disable notifications on a server:
curl --data "notifications=false" https://scoutapp.com/api/v2/YOURAPIKEY/servers/HOSTNAME
Room to Grow
With the updated API, there's a foundation to add more endpoints as needed. If there's something you need to make Scout play nicely with your other systems, let us know!
Learn more about our API at https://scoutapp.com/info/api
Your high-powered server is suddenly running dog slow, and you need to remember the troubleshooting steps again. Bookmark this page for a ready reminder the next time you need to diagnose a slow server.
Get on "top" of it
Linux's top command provides a wealth of troubleshooting information, but you have to know what you're looking for. Reference this diagram as you go through the steps below:
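When you want to capture the machine's state for later (or for a ticket), top can also run non-interactively in batch mode, and uptime gives you the load averages on their own:

```shell
# One non-interactive snapshot of top, trimmed to the summary and top tasks:
top -bn1 | head -15

# Load averages (1, 5, and 15 minutes) by themselves:
uptime
```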