Honeybadger's Ben Curtis on bootstrapping, competing against VC-funded companies, and life managing a large Rails app
Exceptions happen. To everyone. For half a decade, Honeybadger has given sanity to the art of bug hunting, monitoring exceptions for Heroku, eBay, DigitalOcean, and many more. I was able to steal some time from Ben Curtis, one of the Honeybadger co-founders, to talk about their origin story, life as a self-funded company, his favorite web framework in 2017, the infrastructure behind Honeybadger, and more.
What's the Honeybadger origin story?
We started Honeybadger in 2012. Starr (co-founder) and I were working together at a startup in Seattle where we were building a Ruby on Rails application. We were using an existing exception tracking service to track the exceptions happening in our application. There was really only one option at the time.
One day we got an error in our application, but when we attempted to view the information about the error in the UI, there were no details to be found. I emailed their customer support and said, “we had this error in our application, but there is no detail about it in your UI… can you tell me what’s going on?” I got a response back from them saying, basically, “yup, we see you reported an error to us, but the detailed information isn’t available.”
I was more than a little frustrated at having what I told them repeated back to me almost verbatim. I turned to Starr and said, “we should just build our own exception monitoring app.” We felt that developers deserved better service than what we had received, and that we could provide it to them. We bet that there were others out there who were also frustrated and that they would be willing to pay for a better alternative. So we started working on Honeybadger, and when we launched a few months later we found that we were right. We had paying customers on day one.
We love making developers' lives better. It's been a blast.
How many co-founders? What's the team like today?
Starr and I were joined by Josh shortly after we got started, bringing the number of co-founders to three. We’re still at three today.
Our goal is to minimize the need for human intervention as much as possible, so we don’t need to add more humans.
Many funded startups measure success by headcount. Your team has an opposite take on this.
We were inspired by Craigslist on this. I've always been impressed by what I've heard about how they run their business - trying to maximize profit per employee - so we have set that as a goal for ourselves. We view it as a way to be efficient with our resources. If we keep that as a goal, then we should always be profitable, regardless of how many employees we have.
Has Honeybadger raised funding?
The only funding we’ve had is the money the three of us put into the business when we started. The rest of the investment has been sweat equity, and we’ve sustained the business through making our customers happy and charging their credit cards. We’re incredibly grateful for every customer we have because we’re living the bootstrapper’s dream.
I'm sure you've been approached by VCs. Why has your team resisted?
The main reason we haven't entertained the thought of outside investment is that we knew from the start that we didn't want to have any other boss besides our customers. We wanted to be able to run the business as we saw fit, without any outside pressure on us. Ultimately, our goal was to run a sustainable, profitable business with manageable growth, and that's just not compatible with going the VC route.
How does Honeybadger compete differently than its VC-funded competitors?
When you're a funded company it might make sense to drop $50k on a booth at a conference, but when it's your own money you might choose to spend it a little differently. You have to get creative with your marketing, and do the things that don't cost a lot of money, at least initially.
One thing we did early on that worked really well for us was promoted tweets. We wrote blog posts that would be of interest to Ruby developers, and we kept on top of other Ruby-related news and products, and we would tweet about those things regularly. Then we would pay to promote some of those tweets. As we started showing up in more people's Twitter feeds, we would get more traffic coming our way, which would lead to more customers. It was a cheap way of generating awareness. Today we spend most of our marketing time on writing content for our blog or traveling to conferences.
What's your role today?
I spend a lot of my time working on operations, making sure the servers are happy and that we are scaling up as we add more customers and they send us more traffic. I also spend some time on writing code, marketing, and random business-related administrivia. We don’t have rigidly-defined roles at Honeybadger, so we all chip in on everything from time to time.
If you were starting Honeybadger again today, what language/framework would you build it in?
I’d probably choose Rails again. There are other great frameworks that have popped up since we started, but I still love Ruby, and I prefer to spend my code-writing time with the language I love best. We have dabbled with Go, Node, and Elixir, and any of those would be good choices if we were to start over.
What languages/frameworks have you seen the most growth in over the past year or so?
We’ve seen a lot of growth in the Elixir community over the past year, and I can see why. The syntax is comfortable for people like me who love Ruby, and the performance story is compelling. It’s also got that functional thing going for it, which is just cool.
What are your favorite SaaS tools?
I’m quite fond of Librato, since I need to keep track of the performance of our infrastructure. It makes it easy to track all kinds of metrics, get alerted when things go wrong, and create dashboards to quickly assess what’s going on across our stack. Intercom is another tool that I love, since it makes it easy for us to provide awesome support to our customers.
Mind sharing a bit about the tech stack / infrastructure behind HB?
We have a fair amount of Ruby code, from our monolithic Rails app that provides the UI and houses the Sidekiq workers that do just about everything that needs doing, to our Sinatra apps that handle the ingestion for our API and Heroku log drain endpoints. That code is running on Linux servers (I’m old-school ops, so we haven’t moved to containers yet) and is talking to Postgres and Cassandra. We use Redis heavily, not just for Sidekiq, but also for write caching, read caching, and so on, and we’re using AWS Lambda to feed documents to Solr Cloud.
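To illustrate the write-caching idea mentioned above: a common pattern is to buffer counter increments in Redis on the hot path and drain them to Postgres in periodic batches. This is a hypothetical sketch, not Honeybadger's actual code, and it substitutes a plain Ruby `Hash` for Redis so it runs standalone; the class and method names are made up for illustration.

```ruby
# Hypothetical sketch of write caching for an error-tracking ingestion path.
# A plain Hash stands in for Redis here; in production you would use a real
# Redis client and commands like HINCRBY to do the buffering.
class ErrorCountBuffer
  def initialize(store = Hash.new(0))
    @store = store # stand-in for a Redis hash of project_id => pending count
  end

  # Called on the hot path for every incoming error notice: cheap increment,
  # no database write.
  def record(project_id)
    @store[project_id] += 1
  end

  # Called periodically (e.g. from a background worker) to drain the buffer
  # into the database with one bulk write per project instead of one write
  # per error notice. Yields each (project_id, count) pair to the caller.
  def flush
    pending = @store.dup
    @store.clear
    pending.each { |project_id, count| yield(project_id, count) }
    pending
  end
end
```

A Sidekiq-style scheduled job would call `flush` and issue one `UPDATE ... SET count = count + ?` per project, which is the kind of write coalescing that keeps Postgres happy under bursty error traffic.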
It’s a bit of a handful for a small team to manage, but we automate as much as we can, like using Ansible for provisioning, Consul for service discovery, and AWS features like auto-scaling, etc. Any time we have something to add to the stack, we take the time to make sure it can be deployed and configured via scripts. Whenever we have a production issue, we try to make sure it won’t happen again by building more resiliency into the system or improving the automation around recovery.
Why hasn't containerization caught on with your team?
I like the idea of using a container for a Rails app to make deployment a little easier, but it's just not quite there yet for me. I did a quick experiment with deploying our Rails app to Elastic Beanstalk (using their Docker option) because I thought it might save me the time of setting up instances, load balancers, etc., manually. It was a fun experiment, but there are a number of fiddly bits that are specific to that environment, and I ended up feeling like I was doing more work than I would if I were just managing all the pieces myself. I suppose it would be more appealing if I were just starting out and didn't have years of experience running apps in production, but at this point the path of least resistance for me is just using Ansible and Capistrano.
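For context, the Capistrano side of the Ansible-plus-Capistrano workflow he describes typically amounts to a short Ruby DSL file. The fragment below is a hypothetical `config/deploy.rb` for a generic Rails app, not Honeybadger's actual configuration; the application name, repository URL, and paths are placeholders.

```ruby
# config/deploy.rb -- hypothetical Capistrano 3 setup for a Rails app.
# All names and paths below are placeholders, not Honeybadger's real ones.
lock "~> 3.17"

set :application, "example_app"
set :repo_url,    "git@example.com:acme/example_app.git"
set :deploy_to,   "/var/www/example_app"

# Keep shared state (secrets, logs, pids) out of each timestamped release
# directory so deploys stay atomic and rollbacks stay cheap.
append :linked_files, "config/database.yml"
append :linked_dirs,  "log", "tmp/pids"

# Restart the app server after each deploy (Passenger-style restart touch).
after "deploy:publishing", "deploy:restart"
namespace :deploy do
  task :restart do
    on roles(:app) { execute :touch, release_path.join("tmp/restart.txt") }
  end
end
```

With servers already provisioned by Ansible, `cap production deploy` then handles code checkout, symlinking, and restart, which is the "just managing all the pieces myself" path he mentions.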
What's the hardest technical problem you have to solve?
The hardest problem has been scaling our search, as we have to search across a lot of data that is quite varied. We’ve used Postgres’ full-text search, ElasticSearch back in the pre-1.0 days, and, most recently, Solr Cloud — I think we’ve completely rebuilt our search at least four times. I don’t think we’ve had to rework search due to mistakes, but rather because we just outgrew each iteration. In other words, all of the technologies we have tried worked great until they didn’t. The solution we have today definitely would have been overkill when we were just starting out, and I’m guessing that it, too, stands a good chance of being replaced at some point.