Monitor AppHarbor with the ELK Stack

If you followed my previous post on setting up the ELK stack on Azure, you can configure AppHarbor to use it fairly easily.

You need to open a port on Azure (I’ve chosen 9000). You can restrict the inbound IP range to AppHarbor’s IPs [1].
[Screenshot: opening port 9000 in the Azure portal]
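If you prefer the command line to the portal, a rule like the following sketch would do the same thing with the newer Azure CLI on a network security group. The resource group name, NSG name, and source prefix here are placeholders, not real values:

# Sketch: allow inbound traffic to Logstash on port 9000.
# "elk-rg", "elk-nsg", and the source prefix are placeholders --
# substitute your resource group, NSG, and AppHarbor's published IPs.
az network nsg rule create \
  --resource-group elk-rg \
  --nsg-name elk-nsg \
  --name allow-appharbor-logs \
  --priority 100 \
  --access Allow \
  --protocol '*' \
  --destination-port-ranges 9000 \
  --source-address-prefixes 203.0.113.0/24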

Once you do this, create a new file in /etc/logstash/conf.d called AppHarbor.conf with the following contents:

input {
  # Listen for AppHarbor's log drain on both TCP and UDP
  tcp {
    port => 9000
    type => "syslog"
  }
  udp {
    port => 9000
    type => "syslog"
  }
}
filter {
  # Capture the whole line as-is; the messages don't parse cleanly as syslog
  grok {
    match => [ "message", "%{GREEDYDATA:syslog_message}" ]
  }
  mutate { replace => { type => "appharbor" } }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
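Before restarting Logstash, it’s worth validating the file. The exact path and flags vary by version, but on a typical package install something like this works:

# Check the config for syntax errors (newer Logstash versions use
# -t / --config.test_and_exit instead of --configtest)
sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/AppHarbor.conf

# Restart so the new input starts listening on port 9000
sudo service logstash restart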

While the AppHarbor logs supposedly conform to the syslog format, I did not find this to be true. I haven’t established the best way to parse them yet, but grabbing the whole message with GREEDYDATA like this is easier to control than using Logstash’s syslog input.

If you don’t do this, you will get a lot of messages tagged with “_grokparsefailure_sysloginput”.

Adding this to AppHarbor is super easy. Start by selecting a logging configuration:
[Screenshot: AppHarbor logging configuration page]

Select the option to add a new log drain:
[Screenshot: adding a new log drain]

Then, add the URL. It needs to be in the format “syslog://test.domain.com:9000”. If you use “http” instead of “syslog”, you will get errors that say “(output buffer overflow)” and lose all the useful data [2].
[Screenshot: entering the log drain URL]
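Before relying on the drain, you can sanity-check that the port is reachable from outside. Here “test.domain.com” stands in for your own ELK server:

# Push a fake log line at the TCP input and confirm it appears in Kibana
# (some netcat variants need -q 1 to exit after sending)
echo "test message from my workstation" | nc test.domain.com 9000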

Once you finish, this is what you’ll see:
[Screenshot: the configured log drain listed in AppHarbor]

And you should be all set.
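To confirm events are actually arriving, you can also query Elasticsearch directly for the “appharbor” type we set in the filter above:

# Fetch one recent appharbor event from the Logstash indices
curl 'http://localhost:9200/logstash-*/_search?q=type:appharbor&size=1&pretty'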

If you want to get the contents of these log messages into specific fields, I recommend the grok debugger; a sketch of what such a filter might look like is below.
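For example, if your drained messages turned out to look like “2015-06-01T12:00:00 web.1: GET / 200”, a pattern along these lines would split them into fields. The pattern and field names here are purely illustrative, not AppHarbor’s actual format:

filter {
  grok {
    # Hypothetical pattern -- adjust once you've confirmed the real format
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:source}: %{GREEDYDATA:log_message}" ]
  }
}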

  1. https://support.appharbor.com/kb/tips-and-tricks/application-server-ips-ec2-configuration
  2. http://stackoverflow.com/questions/17532337/error-l10-output-buffer-overflow-when-writing-to-splunk-drain
