Installing the ELK stack on Azure

The “ELK stack” is an open source product offering used for indexing and searching server logs. It combines three open source tools (Elasticsearch, Logstash, Kibana), all maintained by Elastic, the company behind Elasticsearch. Elasticsearch is the best known of the three and handles full-text indexing; Logstash handles the ETL process that loads logs into Elasticsearch; Kibana is a search and graphing UI. In my limited experience, Kibana looks a lot like Splunk, but since it is built on open source tools, it is potentially cheaper to run.

I’ve set this up to monitor several WordPress sites (including this one), which run on a Linode VM I set up several years ago, as well as some servers that run background jobs. I’ve set up some security monitoring to prevent hacking, but this seems like a good way to watch the whole system in case of a database crash or other incident.

While “ELK” started out as three components, more seem to be becoming available, such as Filebeat, a lightweight agent that can pump logs from Windows and Unix servers directly to Elasticsearch’s HTTP API.
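
For a taste of what that looks like, here is a minimal sketch of a Filebeat 1.x filebeat.yml – the paths and host here are assumptions, so adjust them for your servers:

filebeat:
  prospectors:
    # Ship every .log file under /var/log
    - paths:
        - "/var/log/*.log"
      input_type: log

output:
  # Send events straight to Elasticsearch's HTTP API
  elasticsearch:
    hosts: ["localhost:9200"]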

The guide below shows how to set up this stack on Azure – because it’s relatively easy for developers to get BizSpark subscriptions, it is worth exploring. I based this on a guide for AWS [1] that covers an earlier version of the applications involved, but I’ve used the latest versions currently available and fixed several issues I ran into. The official ELK documentation [2] is quite good as well, and it appears that a lot of effort is going into making this stack work well, which bodes well for the future.

To get started, we need to make a new VM in Azure. Microsoft recently upgraded the portal UI, which was a major improvement:
[Screenshot: creating a new VM in the Azure portal]

One interesting feature of the new UI is that as you drill down, it opens windows to the left, so even when you are several levels deep, it is easy to scroll back and forth and see how you got there.

For this environment, I’ve selected Ubuntu 15.04. It’s a huge help to have an SSH public key set up in advance as well.

When you set this up, there are many new choices for sizing:
[Screenshot: VM sizing options with estimated prices]

Microsoft has taken the novel step of estimating prices in advance, without making you use a complex spreadsheet or calculator app. Many guides for the ELK stack recommend putting each component on a separate virtual machine to compartmentalize RAM use – Elasticsearch and Logstash are particularly heavy. If you’re going to depend heavily on this, it is a good idea to build multiple hosts: you don’t want to lose the ability to search if the indexer gets DDoSed by high log traffic, as can happen when problems occur.

[Screenshot: remaining VM configuration options]

After you pick the options you want, you’ll be dropped on the dashboard screen, which will update once the new machine is ready. Once this finishes, you’ll be able to modify the machine’s properties (e.g. if you got the authentication settings wrong).

[Screenshot: Azure dashboard showing the new VM]

And you should be able to SSH in:

[Screenshot: SSH session into the new VM]

Using SuperPutty makes it easy to open multiple connections, which you’ll want; however, it is a bit of a pain to get PuTTY to remember the SSH key.

Once you get in, apply OS updates and install Java:

sudo apt-get update
sudo apt-get upgrade

sudo apt-get install openjdk-7-jre-headless
java -version

Install Elasticsearch (note I updated this to the 2.x series):

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch \
  | sudo apt-key add -
echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" \
  | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list

sudo apt-get update
sudo apt-get install elasticsearch

Then set Elasticsearch to run as a service, and start it:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
sudo /bin/systemctl start elasticsearch.service

From this machine, you can then run a test of Elasticsearch:

curl localhost:9200

Which should return a JSON payload:

{
  "name" : "Phantom Eagle",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.1",
    "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
    "build_timestamp" : "2015-12-15T13:05:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
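
You can also ask for a quick cluster health summary, which should report a status of green or yellow (yellow is normal for a single node, since replica shards have nowhere to go):

curl localhost:9200/_cat/health?v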

By default, Elasticsearch does not accept outside connections – we’ll come back to this later in the article. At this point Azure’s firewall is also in effect, which currently only allows SSH connections.
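
That binding is controlled by the network.host setting in /etc/elasticsearch/elasticsearch.yml. If you ever do need to accept outside connections (be careful – Elasticsearch has no built-in authentication in this version), it would look something like this sketch, where the address is a made-up example:

# /etc/elasticsearch/elasticsearch.yml
# The default binds to loopback only; a specific private IP is safer than 0.0.0.0
network.host: 10.0.0.4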

Logstash

Logstash moves log data from disk into Elasticsearch. It seems to have a reputation for being a bit heavy, and, like Elasticsearch, it requires Java, which makes it less than ideal for some servers. Later in this article we’ll go through a Go-based alternative that can be used instead of Logstash.

As with Elasticsearch, we need to add Logstash to the apt sources, install it, and set it to run as a service:

echo "deb http://packages.elastic.co/logstash/2.1/debian stable main" \
  | sudo tee -a /etc/apt/sources.list

sudo apt-get update 
sudo apt-get install logstash

sudo update-rc.d logstash defaults 97 8
sudo service logstash start

We can then see the status of the service:

sudo service logstash status

Which returns this:

logstash.service - LSB: Starts Logstash as a daemon.
   Loaded: loaded (/etc/init.d/logstash)
   Active: active (running) since Mon 2015-12-21 15:06:37 UTC; 7s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 20172 ExecStart=/etc/init.d/logstash start 
           (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/logstash.service
           +-20181 /usr/bin/java -XX:+UseParNewGC 
           -XX:+UseConcMarkSweepGC -Djava.awt.headless=tr...

Dec 21 15:06:37 elk systemd[1]: Starting LSB: Starts Logstash as a daemon.
Dec 21 15:06:37 elk logstash[20172]: logstash started.
Dec 21 15:06:37 elk systemd[1]: Started LSB: Starts Logstash as a daemon.

By default, Logstash writes its logs to /var/log/logstash/logstash.log.

To make Logstash index something, create the following file:

sudo vi /etc/logstash/conf.d/10-syslog.conf

Add the following – note the use of “hosts” (earlier versions used “host”):

input {
  file {
    type => "syslog"
    path => [ "/var/log/messages", "/var/log/*.log" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "localhost"
  }
}
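
As written, this ships each line as one unparsed message field. If you want searchable fields (host, program, PID), you can add a filter section to the same file, between input and output – as a sketch, here is the stock syslog grok pattern that most ELK guides use:

filter {
  if [type] == "syslog" {
    # Split the raw line into timestamp, host, program, pid, and message fields
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    # Use the timestamp from the log line, not the time Logstash read it
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}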

In order for Logstash to see all the logs in /var/log, you will need to add its user to the adm group:

sudo usermod -a -G adm logstash 

Without this, Logstash seems to give up and log nothing at all, rather than loading just the files it can see.
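
You can also ask Logstash to validate the config file before restarting – the deb package installs the binary under /opt/logstash, and it should print “Configuration OK” if the file parses:

sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/10-syslog.conf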

After making these changes, restart the service for this to take effect:

sudo service logstash restart

While we don’t have a UI to check yet, you can tail Logstash’s own log to make sure there are no errors:

tail -f /var/log/logstash/logstash.log

Kibana

Kibana is a UI for searching and charting data in the logs.

Oddly, it is installed differently from the other tools, and the process is a little more involved, even though it seems to be maintained by the same organization.

wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
tar -xvf kibana*
sudo mkdir -p /opt/kibana
sudo mv kibana-*/* /opt/kibana

You can check that it works by running it directly:

/opt/kibana/bin/kibana

If this works, you’ll get these logs in the console:

  log   [15:21:18.638] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [15:21:18.687] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [15:21:18.705] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [15:21:18.711] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [15:21:18.729] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [15:21:18.744] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
  log   [15:21:18.757] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
  log   [15:21:18.784] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [15:21:18.824] [info][listening] Server running at http://0.0.0.0:5601
  log   [15:21:23.853] [info][status][plugin:elasticsearch] Status changed from yellow to yellow - No existing Kibana index found

In order to hit the Kibana site from your local machine, you’ll need to add a firewall rule in Azure. By default Kibana runs on port 5601, so I re-mapped this to 80:

[Screenshot: Azure firewall rule mapping public port 80 to 5601]

I also restrict access to this port to my own IP. The Kibana UI currently has no authentication, so this stops unauthorized access. To solve this more robustly, people seem to use firewall rules, front Kibana with Nginx, or both.
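
For reference, a minimal sketch of the Nginx approach, assuming Nginx and apache2-utils (for htpasswd) are already installed – the user name and file paths here are just examples:

# Create a login (you'll be prompted for a password)
sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

# /etc/nginx/sites-available/kibana
server {
    listen 80;

    # Require a username/password before proxying anything through
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
    }
}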

Now, you can finally see the search UI. Unfortunately I missed a screenshot here – you’ll be prompted to complete a setup step, which configures a pattern that identifies the names of indexes in Elasticsearch. If everything up to this point worked, you can just click through that and start seeing the settings screens:

[Screenshot: Kibana index settings screen]

If you get an error that says “Unable to fetch mapping. Do you have indices matching the pattern?”, the Logstash setup failed – Logstash creates the indexes in Elasticsearch when it runs. The best way to fix this is to go back and tail the Logstash logs.
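
You can also check for yourself whether any indexes exist by asking Elasticsearch directly:

curl localhost:9200/_cat/indices?v

A healthy setup shows one index per day, with names like logstash-2015.12.21; if nothing comes back, Logstash isn’t writing.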

Once that works, we should make Kibana a service:

cd /etc/init.d && \
  sudo wget https://raw.githubusercontent.com/akabdog/scripts/master/kibana4_init \
  -O kibana4

sudo chmod +x /etc/init.d/kibana4
sudo update-rc.d kibana4 defaults 96 9
sudo service kibana4 start

You should verify that GitHub script yourself, or upload it manually, but for the sake of this example it seems to work. I had to change the user Kibana runs as (to my own account) in order for it to run correctly as a service, by changing the “USER=” line in /etc/init.d/kibana4.
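
Rather than your own account, you could also create a dedicated system user and point “USER=” at it – a sketch, where the user name is arbitrary:

# Create a system user with no login shell, and give it the Kibana files
sudo useradd -r -s /bin/false kibana
sudo chown -R kibana:kibana /opt/kibana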

Note that the provided script also expects Elasticsearch to be running on port 9200, so if you changed that earlier, you’ll need to change it here as well, in the Kibana config file:

sudo vi /opt/kibana/config/kibana.yml

[Screenshot: the Kibana config file]
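
The relevant lines are the Elasticsearch URL, and optionally the port Kibana itself listens on:

# /opt/kibana/config/kibana.yml
server.port: 5601
elasticsearch.url: "http://localhost:9200"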

And now you have a nice logging tool. It looks a lot like Splunk (notice how it has figured out some fields you can search on).

This only monitors itself right now, so depending on your needs, continue on to monitoring WordPress with Logbeat or monitoring AppHarbor sites with ELK.

  1. http://logz.io/blog/install-elk-stack-amazon-aws/
  2. https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html