Our Logstash / Kibana setup has four main components:

  • Logstash: The server component of Logstash that processes incoming logs
  • Elasticsearch: Stores all of the logs
  • Kibana: Web interface for searching and visualizing logs
  • Logstash Forwarder: Installed on each server that will send its logs to Logstash; it acts as a log forwarding agent and uses the lumberjack networking protocol to communicate with Logstash

Install Java 7

yum -y install java-1.7.0-openjdk

Install Elasticsearch

Import the Elasticsearch public GPG key into rpm:

rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch

Create and edit a new yum repository file for Elasticsearch:

vi /etc/yum.repos.d/elasticsearch.repo

Add the following (the baseurl shown is the standard Elasticsearch 1.x CentOS repository):

[elasticsearch-1.1]
name=Elasticsearch repository for 1.1.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.1/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Install Elasticsearch 1.1.1:

yum -y install elasticsearch-1.1.1

Elasticsearch is now installed. Let’s edit the configuration:

vi /etc/elasticsearch/elasticsearch.yml

Add the following lines to disable dynamic scripts, restrict access to localhost, and disable multicast discovery:

script.disable_dynamic: true
network.host: localhost
discovery.zen.ping.multicast.enabled: false

Start Elasticsearch:

service elasticsearch restart

Configure Elasticsearch to start on boot:

chkconfig elasticsearch on

Install Kibana

cd ~; curl -O https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz
tar xvf kibana-3.0.1.tar.gz
vi ~/kibana-3.0.1/config.js

In the config.js file, find the line that specifies the elasticsearch server URL and change the port (9200 by default) to 80, since we will access Elasticsearch through the Nginx proxy:

elasticsearch: "http://"+window.location.hostname+":80",

Create a directory to serve Kibana from, and copy the files into it:

mkdir -p /usr/share/nginx/kibana3
cp -R ~/kibana-3.0.1/* /usr/share/nginx/kibana3/

Install Nginx

Nginx is not in the default CentOS 6 repositories, so add the EPEL repository first:

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

yum -y install nginx

Download the sample Nginx configuration from Kibana’s github repository to your home directory:

cd ~; curl -OL https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf

vi nginx.conf

Change the server_name to your FQDN (or localhost if you aren’t using a domain name) and root to where we installed Kibana:

  server_name FQDN;
  root  /usr/share/nginx/kibana3;

Now copy it over your Nginx default server configuration:

cp ~/nginx.conf /etc/nginx/conf.d/default.conf

Install httpd-tools so we can use htpasswd to generate a username and password:

yum install httpd-tools-2.2.15

Then generate a login that will be used in Kibana to save and share dashboards

htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd user
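Nginx’s auth_basic accepts Apache-style apr1 MD5 hashes, so if httpd-tools is unavailable an equivalent entry can be generated with openssl alone. A sketch with placeholder credentials:

```shell
# Build an htpasswd-style line without httpd-tools (assumes openssl supports
# the Apache apr1 scheme, as stock CentOS builds do). "user" and "secret"
# are placeholder credentials for illustration only.
entry="user:$(openssl passwd -apr1 secret)"
echo "$entry"
# The line can then be appended to the file referenced by the Nginx config:
#   echo "$entry" >> /etc/nginx/conf.d/kibana.myhost.org.htpasswd
```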

Now restart Nginx and enable it on boot:

service nginx restart
chkconfig --levels 235 nginx on

Install Logstash

vi /etc/yum.repos.d/logstash.repo

Add the following:

[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Install Logstash 1.4.2

yum -y install logstash-1.4.2

Generate SSL Certificates

Generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/…):

cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
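Since the forwarder will refuse to connect if the certificate is broken, it can save debugging time to sanity-check the pair after generating it. A sketch that runs the same openssl steps against temporary paths (substitute the real /etc/pki/tls files when checking your actual certificate):

```shell
# Generate a throwaway cert/key pair the same way as above, then inspect it.
tmp=$(mktemp -d)
openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 \
  -keyout "$tmp/logstash-forwarder.key" -out "$tmp/logstash-forwarder.crt" 2>/dev/null

# Confirm the validity window (notBefore/notAfter) ...
openssl x509 -in "$tmp/logstash-forwarder.crt" -noout -dates

# ... and that the private key actually matches the certificate.
cert_mod=$(openssl x509 -in "$tmp/logstash-forwarder.crt" -noout -modulus)
key_mod=$(openssl rsa -in "$tmp/logstash-forwarder.key" -noout -modulus)
[ "$cert_mod" = "$key_mod" ] && echo "key matches certificate"
```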

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let’s complete our Logstash configuration.

Configure Logstash

Logstash configuration files are in JSON format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
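The three sections fit together like this (a minimal illustrative skeleton, not part of our setup; the stdin and stdout plugins are used here only because they are the simplest input and output):

```
input {
  stdin { }
}
filter {
  # filters transform and annotate events on their way from input to output
}
output {
  stdout { codec => rubydebug }
}
```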

Let’s create a configuration file called 01-lumberjack-input.conf and set up our “lumberjack” input (the protocol that Logstash Forwarder uses):

vi /etc/logstash/conf.d/01-lumberjack-input.conf

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

This specifies a lumberjack input that will listen on TCP port 5000 and use the SSL certificate and private key that we created earlier.

Now let’s create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:

vi /etc/logstash/conf.d/10-syslog.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

This filter looks for logs that are labeled as “syslog” type (by a Logstash Forwarder) and uses grok to parse incoming syslog logs so they are structured and queryable.
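To see roughly what the grok pattern extracts, here is a shell analogue applied to a sample syslog line (a sketch only: the sed expression approximates, rather than reproduces, the grok semantics, and the sample line is hypothetical):

```shell
# A representative syslog line (hypothetical sample, for illustration only).
line='Jun 10 12:34:56 webserver sshd[1234]: Accepted publickey for root'

# Roughly mirror the grok captures with sed back-references:
# \1 timestamp, \2 hostname, \3 program, \5 pid, \6 message.
parsed=$(echo "$line" | sed -E \
  's/^([A-Za-z]{3} +[0-9]+ [0-9:]+) ([^ ]+) ([^ :[]+)(\[([0-9]+)\])?: (.*)$/timestamp=\1 host=\2 program=\3 pid=\5 message=\6/')
echo "$parsed"
# → timestamp=Jun 10 12:34:56 host=webserver program=sshd pid=1234 message=Accepted publickey for root
```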

Lastly, we will create a configuration file called 30-lumberjack-output.conf:

vi /etc/logstash/conf.d/30-lumberjack-output.conf

Insert the following output configuration:

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

Save and exit. This output configures Logstash to store the logs in Elasticsearch (and to echo them to stdout for debugging).

With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of categorizing messages by HTTP response codes, source IP addresses, served files, etc.).

If you want to add filters for other applications that use the Logstash Forwarder input, be sure to name the files so they sort between the input and the output configuration (i.e. between 01 and 30).
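The ordering matters because Logstash concatenates every file in /etc/logstash/conf.d in lexicographic filename order. A quick self-contained demonstration of that ordering (using a temporary directory rather than the real conf.d):

```shell
# Create stand-ins for our configuration files in a temporary directory.
tmp=$(mktemp -d)
touch "$tmp/01-lumberjack-input.conf" "$tmp/30-lumberjack-output.conf" \
      "$tmp/10-syslog.conf" "$tmp/11-nginx.conf"

# ls -1 sorts lexicographically, the same order Logstash reads the files in,
# so the input loads first, filters in the middle, and the output last.
ls -1 "$tmp"
# → 01-lumberjack-input.conf
#   10-syslog.conf
#   11-nginx.conf
#   30-lumberjack-output.conf
```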

Restart Logstash to put our configuration changes into effect:

service logstash restart

Set Up Logstash Forwarder

Copy SSL Certificate and Logstash Forwarder Package

On the Logstash Server, copy the SSL certificate to the server that will be sending logs (substitute your own login and the server’s private IP):

# scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp

Install Logstash Forwarder Package

# cd ~; curl -O http://packages.elasticsearch.org/logstashforwarder/centos/logstash-forwarder-0.3.1-1.x86_64.rpm
# rpm -ivh ~/logstash-forwarder-0.3.1-1.x86_64.rpm

Install the Logstash Forwarder init script:

# cd /etc/init.d/; curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
# chmod +x logstash-forwarder

The init script depends on a file called /etc/sysconfig/logstash-forwarder, so download that as well:

# curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

# vi /etc/sysconfig/logstash-forwarder

Modify it so it contains the following:

LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"

Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):

# cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Configure Logstash Forwarder

# vi /etc/logstash-forwarder

Add the following, substituting your Logstash Server’s private IP address for logstash_server_private_IP:

{
  "network": {
    "servers": [ "logstash_server_private_IP:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}

This configures Logstash Forwarder to ship /var/log/messages and /var/log/secure to your Logstash server as type "syslog".

Add the Logstash Forwarder service with chkconfig so it starts on boot:

chkconfig --add logstash-forwarder

Start Logstash Forwarder to put our changes into place:

service logstash-forwarder start

Application: Nginx

vi /etc/logstash-forwarder

Add the following entry in the "files" section, to send the Nginx access logs as type "nginx-access" to your Logstash server:

    {
      "paths": [
        "/var/log/nginx/access.log"
      ],
      "fields": { "type": "nginx-access" }
    }

Save and exit. Reload the Logstash Forwarder configuration to put the changes into effect:

service logstash-forwarder force-reload

Logstash Patterns: Nginx

vi /opt/logstash/patterns/nginx

Then insert the following lines (the NGUSER alias is required because the NGINXACCESS pattern references it):

NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent}
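To sanity-check the pattern, it helps to see the fields against a concrete log line. A rough shell analogue that extracts a few of the same fields the grok pattern captures (the sample line and sed regex are for illustration only; grok’s actual matching is richer):

```shell
# Hypothetical access log line in Nginx's default "combined" format.
line='203.0.113.5 - - [10/Jun/2014:12:34:56 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"'

# Pull out some of the fields the grok pattern names:
# \1 clientip, \5 verb, \6 request, \8 response.
parsed=$(echo "$line" | sed -E \
  's/^([0-9.]+) ([^ ]+) ([^ ]+) \[([^]]+)\] "([A-Z]+) ([^ ]+) HTTP\/([0-9.]+)" ([0-9]+) .*/clientip=\1 verb=\5 request=\6 response=\8/')
echo "$parsed"
# → clientip=203.0.113.5 verb=GET request=/index.html response=200
```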

Logstash Filter: Nginx

vi /etc/logstash/conf.d/11-nginx.conf

filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}

Restart Logstash to load the new filter:

service logstash restart