1. Introduction

The objective is to visualize my logs (fail2ban, VPN & co) using Elasticsearch/Kibana/Logstash.

I will describe how to install and configure the Elasticsearch stack on Debian 8.

 

The goal is to end up with an architecture looking like the diagram below, which I have deployed several times:

Elasticsearch & Suricata. I won't go into detail about the "App server" part in this topic, but I will probably add more over time; Suricata is only one example of an "App".

Thanks to the very good https://www.digitalocean.com I improved some of my architecture. I will try to give you another point of view on this kind of architecture and how to deploy it.

[Image: ELK infrastructure diagram]

Here is a sample I made at home with two home servers, for syslog and fail2ban logs, without any real data analysis yet:

2. Installation

2.1 Prerequisites for Elasticsearch

I forgot this step the first time I wrote this tutorial and lost an hour ...

Check that you have Java on your server:

java -version
echo $JAVA_HOME

If you don't, please follow the instructions below to install the Oracle JDK, version 1.8.0_25 or later (recommended by elastic.co, see sources): Oracle Java

Here is a quick how-to:

  1. Download the JDK (Java Development Kit) tar.gz file from the Oracle Java download page.
  2. Find a filesystem where you have enough space (df -h) (for me it will be /var/java ... not very standard).
  3. Extract it (for me: tar -zvxf jdk-8u65-linux-x64.tar.gz -C /var/java/).
  4. Then update your system like this:
update-alternatives --install /usr/bin/java java /var/java/jdk1.8.0_65/bin/java 100
update-alternatives --install /usr/bin/javac javac /var/java/jdk1.8.0_65/bin/javac 100

You can then check everything is OK like this:

 update-alternatives --display java
 update-alternatives --display javac
 java -version
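
The manual install does not set JAVA_HOME (which we checked earlier with echo $JAVA_HOME). Here is a minimal sketch to set it system-wide, assuming the extraction path used in step 3:

# adjust the path to wherever you extracted the JDK
echo 'export JAVA_HOME=/var/java/jdk1.8.0_65' | sudo tee /etc/profile.d/java.sh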

 

2.2 Installing Elasticsearch

First of all we need to add the Elasticsearch repository. You can also download the package and install it manually, but then the configuration files won't be in the same directories and it will be harder to keep everything up to date.

So, you first need to download and install the public signing key:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | apt-key add -


Save the repository definition to /etc/apt/sources.list.d/elasticsearch-{branch}.list:

echo "deb http://packages.elastic.co/elasticsearch/1.7/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-1.7.list

Then you can launch the install:

apt-get update && apt-get install elasticsearch

Then enable it to start at boot:

systemctl daemon-reload
systemctl enable elasticsearch.service

 

2.2.1 Security

Elasticsearch is now installed. It is a RESTful server that can be managed through HTTP commands, so to prevent just anyone from controlling your Elasticsearch we will set up some rules. Let's edit the configuration:

    vim /etc/elasticsearch/elasticsearch.yml


You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:

network.host: localhost


Save and exit elasticsearch.yml.

Now start Elasticsearch:


systemctl restart elasticsearch.service
systemctl status elasticsearch.service

If, like me, you have an issue starting it, you can try to debug it using a command like:

sudo -u elasticsearch bash -x /usr/share/elasticsearch/bin/elasticsearch

It should give you some hints ... in my case I had forgotten to install Java and got no warning! You can also look into the file /usr/lib/systemd/system/elasticsearch.service.
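
Once it starts cleanly, a quick way to check that Elasticsearch answers (and only on localhost) is to query its REST API:

curl -X GET 'http://localhost:9200'
# should return a small JSON document with the node name, cluster name and version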

 

3. Kibana

Kibana can be installed with a package manager by adding Elastic's package source list, or manually if you prefer. I will show you how to do it using apt-get:

Create the Kibana source list as we did for Elasticsearch. The only downside is that it will be installed into /opt even if that is not a separate LV, but it doesn't really matter:

echo 'deb http://packages.elastic.co/kibana/4.1/debian stable main' | sudo tee /etc/apt/sources.list.d/kibana.list


Update your apt package database and install kibana:

 

  apt-get update && apt-get -y install kibana


Kibana is now installed, and you can start it with:

service kibana start
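
If you also want Kibana to start at boot, you can register its init script (assuming the 4.1 package ships a SysV script, as it does here):

sudo update-rc.d kibana defaults 96 9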


You can now test it in your browser to make sure it's working: yourserverip:5601

You should see a page like this:

 

[Image: Kibana welcome page]

 

4. Nginx reverse proxy

This part is optional: it is only needed if you want to secure access to your Kibana and manage users with Nginx, instead of Kibana Shield (an expensive, unaffordable add-on).

 

4.1 Nginx install and configuration

You can use Nginx as a reverse proxy to authenticate users using several techniques, such as a simple htpasswd file, or PAM and/or LDAP (see this article: https://blog.jocelynlagarenne.fr/jsn/index.php/integration/88-nginx-config-example-with-pam-ldap-auth).

I will describe the htpasswd approach here.

 

So first of all, open the Kibana configuration file for editing, then change the host to localhost (listen only on localhost ... Nginx will do the rest!):

    vim /opt/kibana/config/kibana.yml
    [..]
    host: "localhost"

 

then use apt to install Nginx and Apache2-utils (some useful tools)

    sudo apt-get install nginx apache2-utils


Use htpasswd to create an admin user, called "asyoulikebutrememberit" (you should use another name), that can access the Kibana web interface:

    sudo htpasswd -c /etc/nginx/htpasswd.users asyoulikebutrememberit



Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.

Now, back up the existing default Nginx server block, then open it and replace its contents with this example:

 

   cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bak
   vim /etc/nginx/sites-available/default

 

    server {
        listen 80;
 
        server_name example.com;
 
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;
 
        location / {
            proxy_pass http://localhost:5601;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;        
        }
    }

Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Nginx will also use the htpasswd.users file we created earlier and require basic authentication.

Now restart Nginx to put our changes into effect:

    sudo service nginx restart
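
Before opening a browser you can verify that basic auth is enforced; without credentials Nginx should answer 401, and with them it should proxy through to Kibana (replace the user/password with yours):

curl -I http://localhost/
# expect: HTTP/1.1 401 Unauthorized
curl -I -u asyoulikebutrememberit:yourpassword http://localhost/
# expect: HTTP/1.1 200 OK (or a redirect served by Kibana)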


Kibana is now accessible via your FQDN or the public IP address of your Logstash server, i.e. http://logstash_server_public_ip/. If you go there in a web browser and enter the "asyoulikebutrememberit" credentials, you should see a Kibana welcome page asking you to configure an index pattern. Let's get back to that later, after we install all the other components.

 

5. Logstash

The Logstash package is available from the same repository as Elasticsearch, and we already installed that public key, so let's create the Logstash source list:

echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list

 

You know the drill:

apt-get update && apt-get -y install logstash

 

Logstash is installed but it is not configured yet.

 

5.1. Generate SSL Certificates

Since we are going to use Logstash Forwarder to ship logs from our client servers to our Logstash/Elasticsearch server, we need to create an SSL certificate and key pair. The certificate is used by the Logstash Forwarder to verify the identity of the Logstash server. First, create the directories that will store the certificate and private key:
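
# these locations match the key/cert paths used in the openssl commands below
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir -p /etc/pki/tls/private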

Option 1: IP Address

If you don't have a DNS setup that would allow the servers you will gather logs from to resolve the IP address of your Logstash server, you will have to add your Logstash server's private IP address to the subjectAltName (SAN) field of the SSL certificate that we are about to generate. To do so, open the OpenSSL configuration file:

    sudo vi /etc/ssl/openssl.cnf



Find the [ v3_ca ] section in the file, and add this line under it (substituting in the Logstash Server's private IP address):

subjectAltName = IP: logstash_server_private_ip


Save and exit.

Now generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/), with the following commands:

 cd /etc/pki/tls
 sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt


The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration. If you went with this option, skip option 2 and move on to Configure Logstash.


Option 2: FQDN (DNS or hosts file!)

If you have a DNS setup with your private networking, you should create an A record that contains the Logstash Server's private IP address—this domain name will be used in the next command, to generate the SSL certificate. Alternatively, you can use a record that points to the server's public IP address. Just be sure that your servers (the ones that you will be gathering logs from) will be able to resolve the domain name to your Logstash Server.

An easier way is to use the hosts file on each of your servers that will send logs to Logstash: on each of them, edit /etc/hosts and add your Logstash IP with a name that you will then use in the command below (I didn't test this technique, but it should work).
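
For example (the IP and name here are placeholders; use the same name in the openssl command below and in the forwarder configuration):

# /etc/hosts on each client that ships logs
192.0.2.10   logstash.internal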


Now generate the SSL certificate and private key, in the appropriate locations (/etc/pki/tls/...), with the following command (substitute in the FQDN of the Logstash Server):

cd /etc/pki/tls; sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt


The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash but we will do that a little later. Let's complete our Logstash configuration.

 

5.2. Configure Logstash

Logstash configuration files use a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs. For better readability we will separate them into three different files, but you can put all of them in one if you prefer.

 

5.2.1 Input configuration

Let's create a configuration file called 01-input.conf and set up our Logstash input:

vim /etc/logstash/conf.d/01-input.conf

Insert the following input configuration: 

input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

 

Save and quit. This specifies a lumberjack input that will listen on TCP port 5043, and it will use the SSL certificate and private key that we created earlier.

Lumberjack is the protocol used by Logstash to communicate between logstash-forwarder instances (client side) and Logstash (server side). You could also put a full Logstash stack on each client, but I prefer this lighter architecture.

So our Logstash server will listen for any logstash-forwarder connecting from a client, using our certificate and key to secure the channel (the forwarders, in turn, use the certificate to verify they are talking to the right server, not a rogue one).

 

5.2.2. Filter configuration

Next we want to manage and filter our logs. Let's take a look at syslog logs and create a 10-filter-syslog.conf:

 

filter {
        if [type] == "syslog" {
                grok {
                        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
                        add_field => { "received_at" => "%{@timestamp}" "received_from" => "%{host}" }
                }
                syslog_pri { }
                date {
                        match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
                }
 
        }
}

Save and quit. This filter looks for logs that are labeled as "syslog" type (by a Logstash Forwarder) and uses grok to parse incoming syslog lines to make them structured and queryable. Logstash comes with a lot of plugins to filter logs; again, I won't go into detail here because it's a whole subject of its own, but take a look at the Logstash documentation on filter plugins.
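
To make the grok pattern above concrete, here is a made-up auth.log line and the fields the filter would extract from it:

# raw line (hypothetical example):
Nov  3 14:02:11 myhost sshd[1234]: Failed password for root from 203.0.113.5 port 4711 ssh2

# fields produced by the grok match:
syslog_timestamp = "Nov  3 14:02:11"
syslog_hostname  = "myhost"
syslog_program   = "sshd"
syslog_pid       = "1234"
syslog_message   = "Failed password for root from 203.0.113.5 port 4711 ssh2"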

With this configuration, Logstash will also accept logs that do not match the filter, but the data will not be structured (e.g. unfiltered Nginx or Apache logs would appear as flat messages instead of being categorized by HTTP response code, source IP address, served file, etc.). I will give other examples in other topics.

If you want to add filters for other applications that use the Logstash Forwarder input, be sure to name the files so they sort between the input and the output configuration (i.e. between 01- and 30-). Indeed, the input must be loaded first, then the filters, and finally the output, and Logstash loads them alphabetically.

I will soon write another article about fail2ban and OpenVPN logs and give a bit more detail on Logstash filters.

 

5.2.3. Output configuration

Lastly, we will create a configuration file called 30-output.conf:

vim /etc/logstash/conf.d/30-output.conf

Insert the following output configuration:

output {
	elasticsearch { host => localhost }
	stdout { codec => rubydebug }
}

In this output we tell Logstash to connect to the Elasticsearch cluster on localhost and feed data to it. We also use stdout for debugging purposes; that last line can be deleted later if you want.

Restart Logstash to put our configuration changes into effect:

systemctl restart logstash.service
systemctl status logstash.service
# you can probably also use 'service logstash restart' ... but on Debian 8 I am getting used to systemd ...

 

 

5.3 Logstash debugging

I strongly advise you to check that everything is in order because, in my experience, it NEVER works the first time! ;)

Check that Logstash is listening on port 5043 using this command:

#netstat -lna | grep 5043
tcp        0      0 0.0.0.0:5043            0.0.0.0:*               LISTEN

If no line is displayed, you have an issue. Either way, also look into the Logstash log files like this:

tail /var/log/logstash/logstash.*

A small tip to read logs more easily: install ccze (a log colorizing tool) like this:

apt-get install ccze

and use it like this:

tail /var/log/logstash/logstash.* | ccze

You can test your config files one by one like this:

 /opt/logstash/bin/logstash agent --config /etc/logstash/conf.d/mytestconfigfile.conf -t
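
Logstash also accepts a directory for --config (it concatenates the files in alphabetical order, as at startup), so you can test the whole pipeline at once:

 /opt/logstash/bin/logstash agent --config /etc/logstash/conf.d/ -t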

On my side, and for some reason beyond my control, my config files had issues with spacing and special characters (like {, ", =, >, etc.). Try to debug one step at a time, and if you copy/pasted, try removing and retyping those characters.

 

6. Logstash Forwarder (client shipping side)

If anyone has read this far, please leave me a comment to let me know I didn't write all of this just for myself! (even if that's part of the purpose! :P)

You will have to repeat these steps on every client you want to send logs from:

- copy the certificate we created previously onto each client, to /etc/pki/tls/certs/logstash-forwarder.crt (see the scp sketch after this list)

- add the logstashforwarder repository to your package manager:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo 'deb http://packages.elastic.co/logstashforwarder/debian stable main' | tee /etc/apt/sources.list.d/logstashforwarder.list
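
Then install the package from the repository we just added:

apt-get update && apt-get -y install logstash-forwarder

And here is the scp sketch mentioned above for copying the certificate; user and client_ip are placeholders:

# on the Logstash server
scp /etc/pki/tls/certs/logstash-forwarder.crt user@client_ip:/tmp/
# then on the client
mkdir -p /etc/pki/tls/certs
cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/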

 

Here is a simple example of my logstash-forwarder configuration for syslog, using my certificate:

cat /etc/logstash-forwarder.conf
{
  # The network section covers network configuration :)
  "network": {
    "servers": [ "myservername_WITH_THE_SAME_NAME_AS_IN_MY_CERTIFICATE:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
 
  # The list of files configurations
  "files": [
    {
        "paths": [ "/var/log/messages","/var/log/auth.log" ],
        "fields": { "type": "syslog" }
#    },
#    {
#            "paths": [ "/var/log/fail2ban.log" ],
#            "fields": { "type": "fail2ban" }
    }
  ]
}
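
After editing the configuration, restart the forwarder so it picks up the changes (assuming the package's init script):

sudo service logstash-forwarder restart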

Be careful about the server name, as stated in the logstash-forwarder GitHub notes:

IMPORTANT TLS/SSL CERTIFICATE NOTES

This program will reject SSL/TLS certificates which have a subject which does not match the servers value, for any given connection. For example, if you have "servers": [ "foobar:12345" ] then the 'foobar' server MUST use a certificate with subject or subject-alternative that includes CN=foobar. Wildcards are supported also for things like CN=*.example.com. If you use an IP address, such as "servers": [ "1.2.3.4:12345" ], your ssl certificate MUST use an IP SAN with value "1.2.3.4". If you do not, the TLS handshake will FAIL and the lumberjack connection will close due to trust problems.

 

Take a look at the previous steps if you don't understand that statement.

Now all that is left to do is to start everything: Elasticsearch, Kibana, Nginx, Logstash, then logstash-forwarder. In case of trouble, think about looking into the logs (ccze is very useful for colorizing):

#server side
tail -f /var/log/elasticsearch/*.log | ccze
tail -f /var/log/nginx/*.log | ccze
tail -f /var/log/kibana/*.log | ccze
tail -f /var/log/logstash/*.log | ccze
 
#client side (logstash-forwarder logs; the exact path depends on your init script)
tail -f /var/log/logstash-forwarder/* | ccze

Keep in mind that logstash-forwarder keeps an index of where it stopped reading each log file the last time it was stopped. If you want to reset this index, you can stop logstash-forwarder and delete the file:

 rm -rf /var/lib/logstash-forwarder/.logstash-forwarder

I will give you more tips on this kind of manipulation if needed, like deleting an index in Elasticsearch & co.
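
As a quick preview of that kind of manipulation: deleting one Logstash index in Elasticsearch is a single REST call, run on the server since Elasticsearch only listens on localhost (the index name is an example; Logstash creates one index per day):

curl -XDELETE 'http://localhost:9200/logstash-2015.11.03'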

 

7. Conclusion

Finally, everything should now be working fine and you just have to create a simple dashboard in Kibana. To do so, connect to Kibana in your browser using the IP address of your Nginx reverse proxy, type in the login/password set previously, then select the default index pattern (logstash-*) and click the Discover link in the top navigation bar. By default, this will show you all of the log data from the last 15 minutes. You should see a histogram with log events.

Kibana usage will be described in another topic, or in the comments if needed, or you can look at elastic.co; there are a lot of details about it there too.

 

Enjoy!

 

Sources

https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html

https://blog.projectnine.com/fail2ban-with-elk/

https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-4-on-ubuntu-14-04

https://www.elastic.co

https://home.regit.org/2015/04/elasticsearch-systemd-and-debian-jessie/

https://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html

https://www.digitalocean.com/community/tutorials/how-to-manually-install-oracle-java-on-a-debian-or-ubuntu-vps

 

 
