
How to install Logstash Logfile Analytics Software on Ubuntu 20.04

ELK is a combination of three open-source products: Elasticsearch, Logstash, and Kibana. It is one of the most popular log management platforms in the world. Elasticsearch is a search and analytics engine. Logstash is a log processing pipeline that ingests logs from multiple sources simultaneously, transforms them, and then sends them to a “stash” such as Elasticsearch. Kibana lets you visualize the data that Logstash has indexed into Elasticsearch.

In this tutorial we will explain how to install Logstash on Ubuntu 20.04.

Prerequisites

  • A server running Ubuntu 20.04.
  • The root password is configured on the server.

Install Required Dependencies

To run Elasticsearch, you need Java installed on your system. You can install the OpenJDK 11 package with the following command:

apt-get install openjdk-11-jdk -y

Once the installation finishes, verify the installed Java version with the following command:

java -version

You will see the following output:

openjdk 11.0.7 2020-04-14
OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
OpenJDK 64-Bit Server VM (build 11.0.7+10-post-Ubuntu-3ubuntu1, mixed mode, sharing)

Next, install the other required dependencies by running the following command:

apt-get install nginx curl gnupg2 wget -y

After all dependencies are installed, you can continue to the next step.

Install and configure Elasticsearch

Before starting, you must install Elasticsearch on your system. Elasticsearch stores the logs and events coming from Logstash and offers the ability to search them in real time.

First, add the Elastic repository to your system with the following command:

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list
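
Note that apt-key works on Ubuntu 20.04 but is deprecated on newer releases. If you prefer, you can store the key in a dedicated keyring and reference it with signed-by instead; the keyring path below is a common convention, not a requirement:

```shell
# Download the Elastic signing key into a dedicated keyring and
# reference it from the sources entry with signed-by.
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch \
  | gpg --dearmor -o /usr/share/keyrings/elastic-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic-archive-keyring.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" \
  | tee /etc/apt/sources.list.d/elastic-7.x.list
```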

Next, update the repository and install Elasticsearch with the following command:

apt-get update -y
apt-get install elasticsearch -y

Once installed, edit the Elasticsearch default configuration file:

nano /etc/elasticsearch/elasticsearch.yml

Uncomment the network.host line and change its value as shown below:

network.host: localhost

Save and close the file then start the Elasticsearch service and enable it to start at boot with the following command:

systemctl start elasticsearch
systemctl enable elasticsearch

At this point, Elasticsearch is installed and listening on port 9200. You can now test whether Elasticsearch is functioning or not by running the following command:

curl -X GET "localhost:9200"

If everything is fine, you will see the following output:

{
  "name" : "ubuntu2004",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AVRzLjAbQTK-ayYQc0GaMA",
  "version" : {
    "number" : "7.8.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "757314695644ea9a1dc2fecd26d1a43856725e65",
    "build_date" : "2020-06-14T19:35:50.234439Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
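
If you only need the version number from that response, you can extract it on the command line. This sketch assumes python3 is available, which it is by default on Ubuntu 20.04:

```shell
# Query Elasticsearch and print only the version number from the JSON reply.
curl -s -X GET "localhost:9200" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["version"]["number"])'
```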

After completion, you can continue to the next step.

Install and Configure Kibana

Next, you need to install Kibana on your system. Kibana allows you to analyze data stored in Elasticsearch. You can install it by simply running the following command:

apt-get install kibana -y

After Kibana is installed, start the Kibana service and enable it to start at boot with the following command:

systemctl start kibana
systemctl enable kibana

Next, you must create an administrative user for Kibana to access the Kibana web interface. Run the following command to create a Kibana administrative user and password, and save them in the htpasswd.users file.

echo "admin:`openssl passwd -apr1`" | tee -a /etc/nginx/htpasswd.users

You will be asked to provide a password as shown in the following output:

Password: 
Verifying - Password: 
admin:$apr1$8d05.YO1$E0Q8QjfNxxxPtD.unmDs7/

Next, create an Nginx virtual host configuration file to serve Kibana:

nano /etc/nginx/sites-available/kibana

Add the following lines:

server {
    listen 80;

    server_name kibana.example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Save and close the file, then enable the Nginx virtual host with the following command:

ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
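
Before restarting Nginx, it is worth checking the new configuration for syntax errors:

```shell
# Test the Nginx configuration, including the new kibana virtual host.
nginx -t
```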

Next, restart the Nginx service to apply changes:

systemctl restart nginx

Next, open your web browser and check the Kibana status using the URL http://kibana.example.com/status. You will be asked to provide a username and password.

Provide your Kibana username and password, then click the Enter button. The Kibana status page will be displayed.

At this point, the Kibana dashboard is installed on your system. You can now proceed to the next step.

Install and Configure Logstash

Logstash is used to process the logs sent by Beats. You can install it by running the following command:

apt-get install logstash -y

After Logstash is installed, create a new Beats input configuration file with the following command:

nano /etc/logstash/conf.d/02-beats-input.conf

Add the following lines:

input {
  beats {
    port => 5044
  }
}
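
Optionally, you can also drop a filter file between the input and output so that Logstash parses syslog lines into structured fields instead of passing them through untouched. The sketch below is illustrative: the file name /etc/logstash/conf.d/10-syslog-filter.conf and the use of the built-in SYSLOGLINE pattern are conventional choices, not requirements:

```
filter {
  # Parse standard syslog lines into structured fields; events that do
  # not match are tagged with _grokparsefailure and passed through as-is.
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
```

Files in /etc/logstash/conf.d/ are concatenated in lexical order, which is why numeric prefixes such as 02, 10, and 30 keep the input, filter, and output sections in sequence.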

Save and close the file, then create an Elasticsearch output configuration file with the following command:

nano /etc/logstash/conf.d/30-elasticsearch-output.conf

Add the following lines:

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

Save and close the file then verify your Logstash configuration with this command:

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

If everything is fine, you should see the following output:

Config Validation Result: OK. Exiting Logstash

Next, start the Logstash service and enable it to start at boot with the following command:

systemctl start logstash
systemctl enable logstash
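
You can confirm that Logstash is listening for Beats connections; ss is part of iproute2, which Ubuntu 20.04 ships by default, and the port can take a short while to open after startup:

```shell
# Show listening TCP sockets and filter for the Beats port 5044.
ss -tlnp | grep 5044
```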

At this point, Logstash is installed in your system. You can now proceed to the next step.

Install and Configure Filebeat

The ELK stack uses Filebeat to collect data from various sources and ship it to Logstash.

You can install Filebeat with the following command:

apt-get install filebeat -y

Once installed, you will need to configure Filebeat to connect to Logstash. You can configure it with the following command:

nano /etc/filebeat/filebeat.yml

Comment out the following lines:

#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

Then, uncomment the following lines:

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

Save and close the file then enable the system module with the following command:

filebeat modules enable system
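
Before going further, you can ask Filebeat to validate its configuration and to test the connection to the Logstash output configured above:

```shell
# Check filebeat.yml for syntax errors.
filebeat test config

# Verify that Filebeat can reach Logstash on localhost:5044.
filebeat test output
```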

By default, Filebeat is configured to use default paths for the syslog and authorization logs.

You can load the ingest pipeline for the system module with the following command:

filebeat setup --pipelines --modules system

Next, load the template with the following command:

filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

You should see the following output:

Index setup finished.

By default, Filebeat comes packaged with sample Kibana dashboards that allow you to visualize Filebeat data in Kibana. Before loading the dashboards, you need to temporarily disable the Logstash output and enable the Elasticsearch output. You can do so with the following command:

filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

You should see the following output:

Overwriting ILM policy is disabled. Set `setup.ilm.overwrite:true` for enabling.

Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/elastic-stack-overview/current/xpack-ml.html
Loaded machine learning job configurations
Loaded Ingest pipelines

Now, start the Filebeat service and enable it to start at boot with the following command:

systemctl start filebeat
systemctl enable filebeat
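
Once Filebeat is running, you can confirm that events are reaching Elasticsearch by counting the documents in the filebeat-* indices; a count greater than zero means the whole pipeline is working end to end:

```shell
# Count the documents Filebeat has shipped into Elasticsearch.
curl -s -X GET 'http://localhost:9200/filebeat-*/_count?pretty'
```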

Access Kibana Dashboard

At this point, all the ELK components are installed and configured. Now, open your web browser and go to the URL http://kibana.example.com. You will see the Kibana dashboard.

In the left panel, click on Discover and select the predefined filebeat-* index pattern to see your Filebeat data.

Kibana offers many more features; feel free to explore them as you wish.

Conclusion

Congratulations! You have successfully installed and configured Logstash and the rest of the ELK stack on your Ubuntu 20.04 server. You can now collect and analyze system logs from a central location. Feel free to ask me if you have questions.
