Prerequisites
This tutorial comprises hands-on demonstrations. To follow along, ensure you have the following:
- A Linux host – This example uses a Debian 11 Bullseye server with 6 GB of memory.
- Docker CE (Community Edition) and Docker Compose installed on your Linux host.
Setting up an EFK Stack Project
EFK Stack is an enterprise-ready log aggregation and log analysis framework for bare-metal and container infrastructure. But before deploying an EFK stack, you’ll first set up a project directory and create a Docker configuration for deploying the EFK Stack on your Docker host.
For this example, you’ll use Docker images with the following specs:
- Elasticsearch 7.17.0 – Capable of storing data with lightning-fast, Apache Lucene-based search capabilities,
- Kibana 7.17.0 – Data visualization software for Elasticsearch, and
- Fluentd custom image based on v1.14.1 – Open-source data collector and aggregator that supports JSON data.
To set up your EFK stack project:
1. Open a terminal and log in to your server.
2. Run the below commands to verify that both Docker and Docker Compose are installed on your system.
Checking Docker and Docker Compose Version
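The version checks above might look like the following; the exact version numbers in the output will depend on your installation:

```shell
# Print the installed Docker and Docker Compose versions
docker --version
docker-compose --version
```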
3. Run the following commands to create a new project directory (mkdir) and set it as the working directory (cd).
You can name the directory as you prefer, but in this tutorial, the directory is named efk. This directory will store all of the EFK Stack configuration files in this tutorial.
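Assuming the efk directory lives under your home directory, the commands look like this:

```shell
# Create the project directory (-p avoids an error if it already exists)
mkdir -p ~/efk
# Make it the working directory
cd ~/efk
```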
The configuration below uses the Docker Compose script v3 and defines all EFK stack containers.
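A minimal sketch of such a docker-compose.yml is shown below. The service names, published ports, and the fluentd build directory are assumptions based on this tutorial’s layout, not a definitive configuration:

```yaml
version: "3"

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node   # single-node cluster for this demo
    ports:
      - "9200:9200"

  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"

  fluentd:
    build: ./fluentd                 # custom image built from fluentd/Dockerfile
    depends_on:
      - elasticsearch
    ports:
      - "24224:24224"                # default Fluentd forward port
      - "24224:24224/udp"
```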
This configuration creates the fluentd custom image containing the elasticsearch client driver and the fluentd-plugin-elasticsearch.
Ensure to use the same version between elasticsearch and elasticsearch client driver — this tutorial uses version 7.17.0.
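A hedged sketch of the fluentd/Dockerfile follows. The base image tag and the gem pinning are assumptions chosen to match the v1.14.1 and 7.17.0 requirements above:

```dockerfile
# Base image tag is an assumption; pick a fluent/fluentd tag matching v1.14.1
FROM fluent/fluentd:v1.14.1-debian-1.0

USER root

# Install the Elasticsearch client driver pinned to the same version as the
# elasticsearch container, plus the Fluentd output plugin
RUN gem install elasticsearch -v 7.17.0 \
 && gem install fluent-plugin-elasticsearch

USER fluent
```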
Below is the starter script for the fluentd container service, which executes the basic command fluentd --config /fluentd/etc/fluentd.conf --plugin /etc/fluentd/plugins.
This configuration allows the fluentd container service to receive log messages, and forward them to the elasticsearch container service.
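A minimal fluentd.conf consistent with this setup might look like the sketch below. The forward port and the logstash_prefix/logstash_dateformat values (which would produce the fluentd-%Y%m%d indices used later in Kibana) are assumptions:

```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  logstash_prefix fluentd
  logstash_dateformat %Y%m%d
  include_tag_key true
  <buffer>
    flush_interval 5s
  </buffer>
</match>
```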
If you don’t have the tree command, install it using the following command: apt install tree -y
Deploying EFK Stack with Docker
You’ve now created all the configuration files for deploying the EFK Stack using Docker and Docker Compose. The next step is to deploy the EFK Stack with the docker-compose command from your project directory (~/efk).
1. First, run the below command to change the working directory to the efk project directory.
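2. Next, bring the stack up in detached mode. A likely form of the command (the --build flag, which forces the fluentd image to be built, is an assumption) is:

```shell
# Start all EFK services in the background, building the fluentd image
docker-compose up -d --build
```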
This command automatically downloads the Elasticsearch and Kibana Docker images, while the Fluentd image builds automatically from the Dockerfile in the fluentd directory.
Deployment may take some time, depending on the specs of the Docker host.
Below is a screenshot showing that the deployment is complete and the Kibana container service is running.
3. Run each command below to check the logs of the EFK stack build process. Run these commands whenever you get an error in the deployment process.
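Assuming the service names fluentd and kibana from the Compose configuration, the log checks might look like this:

```shell
# Inspect the build/startup logs of the fluentd and kibana services
docker-compose logs fluentd
docker-compose logs kibana
```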
And below is the log for the kibana container.
4. Now, run the below command to check all container services’ status (ps).
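Run from the project directory, the status check is:

```shell
# List the EFK containers and their current state/ports
docker-compose ps
```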
5. Additionally, run the below command to verify the elasticsearch container service. This command prints the detailed settings of the efk_elasticsearch_1 container.
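A likely form of the inspection command, using the container name given above, is:

```shell
# Dump the full configuration (network, mounts, env) of the container
docker inspect efk_elasticsearch_1
```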
6. Lastly, run the below command to access and verify the elasticsearch container by IP address (172.18.0.2). Port 9200 is the default port for the elasticsearch container.
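Using the IP address and port mentioned above, the check can be done with curl; Elasticsearch responds with a JSON document describing the cluster:

```shell
# Query the Elasticsearch HTTP API directly by container IP
curl http://172.18.0.2:9200/
```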
Configuring Kibana Index Pattern
Now that you’ve completed deploying the EFK Stack in the Docker environment, you’ll open Kibana from your web browser and set up an index pattern for log monitoring and analysis.
1. Open your favorite web browser and navigate to the server IP address followed by the Kibana service port 5601 (i.e., http://172.16.1.10:5601).
2. Next, click the Explore on my own button on the welcome page below.
3. Click the Stack Management option to set up the Kibana index pattern in the Management section.
4. On the Kibana left menu, click Index Patterns, then click the Create Index Pattern button to create a new index pattern.
5. Now, input the index pattern Name as fluentd-*, set the Timestamp field to @timestamp, and click the Create index pattern button to confirm the index pattern settings.
On the right side, you can see the available index patterns from fluentd, such as fluentd-%Y%m%d. The %Y%m%d date format is based on the fluentd configuration (fluentd.conf).
6. Lastly, click on the top left menu (ellipsis), then click the Discover menu to show the logs monitoring.
Below is a screenshot of the Kibana log monitoring and analysis dashboard. All listed logs are stored in Elasticsearch and shipped by the Fluentd log aggregator.
Running a Docker Container with Fluentd Log Driver
After configuring the Kibana index pattern, you’ll run a Docker container with the Fluentd log driver, which automatically sends logs to the EFK stack.
1. Run the below command to download the NGINX image. The alpine version is smaller than standard images based on Ubuntu, CentOS, or Fedora.
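The pull command is as follows:

```shell
# Download the lightweight Alpine-based NGINX image
docker pull nginx:alpine
```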
2. Next, run the below command to start a new NGINX container (nginx_container) in detached mode (-d).
The command also sets the log driver to Fluentd (--log-driver=fluentd) and exposes port 8080 on the Docker host machine for the container (nginx_container).
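Putting those flags together, the command likely looks like the sketch below. The --log-opt fluentd-address option is an assumption: it points the log driver at the forward port published by the fluentd service (24224 here), and must match your setup:

```shell
# Start NGINX detached, ship container logs to Fluentd, map host port 8080
docker run -d --name nginx_container \
  --log-driver=fluentd \
  --log-opt fluentd-address=127.0.0.1:24224 \
  -p 8080:80 \
  nginx:alpine
```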
3. After running the container, run the docker command below to check all running containers.
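The check is:

```shell
# List running containers; nginx_container should appear with port 8080 mapped
docker ps
```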
4. Now, run the below command to access the nginx_container and generate access logs.
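One way to generate an access log entry from inside the container is sketched below; it assumes busybox wget, which ships with Alpine-based images:

```shell
# Request the default page from inside the container to produce an access log
docker exec nginx_container wget -qO- http://localhost
```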
Alternatively, open a new tab on your web browser and type the server IP address followed by port 8080 (i.e., http://172.16.1.10:8080).
If all goes well, you’ll see the default index.html page from the nginx_container.
5. Lastly, switch back to the Kibana dashboard, and click the Discover menu on the left side.
Enter the container_name : nginx_container query in the KQL (Kibana Query Language) field, and you’ll see logs from the nginx_container, as shown below.
Conclusion
Throughout this tutorial, you’ve learned how to deploy the EFK Stack (Elasticsearch, Fluentd, and Kibana) for log monitoring and analysis using Docker. You’ve also learned how to set up logging for a Docker container using the Fluentd log driver. At this point, you have fully functional log monitoring for your applications and services.
For the next stage, you may be interested in using KQL (Kibana Query Language) to visualize log monitoring and analysis.