This article explains how to use ELK to build a log center for Docker containerized applications. It is a detailed, hands-on walkthrough that readers interested in container logging should find a useful reference.
Overview
Once an application is containerized, the next question is how to collect the logs it prints inside its Docker container for operations and maintenance analysis. A typical example is log collection for a Spring Boot application. This article explains how to use an ELK log center to collect the logs generated by containerized applications and to query and analyze them visually, as shown in the figure below:
Architecture diagram
Image preparation
ElasticSearch image
Logstash image
Kibana image
Nginx image (as the containerized application that produces logs)
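If the images are not yet available locally, they can be pulled first (a minimal sketch, not in the original article; the unversioned names resolve to the :latest tag, which matches the older official images this article assumes, while newer Elastic images require an explicit version tag):
docker pull elasticsearch
docker pull logstash
docker pull kibana
docker pull nginx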
Enable the Rsyslog service on the Linux host
Modify the Rsyslog configuration file:
vim /etc/rsyslog.conf
Enable the following three parameters:
$ModLoad imtcp
$InputTCPServerRun 514
*.* @@localhost:4560
The intent is simple: have Rsyslog load the imtcp module and listen on TCP port 514, then forward everything it collects to local port 4560, where Logstash will listen!
Then restart the Rsyslog service:
systemctl restart rsyslog
Check that rsyslog is now listening:
netstat -tnl
Deploy the ElasticSearch service
docker run -d -p 9200:9200 \
  -v ~/elasticsearch/data:/usr/share/elasticsearch/data \
  --name elasticsearch elasticsearch
Figure: ElasticSearch starts up successfully
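A quick way to verify that ElasticSearch is up (not shown in the original article) is to query its root endpoint, which returns cluster information as JSON:
curl http://localhost:9200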
Deploy the Logstash service
Create the ~/logstash/logstash.conf configuration file with the following contents:
input {
  syslog {
    type => "rsyslog"
    port => 4560
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
This configuration tells Logstash to listen on port 4560 for the application log data forwarded by the local Rsyslog service, and to ship it on to ElasticSearch!
After the configuration is complete, you can start the Logstash container with the following command:
docker run -d -p 4560:4560 \
  --link elasticsearch:elasticsearch \
  -v ~/logstash/logstash.conf:/etc/logstash.conf \
  --name logstash logstash \
  logstash -f /etc/logstash.conf
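To confirm that Logstash started and loaded the configuration, you can tail the container's output (a quick check, not in the original article):
docker logs -f logstash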
Deploy the Kibana service
docker run -d -p 5601:5601 \
  --link elasticsearch:elasticsearch \
  -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
  --name kibana kibana
Start the nginx container to produce logs
docker run -d -p 90:80 --log-driver syslog \
  --log-opt syslog-address=tcp://localhost:514 \
  --log-opt tag="nginx" \
  --name nginx nginx
In other words, the Nginx application log from the Docker container is sent by Docker's syslog log driver to the local Rsyslog service, which then relays the data to Logstash for collection.
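End to end, the log path described above looks like this (a plain-text sketch):
Nginx container -> Docker syslog log driver -> Rsyslog (tcp/514) -> Logstash (tcp/4560) -> ElasticSearch (9200) -> Kibana (5601)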
At this point, the log center has been built, and a total of four containers are working together: elasticsearch, logstash, kibana, and nginx.
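You can list the running containers and their ports with docker ps (not part of the original article; the names come from the --name flags used above):
docker ps --format "table {{.Names}}\t{{.Ports}}"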
Experimental verification
Open localhost:90 in a browser to bring up the Nginx welcome page, and refresh it several times so that Nginx generates access logs for the GET requests in the background.
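Requests can also be generated from the command line instead of the browser (a small sketch; assumes curl is installed on the host):
# send 10 GET requests so Nginx emits access log entries
for i in $(seq 1 10); do curl -s http://localhost:90 > /dev/null; done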
Open the Kibana visual interface: localhost:5601
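Before querying in Kibana, you can confirm that Logstash has started writing indices into ElasticSearch (a quick check, not in the original article; by default the Logstash elasticsearch output creates daily indices named logstash-YYYY.MM.DD):
curl "http://localhost:9200/_cat/indices?v"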
Collect Nginx application logs
Query application logs
Enter program=nginx in the query box to filter for the Nginx application's logs.
That is the whole of "How to Use ELK to Build a Docker Containerized Application Log Center". Thank you for reading! We hope the content has been helpful; for more related knowledge, follow the industry information channel!