How to deploy an ELK log analysis system based on Docker containers

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains in detail how to deploy an ELK log analysis system based on Docker containers. I find it very practical, so I share it here for reference; I hope you will gain something from reading it.

Deploying an ELK log analysis system is demanding on hardware. If you deploy it in a virtual machine for testing, allocate generous resources; otherwise the elk container will not run properly. Here I allocate 5 GB of memory and 4 CPUs to the docker host.
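As a quick sanity check before deploying, the resource guideline above can be expressed as a small shell helper. This is only a sketch: the `check_resources` function name and the 5 GB / 4 CPU thresholds mirror the allocation suggested here, not a hard requirement of ELK.

```shell
# check_resources MEM_GB CPUS -- compare the host's resources against the
# suggested minimums for the elk container (5 GB RAM, 4 CPUs).
check_resources() {
  mem_gb=$1
  cpus=$2
  if [ "$mem_gb" -ge 5 ] && [ "$cpus" -ge 4 ]; then
    echo "ok"
  else
    echo "insufficient"
  fi
}

# On a live host you would read the real values from the system, e.g.:
# check_resources "$(free -g | awk '/^Mem:/{print $2}')" "$(nproc)"
```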

I. Environment preparation

I use a single docker host here (if you still need to deploy the docker service itself, refer to the blog post "Detailed installation and configuration of Docker"); its IP address is 192.168.20.6, and the elk container runs on it.

2. Configure the docker host and run the elk container

[root@docker01 ~]# echo "vm.max_map_count = 655360" >> /etc/sysctl.conf    # increase the virtual memory map limit
[root@docker01 ~]# sysctl -p    # reload kernel parameters
vm.max_map_count = 655360
# if the container still fails to run normally, increase this value further
[root@docker01 ~]# docker pull sebp/elk    # the elk image is over 2 GB, so it is best to pull it locally first
[root@docker01 ~]# docker run -itd -p 5601:5601 -p 9200:9200 -p 5044:5044 -e ES_HEAP_SIZE="3G" -e LS_HEAP_SIZE="1g" --name elk sebp/elk    # run the elk container
# -e ES_HEAP_SIZE="3G": limits the amount of memory used by elasticsearch
# -e LS_HEAP_SIZE="1g": limits the amount of memory used by logstash

At this point, you can access port 5601 of the docker host through a browser and reach the interface below (all the following operations are marked in the figures):
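The elk container can take a few minutes to come up, so rather than refreshing the browser, you can poll the port from the shell first. This is a minimal sketch under assumptions: the `wait_for` helper and its retry count are illustrative, not part of the sebp/elk image.

```shell
# wait_for URL TRIES -- poll an endpoint with curl until it answers,
# giving up after TRIES attempts (default 30, 2 seconds apart).
wait_for() {
  url=$1
  tries=${2:-30}
  i=0
  while ! curl -s -o /dev/null "$url"; do
    i=$((i+1))
    if [ "$i" -ge "$tries" ]; then
      return 1
    fi
    sleep 2
  done
  return 0
}

# Example (assumes the elk container from above on 192.168.20.6):
# wait_for http://192.168.20.6:5601 && echo "kibana is up"
```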

When you see the following page (be sure to select the RPM tab), execute the commands it prompts on our docker host, as follows:

[root@docker01 ~]# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.4.0-x86_64.rpm    # download the rpm package
[root@docker01 ~]# rpm -ivh filebeat-7.4.0-x86_64.rpm    # install the downloaded rpm package
[root@docker01 ~]# vim /etc/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:                # modify the settings under filebeat.inputs
  enabled: true                 # change to true in order to enable filebeat
  paths:                        # in the paths section, add the log paths to collect
    - /var/log/messages                    # path of the system log
    - /var/lib/docker/containers/*/*.log   # path of the logs of all containers
#============================== Kibana ===================================
  host: "192.168.20.6:5601"     # uncomment this line and fill in kibana's listening address and port
#-------------------------- Elasticsearch output --------------------------
  hosts: ["192.168.20.6:9200"]  # change to Elasticsearch's listening address and port
# after making the above changes, save and exit
[root@docker01 ~]# filebeat modules enable elasticsearch    # enable the elasticsearch module
[root@docker01 ~]# filebeat setup    # initialize filebeat; this can take a while
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Loaded machine learning job configurations
Loaded Ingest pipelines
# the messages above indicate that initialization succeeded
[root@docker01 ~]# service filebeat start    # start filebeat

After performing the above operations, you can click "Discover" below to view the logs, as follows:

If no logs from the last 15 minutes appear even though the docker host's clock is synchronized, execute the following commands to restart the elk container, then check again.

[root@docker01 ~]# systemctl daemon-reload    # reload the systemd configuration
[root@docker01 ~]# docker restart elk         # restart the elk container

Once the page above can be accessed normally, we run a container that outputs a line of text every 10 seconds, and then check whether kibana collects the log messages from that container, as follows:

[root@docker01 ~]# docker run -d busybox sh -c 'while true; do echo "this is a log message from container busybox!"; sleep 10; done'    # run a container that prints a line every ten seconds
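The busybox container's output lands under /var/lib/docker/containers/&lt;id&gt;/ as JSON records, which is exactly the path the filebeat configuration above watches. The sketch below shows how one such record could be picked apart in the shell; `extract_log` is a hypothetical helper for illustration, and real parsing should prefer jq.

```shell
# extract_log LINE -- pull the "log" message field out of one docker
# json-file logging record. A minimal sed sketch; it assumes the message
# itself contains no double quotes.
extract_log() {
  printf '%s\n' "$1" | sed -n 's/.*"log":"\([^"]*\)".*/\1/p'
}

# Example record of the shape docker writes for the busybox container:
# extract_log '{"log":"this is a log message from container busybox!\n","stream":"stdout","time":"2019-10-01T00:00:00Z"}'
```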

Then follow the figures below in order, as follows:

If you can see the corresponding log information, the elk container is working properly.

This concludes "How to deploy an ELK log analysis system based on Docker containers". I hope the content above is helpful to you and that you have learned something from it. If you think the article is good, please share it so more people can see it.
