This article explains how to use Fluentd as a Docker log driver to collect container logs. The content is simple and clear, and it is easy to learn and follow along.
Preface
Docker's default log driver is json-file: each container writes its logs locally to /var/lib/docker/containers/<containerID>/<containerID>-json.log. The log driver is pluggable, and this article focuses on using the Fluentd driver to collect Docker logs.
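You can confirm this on any Docker host with a couple of commands; a minimal sketch, where the my-app container name is hypothetical:

docker info --format '{{.LoggingDriver}}'        # prints the engine-wide default logging driver (json-file unless changed)
docker inspect --format '{{.LogPath}}' my-app    # prints that container's json-file path under /var/lib/docker/containers/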
Fluentd is an open source data collector for a unified logging layer. It is the sixth project to graduate from the CNCF, after Kubernetes, Prometheus, Envoy, CoreDNS, and containerd. It is often compared with Elastic's Logstash; relatively speaking, Fluentd is lighter and more flexible, it is evolving quickly, and its community is very active. At the time of writing, its GitHub repository has about 8.8k stars and 1k forks.
Prerequisites
Docker
A basic understanding of Fluentd configuration
docker-compose
Prepare the configuration files
docker-compose.yml

version: '3.7'
x-logging: &default-logging
  driver: fluentd
  options:
    fluentd-address: localhost:24224
    fluentd-async-connect: 'true'
    mode: non-blocking
    max-buffer-size: 4m
    # tag the container logs with kafeidou. as the prefix and the container name as the suffix;
    # docker-compose appends a replica suffix to the container name, such as fluentd_1
    tag: "kafeidou.{{.Name}}"
services:
  fluentd:
    image: fluent/fluentd:v1.3.2
    ports:
      - "24224:24224"
    volumes:
      - ./:/fluentd/etc
      - /var/log/fluentd:/var/log/fluentd
    environment:
      - FLUENTD_CONF=fluentd.conf
  fluentd-worker:
    image: fluent/fluentd:v1.3.2
    depends_on:
      - fluentd
    logging: *default-logging
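The &default-logging anchor defined above can be attached to any application service in the same file through the *default-logging alias, exactly as the fluentd-worker service does. A minimal sketch, where the my-app service and its image are hypothetical:

  my-app:
    image: my-app:latest
    depends_on:
      - fluentd
    logging: *default-logging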
fluentd.conf

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match kafeidou.**>
  @type file
  path /var/log/fluentd/kafeidou/${tag[1]}
  append true
  <format>
    @type single_value
    message_key log
  </format>
  <buffer tag,time>
    @type file
    timekey 1d
    timekey_wait 10m
    flush_mode interval
    flush_interval 5s
  </buffer>
</match>

<match **>
  @type file
  path /var/log/fluentd/${tag}
  append true
  <format>
    @type single_value
    message_key log
  </format>
  <buffer tag,time>
    @type file
    timekey 1d
    timekey_wait 10m
    flush_mode interval
    flush_interval 5s
  </buffer>
</match>
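To check the pipeline end to end, you can also run a throwaway container with the fluentd driver by hand; a minimal sketch, where the alpine image and the kafeidou.test tag are only examples:

docker run --rm \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="kafeidou.test" \
  alpine echo "hello fluentd"

After the next flush (flush_interval is 5s above), the line should appear in a file under /var/log/fluentd/kafeidou/.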
Because Fluentd needs write permission to the configured directory, you need to prepare the directory that will store the logs and grant it the right permissions.
Create the directory:
mkdir /var/log/fluentd
Grant permissions; for this demonstration we simply grant 777:
chmod -R 777 /var/log/fluentd
In the directory containing docker-compose.yml and fluentd.conf, run:
docker-compose up -d
[root@master log]# docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use `docker stack deploy`.

Creating network "log_default" with the default driver
Creating fluentd        ... done
Creating fluentd-worker ... done
Check the log directory; you should see the corresponding container log file:
[root@master log]# ls /var/log/fluentd/kafeidou/
fluentd-worker.20200215.log  ${tag[1]}
This is the result of my last experiment: a ${tag[1]} directory is also created, which is strange, and it contains two extra files.
[root@master log]# ls /var/log/fluentd/kafeidou/\$\{tag\[1\]\}/
buffer.b59ea0804f0c1f8b6206cf76aacf52fb0.log  buffer.b59ea0804f0c1f8b6206cf76aacf52fb0.log.meta
If you understand what is happening here, you are welcome to get in touch and share!
Architecture summary: why not use Docker's native logging?
Let's first take a look at how the original docker log is structured:
Docker generates a log file for each container at the local path /var/lib/docker/containers/<containerID>/<containerID>-json.log to store that container's logs.
There are seven containers in total in the diagram above. Treated as seven microservices, viewing their logs is very inconvenient: in the worst case you have to check each container's log across three machines.
What's the difference after using fluentd?
After collecting Docker logs with Fluentd, the logs of all containers can be aggregated in one place. Here is the architecture after applying the Fluentd configuration from this article:
Since Fluentd is configured to store logs in a local directory on the Fluentd machine, the effect is that the container logs of the other machines are collected into that local directory.
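Concretely, the only thing that changes on the other machines is the fluentd-address option, which points at the Fluentd machine instead of localhost. A minimal sketch of a per-service logging block, assuming the Fluentd machine is reachable at 192.168.1.10 (a hypothetical address):

    logging:
      driver: fluentd
      options:
        fluentd-address: 192.168.1.10:24224
        fluentd-async-connect: 'true'
        tag: "kafeidou.{{.Name}}"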
Can fluentd only collect container logs locally?
Fluentd can in fact forward the collected logs onward again, for example to a storage back end such as Elasticsearch:
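Replacing the file output with the Elasticsearch output plugin would look roughly like this; a minimal sketch, assuming the fluent-plugin-elasticsearch plugin is installed in the image (it is not bundled with the base fluent/fluentd image) and an Elasticsearch node at the hypothetical address es.example.com:9200:

<match kafeidou.**>
  @type elasticsearch
  host es.example.com
  port 9200
  logstash_format true
  <buffer>
    flush_interval 5s
  </buffer>
</match>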
Fluentd flexibility
Fluentd can do a lot more. It can act both as a forwarding node and as a receiving node; it can also filter out specific logs, reformat them, and re-forward logs that match particular rules. Collecting Docker container logs, as shown here, is just one simple use of it.
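As a small illustration of that flexibility, Fluentd's built-in grep filter can keep only the log lines you care about before they reach the outputs above. A minimal sketch, added to fluentd.conf above the match sections; the ERROR pattern is just an example:

<filter kafeidou.**>
  @type grep
  <regexp>
    key log
    pattern /ERROR/
  </regexp>
</filter>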
Thank you for reading. That covers how to use Fluentd as a Docker log driver to collect logs. After working through this article you should have a deeper understanding of the topic; the details still need to be verified in practice.