This article focuses on how to use the open source tool fluentd-pilot to collect container logs. The method described here is simple, fast, and practical, so interested readers may want to follow along.
Introduction to fluentd-pilot
Fluentd-pilot is an open source Docker log collection tool from Alibaba (see its GitHub project). You deploy one fluentd-pilot instance on each machine, and it collects the logs of all Docker applications on that machine.
Fluentd-pilot has the following characteristics:
A single fluentd process collects the logs of all containers on the machine; there is no need to start a fluentd process for each container.
Both file logs and stdout are supported. Docker log drivers and logspout can only handle stdout, while fluentd-pilot can collect stdout logs as well as file logs.
Declarative configuration. When your container has logs to collect, you only declare the path of the log files through a label; fluentd-pilot then automatically collects the logs of new containers, with no other configuration changes.
Multiple log storage backends are supported. Whether it is Alibaba Cloud Log Service, the popular elasticsearch stack, or even graylog, fluentd-pilot can deliver the logs to the right place.
Rancher uses fluentd-pilot to collect logs
Since we are going to use fluentd-pilot, we have to start it first. A log system also needs a central service to collect and store the logs, so that has to be prepared before anything else. What should we do in Rancher? First, select Elasticsearch and Kibana from Rancher's app catalog. The exact versions are not critical; Elasticsearch 3.x and Kibana 4 are used below.
Next, deploy a fluentd-pilot container on each Rancher agent host and start it. We declare each container's log information through labels, and fluentd-pilot automatically senses the configuration of all containers: every time a container is started or deleted, it sees the event. When it sees that a new container has been created, it generates a corresponding configuration for that container according to your labels and starts collecting. The collected logs are then sent to the backend storage according to the configuration, mainly systems such as elasticsearch or SLS, on top of which you can use the usual tools to query them.
Depending on the actual situation, a host label can be defined on each agent, and one pilot container can be run on every Rancher agent host through that host label. What is deployed is a standard Docker image that supports several storage backends internally; you specify where the logs should go through environment variables. The configuration below sends all collected logs to elasticsearch. Two volume mounts are also needed: the Docker socket, so that pilot can sense the changes of all containers in Docker, and the host filesystem, so that it can access the log files on the host. In the Rancher UI, go to application -> add an application, and put the following content into the optional docker-compose.yml.
version: '2'
services:
  pilot:
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/fluentd-pilot:0.1
    environment:
      ELASTICSEARCH_HOST: elasticsearch
      ELASTICSEARCH_PORT: '9200'
      FLUENTD_OUTPUT: elasticsearch
    external_links:
      - es-cluster/es-master:elasticsearch
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /:/host
    labels:
      aliyun.global: 'true'
After configuration, start your own application (example: tomcat) and declare on it which logs should be collected. There are two key labels. One is catalina, which declares what form the container's logs take (standard output, files, and so on); any name is fine. The second, access, is also just a name of your choice, and its value is the path of the log files. When the fluentd-pilot container is running with this configuration, it senses the start event of such a container, inspects its labels, and collects the file logs under the declared path according to the central configuration it generates. One more volume is needed for the logs directory: there is in fact no general way to read files inside a container from the outside, so we mount the directory from the host, which makes everything under it visible on the host.
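As a sketch, an application declared this way might look like the following in docker-compose.yml. The two labels are the ones described in the Label description below; the image tag and the host path of the logs volume are illustrative assumptions.

version: '2'
services:
  tomcat1:
    image: tomcat:8.0    # assumed image; any application image works
    labels:
      # stdout log, declared under the name "catalina"
      aliyun.logs.catalina: stdout
      # file log: the tomcat access logs under the declared path
      aliyun.logs.tomcat1-access: /opt/apache-tomcat-8.0.14/logs/localhost_access_log.*.txt
    volumes:
      # assumed host path: the logs directory is mounted from the host so pilot can read the files
      - /data/tomcat1/logs:/opt/apache-tomcat-8.0.14/logs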
When you deploy, pilot creates its own indices in elasticsearch: on elasticsearch's kopf you can see that two indices are generated, both automatically and without any extra configuration. The only thing you need to do is create an index pattern for the logs on kibana. Then, in the log search interface, you can see where each log came from and what its content is; the information shows up very quickly.
Label description
When we start tomcat, we declare the following two labels, telling fluentd-pilot the log locations of the container.
aliyun.logs.tomcat1-access=/opt/apache-tomcat-8.0.14/logs/localhost_access_log.*.txt
aliyun.logs.catalina=stdout
You can also add more labels of this form to the application container:
aliyun.logs.$name=$path
The variable name is the log name; you can choose it freely, but it may only contain 0-9, a-z, A-Z and -.
The variable path is the log path to be collected and must point to files, not just a directory; wildcards can be used in the file name part. /var/log/he.log and /var/log/*.log are both correct values, but /var/log, a bare directory, is not. stdout is a special value that represents standard output.
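For instance (the log names app and shell here are made-up examples):

aliyun.logs.app=/var/log/*.log    # valid: directory plus file-name wildcard
aliyun.logs.shell=stdout          # valid: the container's standard output
aliyun.logs.bad=/var/log          # invalid: a bare directory is not allowed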
aliyun.logs.$name.format: the log format. Currently supported values (a usage sketch follows this list):
none: unformatted plain text
json: JSON format, one complete JSON string per line
csv: csv format
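For example, if tomcat's access log were written as one JSON object per line, the declaration could look like this (whether your application actually emits JSON is an assumption to verify):

aliyun.logs.tomcat1-access=/opt/apache-tomcat-8.0.14/logs/localhost_access_log.*.txt
aliyun.logs.tomcat1-access.format=json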
aliyun.logs.$name.tags: extra fields added when reporting logs, in the comma-separated format k1=v1,k2=v2, for example:
aliyun.logs.access.tags="name=hello,stage=test": the name and stage fields will then appear in the logs reported to storage.
If elasticsearch is used as the log storage, the target tag has a special meaning: it indicates the corresponding index in elasticsearch.
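So, as a sketch, the access log could be routed to a dedicated index like this (the index name access-log is an assumption):

aliyun.logs.access.tags="target=access-log"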
At this point, I believe you have a deeper understanding of how to use the open source tool fluentd-pilot to collect container logs. You might as well try it out in practice.