Shulou (Shulou.com), SLTechnology News&Howtos > Servers. 05/31 report; updated 2025-02-24.
Many newcomers are unclear about how log collection works in a Kubernetes (K8s) cluster. To help with that, this article explains it in detail; anyone who needs it can read along, and I hope you get something out of it.
The article first introduces the log collection methods provided by K8s, then presents the Fluentd log collector and compares it with other products, and finally describes how to extend a K8s cluster to forward logs as messages to a unified log processing center using ZeroMQ.
Forms of container logs
Currently, container logs can be output in two ways:
Standard output (stdout/stderr)
For logs in this form, we can view them directly with docker logs; in a K8s cluster, kubectl logs works in a similar way.
Log files
Logs written to files cannot be viewed with the commands above; they can only be read by tailing the log files.
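Conceptually, tailing a log file means remembering how far you have read and picking up only what was appended since. A minimal Python sketch of that idea (the function name is mine, for illustration; it is not from fluentd or any library):

```python
def read_new_lines(path, offset):
    """Read lines appended to `path` since byte `offset`.

    Returns (new_lines, new_offset). Calling this repeatedly with the
    returned offset is the essence of what a tail-style collector does.
    """
    with open(path, "r") as f:
        f.seek(offset)                     # skip what was already collected
        lines = f.readlines()
        return [line.rstrip("\n") for line in lines], f.tell()
```

A real collector additionally handles file rotation and persists the offset (fluentd's tail plugin keeps it in a pos_file), but the read loop is the same shape.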
For collecting and analyzing these two forms of logs, the official K8s documentation recommends the following two approaches. For the first form:
When a cluster is created, the standard output and standard error output of each container can be ingested using a Fluentd agent running on each node into either Google Cloud Logging or into Elasticsearch and viewed with Kibana.
When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called fluentd-cloud-logging on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.
In short: when the cluster starts, a Fluentd agent is launched on each node to collect logs and send them to Elasticsearch.
Concretely, each agent mounts the directory /var/lib/docker/containers, scans each container's log files using fluentd's tail plugin, and sends them directly to Elasticsearch.
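A minimal fluentd configuration sketch for this node-agent setup might look like the following; the paths, pos_file location, and Elasticsearch host are assumptions for illustration, not values taken from the official manifests:

```
<source>
  @type tail
  # Docker writes one json log file per container under this directory
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc   # assumed in-cluster ES service name
  port 9200
  logstash_format true
</match>
```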
For the second form:
A second container, using the gcr.io/google_containers/fluentd-sidecar-es:1.2 image to send the logs to Elasticsearch. We recommend attaching resource constraints of 100m CPU and 200Mi memory to this container, as in the example.
A volume for the two containers to share. The emptyDir volume type is a good choice for this because we only want the volume to exist for the lifetime of the pod.
Mount paths for the volume in each container. In your primary container, this should be the path that the application's log files are written to. In the secondary container, this can be just about anything, so we put it under /mnt/log to keep it out of the way of the rest of the filesystem.
The FILES_TO_COLLECT environment variable in the sidecar container, telling it which files to collect logs from. These paths should always be in the mounted volume.
This is essentially the same as the first approach, except that the Fluentd agent runs as a sidecar in the same pod as the business container, collects the log files there, and sends them to Elasticsearch.
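The sidecar setup described above can be sketched as a pod spec. The image names and paths here are placeholders; the resource limits and the emptyDir shared volume follow the quoted recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest            # hypothetical business container image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app       # where the application writes its log files
  - name: log-collector
    image: fluent/fluentd:latest    # placeholder for the fluentd sidecar image
    resources:
      limits:
        cpu: 100m                   # limits recommended in the quoted docs
        memory: 200Mi
    volumeMounts:
    - name: logs
      mountPath: /mnt/log           # sidecar reads the same files from here
  volumes:
  - name: logs
    emptyDir: {}                    # lives only as long as the pod, as recommended
```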
Fluentd analysis
The official definition of fluentd is:
Unified logging layer
Fluentd decouples data sources from backend systems by providing a unified logging layer between them. This layer lets developers and data analysts work with many types of logs as they are generated, and it allows you and your organization to make better use of data and iterate faster on your software. In other words, fluentd is a log collector with multiple data sources and multiple data outputs, and it also has built-in log forwarding.
Characteristics of fluentd
Deployment is simple and flexible
Open source
Proven reliability and performance
Community support and many plug-ins
JSON-formatted events
Pluggable architecture design
Low resource requirements
Built-in high reliability
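One of the characteristics above deserves a closer look: fluentd represents each log event as a tag (used to route the event), a timestamp, and a JSON record. A small Python sketch of that shape; the helper function is illustrative, not fluentd's actual API:

```python
import json
import time

def make_event(tag, record):
    """Build a fluentd-style event: a routing tag, a unix timestamp,
    and an arbitrary JSON-serializable record."""
    return {"tag": tag, "time": int(time.time()), "record": record}

# An access-log line expressed as a structured JSON event.
event = make_event("app.access", {"method": "GET", "path": "/healthz", "code": 200})
payload = json.dumps(event)  # this is what would travel between collector stages
```

Because every event is plain JSON, any downstream system (Elasticsearch, a message queue, a file) can consume it without a custom parser.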
Fluentd and Logstash
The original article compared the two log collection tools with a chart. For a comparison of the two projects, see: Fluentd vs. Logstash: A Comparison of Log Collectors.
Fluentd and zeroMQ
It is not entirely fair to put these two products side by side, because they belong to different camps and serve different functional requirements. We mention zeroMQ here because fluentd also has a message-forwarding function, so it is worth explaining how fluentd relates to message middleware such as zeroMQ: in large system architectures, zeroMQ is widely used for log forwarding. The fluentd ecosystem has two projects dedicated to log transit and routing: fluentd-forwarder, written in Go, and fluent-bit, written in C.
So do you need to choose between them? Not really. Looking at the definitions of fluentd and zeroMQ, they are actually complementary. If you have a large architecture with a huge log volume, I recommend using fluentd for log collection and zeroMQ as fluentd's output: fluentd does the unified collection, and zeroMQ does the log transfer. If your system is not that large, you do not need zeroMQ for transport.
So there is no need to dwell on performance comparisons between the two products, although both are advertised as high-performance.
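The division of labor suggested above (fluentd collects, zeroMQ transfers) can be illustrated with pyzmq's PUSH/PULL pattern, which is commonly used to pipeline messages toward a processing center. This is a self-contained sketch using an in-process transport; in a real deployment the endpoint would be tcp:// and the two ends would run in different processes:

```python
import zmq  # pyzmq; assumed to be installed

# One shared context is required for inproc:// transports.
ctx = zmq.Context.instance()

# The log-processing-center end: a PULL socket bound to an endpoint.
sink = ctx.socket(zmq.PULL)
sink.bind("inproc://logs")       # in production: tcp://log-center:5558 (hypothetical)

# The fluentd-exit end: a PUSH socket connected to the same endpoint.
source = ctx.socket(zmq.PUSH)
source.connect("inproc://logs")

# Forward one collected event as a JSON message.
source.send_json({"tag": "app.access", "record": {"msg": "request served"}})
event = sink.recv_json()
```

PUSH/PULL load-balances messages across connected pullers, which is why it suits fan-in of logs from many collectors to a central processor.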
For zeroMQ performance test results, see: a performance comparison between zeroMQ and JeroMQ.
Container log collection summary
As described above, whatever form your business container's logs take, you can use a unified log collector to gather them. Recommended log collection approaches for three scenarios:
K8s cluster
For this case, the official solutions described above apply; you only need to deploy one of them.
Docker swarm cluster
Docker Swarm currently does not provide a log viewing mechanism, although Docker Cloud offers a kubectl logs-like way to view stdout logs. There is currently no fluentd plug-in that collects logs directly from a service, so for now, collect logs the same way as for standalone containers; note that docker service create supports the --log-driver option.
Self-deployed docker container
Docker has had a built-in fluentd log driver since version 1.8. Start a container as shown below, and its stdout/stderr logs will be sent to the configured fluentd; once configured this way, docker logs is no longer available. Also note that in the default mode, if the configured address is not reachable, the container will not start. You can use the fluentd-async-connect option instead, so that the docker daemon keeps trying to connect in the background and caches logs in the meantime.
docker run --log-driver=fluentd --log-opt fluentd-address=myhost.local:24224
Similarly, for log files, expose the files (for example via a mounted volume) and collect them directly with fluentd.
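On the receiving side of the fluentd log driver, a minimal fluentd configuration sketch listening on the default forward port might look like this; the stdout output is only for demonstration and would be replaced with a real destination:

```
<source>
  @type forward      # accepts events from the docker fluentd log driver
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type stdout       # print received events; swap for elasticsearch, file, etc.
</match>
```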