What you need to know about Kubernetes logging
This article walks through how logging works in Kubernetes and the common approaches to collecting logs at the cluster level. I hope you find it useful.
About logs
# 1 Many kinds of logs
Applications produce logs for troubleshooting, and so does the cluster itself. The container runtime also has a logging mechanism, and containerized applications are expected to write their logs to standard output and standard error.
# 2 The problems with these mechanisms
However, the logging capability provided by the container engine alone is far from enough: containers crash, Pods get evicted, nodes die, and in all of these cases we still want to be able to read the logs. Logs therefore need to be stored independently, with a lifecycle that is decoupled from nodes, Pods, and containers.
This concept is called "cluster-level logging".
# 3 Cluster-level logging
"Cluster logs" need to be stored separately, but Kubernetes does not provide back-end storage for logs, so we need to integrate them ourselves. This article combines the official "Logging Architecture" documents to sort out the contents related to the log.
Logs written to standard output
Use kubectl logs to view a container's logs; add the --previous option to view the logs of the previous, crashed container instance.
If the Pod has more than one container, specify the container name to view that specific container's logs.
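For example, assuming a Pod named my-pod with a container named my-container (both names are placeholders):

```shell
# View the logs of the pod's container
kubectl logs my-pod

# View the logs of the previous (crashed) container instance
kubectl logs my-pod --previous

# View the logs of one specific container in a multi-container pod
kubectl logs my-pod -c my-container
```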
Node-level logs
Application logs
Logs that a containerized application writes to standard output and standard error are redirected by the container engine's logging driver. With Docker, for example, this is the logging driver, which in a Kubernetes setup is configured to write to a file in JSON format (the json-file driver).
Note that the Docker logging driver cannot merge multi-line messages into a single log entry; multi-line handling has to be done in the log collection tool.
If a container restarts, the kubelet keeps one terminated container together with its logs. If a Pod is evicted, all of its containers are removed, including their logs.
Node logs also need to be rotated so that they do not consume too much of the node's disk. Kubernetes itself is not currently responsible for rotating these logs; that is left to the deployment, either by the application itself or by configuring the container environment, for example with Docker's --log-opt options.
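As a sketch of that last option, Docker's json-file driver can be told to rotate container logs itself; the size and file-count values below are only illustrative:

```shell
# Let the Docker daemon rotate container logs written by the json-file driver
# (the same options can be set in /etc/docker/daemon.json)
dockerd --log-driver=json-file \
        --log-opt max-size=10m \
        --log-opt max-file=3
```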
When kubectl logs is executed, the kubelet on the corresponding node reads the log file directly and returns its contents. Note that if an external system performs the rotation and the log has been split into multiple files, kubectl logs only returns the contents of the latest file.
System component logs
System components also produce logs, and they fall into two categories: (1) components that run inside a container, and (2) components that run outside a container.
Components that run outside a container, such as the kubelet and Docker: if the node uses systemd, they write their logs to journald; if not, they write log files under /var/log.
Components that run inside a container, such as kube-proxy or the scheduler: they write their logs to /var/log rather than relying on the default stdout mechanism.
Similarly, logs written to /var/log need to be rotated.
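For example, on a systemd-managed node the kubelet's logs can be read from journald (assuming the unit is named kubelet):

```shell
# Read the kubelet's logs from journald
journalctl -u kubelet

# Follow new log entries as they arrive
journalctl -u kubelet -f
```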
Cluster-level logging solutions
Since Kubernetes does not provide a cluster-level logging solution out of the box, the common approaches are:
Use a node-level logging agent that runs on every node
Use a dedicated sidecar container inside the application's Pod to handle its logs
Have the application write its logs directly to the back-end log store
Each of these approaches is described briefly below.
Using a node-level logging agent
Run the agent as a DaemonSet so that a Pod runs on every node and reads the node's log files directly. This only works for containers that write their logs to standard output and standard error.
A common combination is Elasticsearch + Fluentd.
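A minimal sketch of such a DaemonSet is shown below. It only illustrates the shape of the solution: the image name is a placeholder (any Fluentd/Fluent Bit style collector works), and how the agent is pointed at Elasticsearch depends on that image's configuration.

```yaml
# Sketch of a node-level logging agent: one Pod per node, reading the node's
# log files from hostPath mounts. The image name is a placeholder.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
```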
Using a dedicated sidecar container
With a sidecar container (a container that runs in the same Pod as the application container), there are two variants: (1) the sidecar streams the application's logs to its own standard output; (2) the sidecar runs a logging agent that collects the application's logs.
# streaming Sidecar container:
Because the streaming sidecar writes to its own standard output and standard error, it can take advantage of the kubelet and the node-level logging agent already running on each node. The sidecar reads logs from a file, a socket, or journald and writes them to its own standard output or standard error.
This approach separates the different log streams coming from different parts of the application, even when those parts cannot write to standard output or standard error themselves. The redirection only handles a small amount of log data at a time, so the overhead is negligible. And because the sidecar's standard output and standard error are handled by the kubelet, the logs can still be viewed with kubectl logs.
Although the sidecar is an extra container, it can be as simple as running a tail command (see the sketch at the end of this subsection). Sidecar itself is a common design pattern.
In addition, the node-level logging agent picks these logs up automatically, without any further configuration. If needed, the agent can also be configured to parse the logs differently depending on the source container.
Note that although this approach is cheap in CPU and memory, writing the logs to a file and then streaming them to standard output roughly doubles the disk usage. If the application writes to a single log file, it is usually better to have it write to standard output directly than to add a streaming sidecar.
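Below is a minimal sketch of the streaming sidecar, close in spirit to the example in the official Logging Architecture doc: the application writes to a file on a shared emptyDir volume, and the sidecar simply tails that file to its own standard output. The busybox image and all names here are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-streaming-sidecar
spec:
  containers:
  - name: app
    image: busybox
    # Stands in for an application that can only log to a file
    args: [/bin/sh, -c, 'while true; do echo "$(date) app log line" >> /var/log/app.log; sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: log-streamer
    image: busybox
    # Tails the file to its own stdout, so
    # `kubectl logs app-with-streaming-sidecar -c log-streamer` shows the app's logs
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```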
# Sidecar container with log agent:
If the node-level logging agent is not flexible enough for your needs, you can run a log collection agent in a sidecar container and configure it specifically for the application. However, this kind of sidecar consumes considerably more resources, and its logs cannot be viewed with kubectl logs because they do not go through the kubelet.
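A rough sketch of the agent variant, under the same assumptions as above (shared emptyDir volume, placeholder images); a real collector would also need its own configuration telling it which files to read and where to ship them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-agent-sidecar
spec:
  containers:
  - name: app
    image: busybox
    # Again stands in for an application that logs to a file
    args: [/bin/sh, -c, 'while true; do echo "$(date) app log line" >> /var/log/app.log; sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: log-agent
    # Placeholder: any collector image (fluentd, fluent-bit, filebeat, ...)
    # configured to read /var/log/app.log and push it to the logging back end
    image: fluent/fluent-bit
    volumeMounts:
    - name: varlog
      mountPath: /var/log
      readOnly: true
  volumes:
  - name: varlog
    emptyDir: {}
```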
Exposing logs directly from the application
The last option is to have the application write its logs directly to the back-end store. This is essentially a log storage question that has little to do with the Kubernetes cluster itself, so it is not discussed further here.
Finally, a summary.
In Kubernetes Cluster, there are several types of logs that need to be processed:
Logs outside containers: the kubelet, Docker, and other components running on the node
Logs inside containers, written to standard output / standard error: containerized applications, including cluster components such as kube-proxy and etcd
Logs inside containers, not written to standard output / standard error: containerized applications that write to files inside the container
Run a log collection Pod on every node with a DaemonSet to gather all of these.