
6 Best Practices for Kubernetes Logging

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article shares six best practices for Kubernetes logging. The editor finds them very practical and shares them here for your reference. I hope you get something out of this article.

Kubernetes helps manage the lifecycle of hundreds of containers deployed in Pods. It is highly distributed, and its parts are dynamic. A deployed Kubernetes environment usually involves several clusters, with nodes hosting hundreds of containers that are constantly started and destroyed based on workload.

When dealing with a large number of containerized applications and workloads in Kubernetes, it is important to proactively monitor and debug errors, which can surface at the container, node, or cluster level. Kubernetes's logging mechanism is a crucial component for managing and monitoring services and infrastructure. In Kubernetes, logging lets you track errors and even tune the performance of the containers that host your applications.

Configure stdout (standard output) and stderr (standard error) data streams

The first step is to understand how logs are generated. In Kubernetes, a container's logs are sent to two data streams: stdout and stderr. These streams are written to JSON files on the node, and Kubernetes handles this process internally. You can configure which logs go to which stream. A common best practice is to send all application logs to stdout and all error logs to stderr.
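As a sketch of this practice, the minimal Pod below (the name, image, and messages are illustrative, not from the article) writes normal output to stdout and errors to stderr:

```yaml
# Hypothetical Pod: application messages go to stdout, errors to stderr.
apiVersion: v1
kind: Pod
metadata:
  name: logging-demo
spec:
  containers:
  - name: app
    image: busybox
    args:
    - /bin/sh
    - -c
    - 'while true; do echo "app message"; echo "something failed" >&2; sleep 10; done'
```

Both streams are then retrievable with `kubectl logs logging-demo`, and the kubelet stores them as JSON lines on the node.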

Decide whether to use the Sidecar model

Kubernetes recommends using a sidecar container to collect logs. In this approach, each application container has an adjacent "streaming" container that forwards all log streams to stdout and stderr. The sidecar model helps avoid exposing logs at the node level and gives you control over logs at the container level.
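A minimal sketch of the sidecar pattern, assuming the application writes to a file rather than to stdout (pod, container, and volume names are illustrative): the sidecar tails the shared file and re-emits it on its own stdout.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app                    # writes its log to a shared file
    image: busybox
    args: [/bin/sh, -c, 'while true; do date >> /var/log/app/app.log; sleep 5; done']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-streamer           # sidecar: streams the file to stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n +1 -F /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}                 # shared scratch volume for the log file
```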

However, the problem with this model is that it works well only for low-volume logging; at scale it can consume significant resources, because you need to run a separate logging container for every running application container. The Kubernetes documentation describes the sidecar model as having "almost no significant overhead." It is up to you to try the model and measure the resources it consumes before adopting it.

The alternative is to use a logging agent that collects logs at the node level. This reduces overhead and ensures logs are handled reliably. Fluentd has become a popular choice for aggregating Kubernetes logs at scale. It acts as a bridge between Kubernetes and any number of endpoints where you want to consume Kubernetes logs. You can also choose a Kubernetes management platform such as Rancher, which already integrates Fluentd in its app catalog, so you do not have to install and configure it from scratch.
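As an illustration of node-level collection, a Fluentd agent typically tails the container log files the kubelet writes on each node. The snippet below is a minimal, hypothetical source block using Fluentd's in_tail plugin (paths follow common Kubernetes defaults and may differ in your cluster):

```
<source>
  @type tail
  path /var/log/containers/*.log        # container logs written by the kubelet
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json                          # each line is a JSON log record
  </parse>
</source>
```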

Once you have decided that Fluentd fits your needs for aggregating and routing log data, the next step is to decide how to store and analyze it.

Choose a log analysis tool: EFK or a dedicated logging tool

Traditionally, for server-centric on-premises systems, application logs are stored in log files on the system. These files can be read in a defined location or shipped to a central server. In Kubernetes, however, all logs are sent to JSON files on disk under /var/log. This kind of log storage is not reliable, because Pods on a node are ephemeral: when a Pod is deleted, its log files are lost. That makes troubleshooting difficult if you need the missing log data.

Kubernetes officially suggests two options: send all logs to Elasticsearch, or use a third-party logging tool of your choice. Taking the Elasticsearch route means adopting a complete stack, the EFK stack, comprising Elasticsearch, Fluentd, and Kibana. Each tool has its own role. As mentioned above, Fluentd aggregates and routes logs. Elasticsearch is a powerful platform for analyzing raw log data and producing readable output. Kibana is an open source data visualization tool that can build attractive custom dashboards from your log data. This is a fully open source stack and a powerful logging solution for Kubernetes.
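To sketch the Fluentd-to-Elasticsearch leg of the EFK stack, a match block like the following forwards everything tagged `kubernetes.*`. The host assumes a hypothetical `elasticsearch` Service in a `logging` namespace, and the output requires the fluent-plugin-elasticsearch plugin:

```
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc.cluster.local   # illustrative Service DNS name
  port 9200
  logstash_format true                           # time-based indices Kibana can discover
</match>
```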

Still, there are some things to keep in mind. Elasticsearch is built and maintained by a company called Elastic, with contributions from a large open source community of developers. Although it is battle-tested and can handle large-scale queries quickly and powerfully, problems can arise when it is operated at scale. If you run self-managed Elasticsearch, someone on your team needs to know how to operate the platform at scale.

The alternative is to use a cloud-based log analysis tool to store and analyze Kubernetes logs. Tools such as Sumo Logic and Splunk are good examples. Some of these tools use Fluentd to route logs to their platforms, while others provide their own custom logging agents that sit at the node level in Kubernetes. These tools are simple to set up and let you build dashboards over your logs with minimal effort.

Use RBAC to control access to logs

In Kubernetes, the authentication mechanism uses role-based access control (RBAC) to verify a user's access and system permissions. Audit logs generated during operation are annotated with whether the user was authorized (authorization.k8s.io/decision) and the reason access was granted (authorization.k8s.io/reason). Audit logging is not enabled by default. It is recommended that you enable it to track authentication issues; it is configured on the kube-apiserver.
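As a sketch of restricting log access with RBAC, the hypothetical Role below grants read access to Pod logs in a single namespace; you would bind it to trusted users with a RoleBinding. (Audit logging itself is enabled with kube-apiserver flags such as --audit-policy-file and --audit-log-path.)

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-reader           # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]   # pods/log is the subresource kubectl logs reads
  verbs: ["get", "list"]
```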

Keep the log format consistent

Kubernetes logs are generated by different components of the Kubernetes architecture. These aggregated logs should share a consistent format so that log aggregation tools such as Fluentd or Fluent Bit can process them more easily. Keep this in mind, for example, when configuring stdout and stderr, or when assigning tags and metadata in Fluentd. Feeding such structured logs to Elasticsearch also reduces latency during log analysis.
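For example, a Fluentd record_transformer filter can stamp every record with consistent metadata before it reaches Elasticsearch (the field names and values here are illustrative):

```
<filter kubernetes.**>
  @type record_transformer
  <record>
    cluster_name "prod-east"     # hypothetical field added to every record
    log_source "kubernetes"
  </record>
</filter>
```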

Set resource limits on the log collection daemon

Because large volumes of logs are generated, they are hard to manage at the cluster level. In Kubernetes, a DaemonSet runs a pod in the background on every node to perform a specific task, much like a daemon in Linux. Fluentd and Filebeat are two daemons commonly run on Kubernetes for log collection. You should set resource limits for each daemon so that log collection is optimized for the available system resources.
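A trimmed, hypothetical DaemonSet spec for a Fluentd collector with resource requests and limits might look like this (the image tag and numbers are illustrative; tune them to your nodes):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16
        resources:
          requests:            # guaranteed baseline per node
            cpu: 100m
            memory: 200Mi
          limits:              # hard cap so the collector cannot starve workloads
            cpu: 500m
            memory: 500Mi
```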

Kubernetes consists of multiple layers and components, so good monitoring and tracing lets us stay calm in the face of failures. Kubernetes encourages logging with seamlessly integrated, Kubernetes-native external tools, which make logs easier for administrators to access. The practices in this article are important for building a robust logging architecture that holds up in any situation. They consume compute resources in an optimized way and keep the Kubernetes environment secure and performant.

These are the six best practices for Kubernetes logging; they cover knowledge points you may encounter or use in daily work. I hope you can learn more from this article. For more details, please follow the industry information channel.
