This article explains the key points of Kubernetes log collection and monitoring/alerting. The content is simple and clear and easy to follow; work through it with the editor to learn these topics.
1. Collection and management of container logs
Log collection scenarios
Log collection scenarios are mainly divided into the following four categories:
Cluster core component logs:
kube-apiserver logs are needed for auditing, kube-scheduler logs for diagnosing scheduling problems, and Ingress logs for access-layer traffic analysis.
Host kernel logs:
Kernel logs help developers and operations engineers diagnose problems that affect node stability, such as file system errors, network stack errors, and device driver faults.
Container runtime logs:
Docker is the most common container runtime; Docker and kubelet logs can be used to troubleshoot Pod creation and startup failures.
Business application logs:
Analyzing application logs reveals the state of the business and helps diagnose anomalies.
Log collection goals
Kubernetes expects container logs to be handled at the cluster level (cluster-level logging), that is, independently of the life cycle of any container, Pod, or node.
For a single container, when the application writes logs to stdout and stderr, Docker by default writes those streams to a JSON file on the host.
Log collection methods
Kubernetes itself does not collect logs on the user's behalf. To achieve cluster-level log processing, log collection and management must be planned before the cluster is built.
Kubernetes recommends three logging architectures.
Log collection method 1: use a node-level log agent
The core is a logging agent (fluentd, etc.).
The logging agent runs on every node as a DaemonSet.
It mounts the container log directory from the host.
It forwards logs to backend storage (Elasticsearch, etc.).
Advantages: no intrusion into applications or Pods; only one agent needs to be deployed per node.
Disadvantages: requires the application to write its logs directly to the container's stdout and stderr. A minimal DaemonSet sketch is shown below.
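The following is a minimal sketch of such a node-level agent, assuming a Fluentd image shipping to Elasticsearch; the image tag, names, and namespace are illustrative, and a real deployment also needs a Fluentd configuration and RBAC.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logging
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd-logging
  template:
    metadata:
      labels:
        app: fluentd-logging
    spec:
      containers:
        - name: fluentd
          # Illustrative image/tag; pick one that matches your backend.
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: dockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log                        # Pod/container log symlinks live here
        - name: dockercontainers
          hostPath:
            path: /var/lib/docker/containers      # Docker's default json-file log directory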
Log collection method 2: use a sidecar container plus the node-level log agent
The application container writes all or part of its logs to a file.
One or more sidecar containers read that file and re-emit the logs on their own stdout and stderr.
Advantages: log collection method 1 can continue to be used unchanged.
Disadvantages: the disk footprint roughly doubles, which is wasteful (the log exists both as the application's file and as the sidecar's stdout stream persisted by the container runtime). A sketch of such a Pod follows.
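A minimal sketch, assuming the application writes to /var/log/app/app.log in a shared emptyDir volume; the images, names, and log path are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-streamer
spec:
  containers:
    - name: app
      image: busybox            # stands in for the real application image
      args: [/bin/sh, -c, 'while true; do echo "$(date) app log line" >> /var/log/app/app.log; sleep 1; done']
      volumeMounts:
        - name: applog
          mountPath: /var/log/app
    - name: log-streamer        # sidecar: tails the file and re-emits it on stdout
      image: busybox
      args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
      volumeMounts:
        - name: applog
          mountPath: /var/log/app
  volumes:
    - name: applog
      emptyDir: {}

With this layout, the node-level agent from method 1 picks the log up from the sidecar's stdout, and kubectl logs app-with-log-streamer -c log-streamer still shows the application log.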
Log collection method 3: use a sidecar container that is itself a log agent
This is equivalent to integrating the logging agent directly into the Pod.
The application writes logs to stdout/stderr or to files.
The logging agent's input source is the application's log file.
Advantages: simple to deploy and friendly to the host (nothing extra has to run on the node).
Disadvantages:
1. The sidecar container may consume considerable resources and can even starve the application container.
2. The logs cannot be viewed with the kubectl logs command, because the agent ships them straight to the backend. A sketch follows below.
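A minimal sketch, assuming a Fluentd sidecar configured via a ConfigMap (named fluentd-config here) to tail the shared log file and forward it to the backend; all names and the image tag are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-agent-sidecar
spec:
  containers:
    - name: app
      image: my-app:latest             # illustrative application image; writes /var/log/app/app.log
      volumeMounts:
        - name: applog
          mountPath: /var/log/app
    - name: logging-agent              # reads the shared file and ships it directly to the backend
      image: fluent/fluentd:v1.16      # illustrative tag
      volumeMounts:
        - name: applog
          mountPath: /var/log/app
        - name: fluentd-config
          mountPath: /fluentd/etc      # default config directory of the official fluentd image
  volumes:
    - name: applog
      emptyDir: {}
    - name: fluentd-config
      configMap:
        name: fluentd-config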
Summary:
There are three ways to achieve cluster-level log collection:
Use a node-level logging agent.
Use a sidecar container plus the node-level log agent.
Use a sidecar container with built-in log agent functionality.
Suggestion: use method 1, i.e. have the application write its logs to stdout/stderr and process them centrally with a logging agent deployed directly on each host.
It is easy to manage.
The kubectl logs command can still be used to view logs.
The host may already have mature log collection components, such as rsyslogd, available.
Type selection recommendation
2. Collection and management of container monitoring metrics
Monitoring scenarios
By type, monitoring can be divided into the following scenarios:
Resource monitoring:
CPU, memory, network, and other resource metrics, usually reported as absolute values or percentages; this is the most common form of monitoring.
Performance monitoring:
Monitoring inside the application. Deeper metrics are usually obtained through hook mechanisms, either implicitly in the virtual machine or bytecode-execution layer or explicitly injected at the application layer, and are mainly used for diagnosis and tuning.
For example, through JVM hooks one can obtain metrics such as the number of garbage collections, the distribution of the various memory regions, and the number of network connections, and use them to diagnose and tune the application.
Security monitoring:
A set of security-oriented monitoring policies, such as detecting unauthorized access and scanning for security vulnerabilities.
Event monitoring:
A monitoring mode unique to Kubernetes; it fits the Kubernetes design philosophy and complements conventional monitoring.
Why does event monitoring fit the Kubernetes design philosophy? Because Kubernetes is designed around state-machine transitions. A transition between normal states produces a Normal-level event, while a transition from a normal state to an abnormal state produces a Warning-level event. The Warning-level events are usually the ones we care about.
Event monitoring stores Normal-level and Warning-level events in a data platform, where analysis and alerting expose the corresponding anomalies via SMS or e-mail, compensating for the blind spots of conventional monitoring. An illustrative Warning event is shown below.
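For reference, this is roughly what a Warning-level event object looks like in the core v1 API; the field values here are illustrative (real events can be listed with kubectl get events).

apiVersion: v1
kind: Event
metadata:
  name: example-pod.180f1a2b3c4d5e6f   # illustrative name
  namespace: default
type: Warning
reason: FailedScheduling
message: "0/3 nodes are available: 3 Insufficient memory."
involvedObject:
  kind: Pod
  name: example-pod
  namespace: default
count: 4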
The origin and present situation of Prometheus
Like Kubernetes, Prometheus traces back to the Borg system. Its prototype, BorgMon, was an internal monitoring system born at roughly the same time as Borg. The motivation for launching the Prometheus project was similar to that of Kubernetes: to bring the design ideas of Google's internal systems to outside developers and users in a friendlier form.
Kubernetes monitoring used to be very fragmented, but today it has converged on a unified approach with Prometheus at its core.
The architecture and working mode of Prometheus
Prometheus metric sources
Host monitoring data: exposed with the help of Node Exporter. An exporter exposes, on behalf of the monitored object, metrics that Prometheus can scrape.
The /metrics API of Kubernetes components such as the API server and kubelet: besides CPU and memory, it also exposes each component's core monitoring metrics.
Monitoring data for the Kubernetes core concepts: metrics for Pods, Nodes, containers, Services, and so on; the container-related metrics come from the cAdvisor service built into the kubelet. A minimal scrape-configuration sketch follows.
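A minimal Prometheus scrape-configuration sketch covering two of these sources; the node-exporter targets and job names are illustrative, and the API-server job assumes Prometheus runs inside the cluster with a service account allowed to use the discovery API.

global:
  scrape_interval: 30s

scrape_configs:
  # Host metrics exposed by Node Exporter (assumed to listen on :9100).
  - job_name: node-exporter
    static_configs:
      - targets: ['node1:9100', 'node2:9100']

  # The Kubernetes API server's /metrics endpoint, discovered via the
  # endpoints of the default/kubernetes service.
  - job_name: kube-apiserver
    kubernetes_sd_configs:
      - role: endpoints
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        regex: default;kubernetes;https
        action: keep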
Characteristics of Prometheus
A simple yet powerful integration standard: any component that implements the Prometheus client interface can be scraped directly.
Multiple data collection modes, including online and offline collection, push and pull, and federation.
Full compatibility with Kubernetes.
A rich plug-in mechanism and ecosystem.
Help from the Prometheus Operator, which automates the operation and maintenance of Prometheus; a ServiceMonitor sketch is shown below.
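As an example of that automation, with the Prometheus Operator installed a scrape target can be declared with a ServiceMonitor custom resource instead of hand-editing the scrape configuration; the names, labels, and port below are illustrative and must match a Service that exposes a /metrics endpoint.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    release: prometheus        # illustrative: must match the Operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: example-app         # selects the Service fronting the application
  endpoints:
    - port: web                # named Service port that serves /metrics
      interval: 30s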
Thank you for reading. This has been an overview of the key points of Kubernetes log collection and monitoring/alerting. After studying this article, you should have a deeper understanding of the topic; the specifics still need to be verified in practice.