Introduction and configuration method of Prometheus-operator

This article introduces Prometheus Operator and how to configure it: what the Operator does, how to install and uninstall it with Helm, its architecture, and the custom resources it defines, each with a sample configuration.

With the rise of cloud-native practices, monitoring containers, services, nodes, and clusters has become increasingly important. Prometheus, the de facto standard for Kubernetes monitoring, is powerful and has a rich ecosystem. However, it is not distributed, does not support importing or exporting data, and does not allow monitoring targets or alerting rules to be modified through an API, so using it directly often requires scripts and custom code to simplify operations. Prometheus Operator provides simple definitions for monitoring Kubernetes services and deployments and for managing Prometheus instances, simplifying the deployment, management, and operation of Prometheus and Alertmanager clusters on Kubernetes.

Function

Prometheus Operator (hereinafter referred to as the Operator) provides the following functions:

Create / destroy: easily launch a Prometheus instance in a Kubernetes namespace for a specific application or team using the Operator.

Convenient configuration: configure the basics of Prometheus, such as version, storage, and replicas, through Kubernetes resources.

Target services via labels: monitoring target configurations are generated automatically from familiar Kubernetes label queries; there is no need to learn a Prometheus-specific configuration language.

Prerequisites

Prometheus Operator versions later than 0.18.0 require a Kubernetes cluster version higher than 1.8.0. If you are just starting out with Prometheus Operator, it is recommended that you use the latest version.

If you are running older versions of Kubernetes and Prometheus Operator, it is recommended that you upgrade Kubernetes first and then Prometheus Operator.
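As a quick sanity check before installing, you can confirm the cluster and client versions (these are standard kubectl and helm commands, suggested here rather than taken from the original walkthrough):

# Check the Kubernetes client and server versions (the server must be higher than 1.8.0)
kubectl version

# Check the Helm client version (this article assumes Helm 2, whose install command uses --name)
helm version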

Install and uninstall

Quick install

Install Prometheus Operator using Helm. Installing with Helm creates, configures, and manages the Prometheus cluster inside the Kubernetes cluster, and the chart contains the following components:

prometheus-operator

prometheus

alertmanager

node-exporter

kube-state-metrics

grafana

Monitoring services for collecting metrics from internal Kubernetes components:

kube-apiserver

kube-scheduler

kube-controller-manager

etcd

kube-dns/coredns

kube-proxy

Install the chart with the release name my-release:

helm install --name my-release stable/prometheus-operator

This installs prometheus-operator with its default configuration into the cluster. The chart's configuration file (values.yaml) lists the options that can be set during installation.

Prometheus Operator, Alertmanager, and Grafana are installed by default and will scrape basic cluster metrics.
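To verify the release after installation (a suggested check, not part of the original text; my-release matches the release name used above):

# Show the status of the Helm release
helm status my-release

# List the pods created by the chart in the current namespace
kubectl get pods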

Uninstall

Uninstall the my-release release:

helm delete my-release

This command removes all Kubernetes components associated with this chart.

The CRDs created by this chart are not deleted by default and need to be removed manually:

kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
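To confirm which of the operator's CRDs exist before or after cleanup, a quick check (suggested, not from the original text):

# List any CRDs belonging to the monitoring.coreos.com API group
kubectl get crd | grep monitoring.coreos.com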

Architecture

[Figure: Prometheus Operator architecture diagram]

In the architecture, each component runs in the Kubernetes cluster in a different way:

Operator: deploys and manages Prometheus Server according to custom resources (Custom Resource Definitions / CRDs), and watches for changes to these custom resources, handling them accordingly. It is the control center of the whole system.

Prometheus: declares the desired state of a Prometheus deployment; the Operator ensures that the running deployment always matches the definition.

Prometheus Server: the Prometheus server cluster that the Operator deploys based on what is defined in the Prometheus custom resource; it can be thought of as the StatefulSet that manages the Prometheus Server cluster.

ServiceMonitor: declares the services to be monitored, describing the list of targets Prometheus scrapes. This resource selects the corresponding Service Endpoints through labels, and Prometheus Server obtains metrics through the selected Services.

Service: simply put, the object that Prometheus monitors.

Alertmanager: declares the desired state of an Alertmanager deployment; the Operator ensures that the running deployment always matches the definition.

Custom resources

Prometheus Operator defines the following four types of custom resources:

Prometheus

ServiceMonitor

Alertmanager

PrometheusRule

Prometheus

The Prometheus custom resource (CRD) declares the desired settings for a Prometheus deployment running in the Kubernetes cluster, including configuration options such as the number of replicas, persistent storage, and the Alertmanagers to which the Prometheus instance sends alerts.

For each Prometheus resource, the Operator deploys a correctly configured Prometheus StatefulSet in the same namespace. The Prometheus pods mount a Secret that contains the Prometheus configuration. The Operator generates this configuration based on the selected ServiceMonitors and keeps the Secret up to date. Changes to ServiceMonitors or to the Prometheus resource are continuously reconciled through the same process.
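As an illustration, you can inspect what the Operator created (a suggested check, not from the original text; the exact Secret name and key depend on the operator version, but the Secret is typically named prometheus-<prometheus-name>):

# Show the StatefulSet and the generated configuration Secret in the default namespace
kubectl get statefulset,secret -n default | grep prometheus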

A sample configuration is as follows:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  # omitted
spec:
  alerting:
    alertmanagers:
    - name: prometheus-prometheus-oper-alertmanager  # the Alertmanager cluster this Prometheus sends alerts to, in the default namespace
      namespace: default
      pathPrefix: /
      port: web
  baseImage: quay.io/prometheus/prometheus
  replicas: 2                           # two replicas of the Prometheus "cluster"; Prometheus itself has no clustering, these are simply two identical instances to avoid a single point of failure
  ruleSelector:                         # this Prometheus uses PrometheusRules carrying the labels prometheus=k8s and role=alert-rules
    matchLabels:
      prometheus: k8s
      role: alert-rules
  serviceMonitorNamespaceSelector: {}   # the namespaces in which this Prometheus looks for ServiceMonitors (empty selects all)
  serviceMonitorSelector:               # this Prometheus uses ServiceMonitors carrying the label k8s-app=node-exporter; if not declared, all are selected
    matchLabels:
      k8s-app: node-exporter
  version: v2.10.0

Prometheus configuration

ServiceMonitor

The ServiceMonitor custom resource (CRD) declares how a dynamically defined set of services should be monitored. It uses label selection to define the set of services to monitor. This lets organizations introduce conventions for how metrics are exposed; as long as these conventions are followed, new services are discovered and monitored automatically without reconfiguring the system.

To monitor an application in a Kubernetes cluster with Prometheus Operator, an Endpoints object must exist. An Endpoints object is essentially a list of IP addresses. Typically, Endpoints objects are created by a Service: the Service discovers Pods through its label selector and adds them to the Endpoints object.

A Service can expose one or more service ports, which are typically backed by multiple endpoints pointing to Pods. This is reflected in the corresponding Endpoints objects.

Prometheus Operator introduces the ServiceMonitor object, which discovers the Endpoints object and configures Prometheus to monitor these Pods.
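For illustration, a minimal Service that such a ServiceMonitor could select might look like the following sketch (not from the original article; the labels and port are chosen to match the node-exporter ServiceMonitor example shown later):

apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: default
  labels:
    app: node-exporter
    k8s-app: node-exporter
spec:
  selector:
    app: node-exporter    # selects the node-exporter Pods; Kubernetes builds the Endpoints object from them
  ports:
  - name: metrics
    port: 9100
    targetPort: 9100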

The endpoints section of the ServiceMonitor spec is used to configure which ports of these Endpoints are scraped for metrics and with which parameters. For advanced use cases, you may want to monitor ports of backing Pods that are not directly part of the service endpoints. Therefore, when an endpoint is specified in the endpoints section, it is used strictly.

Note: endpoints (lowercase) is a field in ServiceMonitor CRD, while Endpoints (uppercase) is the Kubernetes resource type.

A ServiceMonitor and the targets it discovers may come from any namespace. This is important for cross-namespace monitoring, such as meta-monitoring. Use serviceMonitorNamespaceSelector in the Prometheus spec to restrict the namespaces from which a given Prometheus server selects ServiceMonitors. Use namespaceSelector in the ServiceMonitor spec to restrict the namespaces from which Endpoints objects may be discovered. To discover targets in all namespaces, set the namespaceSelector to match any namespace:

spec:
  namespaceSelector:
    any: true

A sample configuration is as follows:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter   # this ServiceMonitor carries the k8s-app=node-exporter label, so it is selected by the Prometheus above
  name: node-exporter
  namespace: default
spec:
  selector:
    matchLabels:             # Endpoints carrying the labels app=node-exporter and k8s-app=node-exporter are selected
      app: node-exporter
      k8s-app: node-exporter
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s            # scrape these Endpoints every 30 seconds
    targetPort: 9100         # the metrics port of these Endpoints is 9100
    scheme: https
  jobLabel: k8s-app

ServiceMonitor configuration

Alertmanager

The Alertmanager custom resource (CRD) declares the desired settings for an Alertmanager deployment running in the Kubernetes cluster. It also provides options for configuring replicas and persistent storage.

For each Alertmanager resource, the Operator deploys a correctly configured StatefulSet in the same namespace. The Alertmanager pods mount a Secret that holds the configuration file under the key alertmanager.yaml.

When two or more replicas are configured, the Operator runs the Alertmanager instances in high-availability mode.
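For illustration, the configuration Secret could be created as follows (a sketch, not from the original article; by convention the Secret is named alertmanager-<alertmanager-name>, so the name below assumes the Alertmanager resource shown in the next sample):

# alertmanager.yaml holds the Alertmanager configuration (receivers, routes, and so on)
kubectl create secret generic alertmanager-prometheus-prometheus-oper-alertmanager \
  --from-file=alertmanager.yaml \
  -n default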

A sample configuration is as follows:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager   # an Alertmanager object
metadata:
  name: prometheus-prometheus-oper-alertmanager
spec:
  baseImage: quay.io/prometheus/alertmanager
  replicas: 3        # the Alertmanager cluster has three nodes
  version: v0.17.0

Alertmanager configuration

PrometheusRule

The PrometheusRule CRD declares the Prometheus rules needed by one or more Prometheus instances.

Alerting and recording rules can be saved and applied as YAML files and are loaded dynamically without restarting Prometheus.

A sample configuration is as follows:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:            # the labels of this PrometheusRule match the ruleSelector above, so it is selected by Prometheus
    prometheus: k8s
    role: alert-rules
  name: prometheus-k8s-rules
spec:
  groups:
  - name: k8s.rules
    rules:           # a rule group containing a single alerting rule that fires when the kubelet disappears from target discovery
    - alert: KubeletDown
      annotations:
        message: Kubelet has disappeared from Prometheus target discovery.
      expr: |
        absent(up{job="kubelet"} == 1)
      for: 15m
      labels:
        severity: critical

PrometheusRule configuration
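Recording rules use the same structure. A minimal sketch (the rule name and expression below are illustrative assumptions, not taken from the original article):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: k8s
    role: alert-rules
  name: example-recording-rules
spec:
  groups:
  - name: node.rules
    rules:
    # precompute per-instance CPU utilisation under a new metric name
    - record: instance:node_cpu_utilisation:avg1m
      expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[1m]))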

[Figure: relationship between the Prometheus Operator custom resources]

Advantages of Prometheus Operator

All of Prometheus Operator's API objects are validated by the API server against the schema defined in the CRD. When configuration is stored in a ConfigMap, there is no validation at all: if the configuration file is miswritten, the only symptom is that monitoring stops working, and troubleshooting is complicated. With Prometheus Operator, the configuration in Prometheus, ServiceMonitor, and PrometheusRule objects is schema-checked, and an apply that fails validation produces an immediate error, greatly reducing the risk of broken configuration.
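For example, a manifest with a misspelled field is typically reported at apply time (a sketch; the exact behaviour and error text depend on the Kubernetes and operator versions):

# "intervall" is a deliberate typo; with the CRD schema published, validation
# reports the unknown field instead of silently accepting a broken configuration
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: broken-example
spec:
  selector:
    matchLabels:
      app: node-exporter
  endpoints:
  - port: metrics
    intervall: 30s
EOF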

Prometheus Operator turns Prometheus into a platform service on Kubernetes. With API objects such as Prometheus and Alertmanager, it is quick and simple to create and manage Prometheus and Alertmanager services in the cluster to meet the monitoring needs of different business units and domains.

ServiceMonitor and PrometheusRule solve the problem of maintaining Prometheus configuration by hand. Developers no longer need to learn Prometheus's configuration file format, nor push configuration files into Pods via CI and Kubernetes ConfigMaps and trigger webhook-based hot reloads; they only need to modify these two kinds of objects.

This concludes the introduction to Prometheus Operator and its configuration. Pairing the concepts above with hands-on practice is the best way to learn, so give it a try.
