
Using Prometheus to monitor Traefik, Redis, and the kubelet on each node of a Kubernetes cluster


1. Prometheus obtains metric data by pulling from a public HTTP(S) endpoint. We do not need to install a separate monitoring agent; a service only has to expose a metrics endpoint, and Prometheus will pull data from it on a schedule. For an ordinary HTTP service, we can reuse the service directly and simply add a /metrics endpoint for Prometheus to scrape.
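For illustration, scraping such an endpoint returns plain text in the Prometheus exposition format; the service address, metric name, and labels below are made up:

$ curl http://my-service:8080/metrics
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{code="200",method="get"} 1027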

2. Even services that do not natively expose such an endpoint can be monitored through an exporter, such as mysqld_exporter, node_exporter, or redis_exporter. These exporters play the role of the agent in traditional monitoring systems: they collect metric data from the target service and expose it directly to Prometheus.
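As a quick illustration of the exporter pattern (the port mapping below is just an example; the image is the official node-exporter image used again later in this article), an exporter can be run standalone and scraped immediately:

$ docker run -d -p 9100:9100 prom/node-exporter
$ curl 127.0.0.1:9100/metrics | head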

Monitor Traefik via its metrics endpoint

1. Modify the configuration file traefik.toml and add the following to enable the metrics endpoint.

[metrics]
  [metrics.prometheus]
    entryPoint = "traefik"
    buckets = [0.1, 0.3, 1.2, 5.0]

2. Then recreate the traefik ConfigMap and redeploy the traefik Pod:

$ kubectl get configmap -n kube-system
traefik-conf 1 83d

$ kubectl delete configmap traefik-conf -n kube-system

$ kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system

$ kubectl apply -f traefik.yaml

$ kubectl get svc -n kube-system | grep traefik
traefik-ingress-service NodePort 10.100.222.78 80:31657/TCP,8080:31572/TCP

$ curl 10.100.222.78:8080/metrics

$ curl 192.168.1.243:31572/metrics
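If the endpoint is working, the response contains Traefik request counters and histograms with buckets matching those configured above. The exact metric names depend on the Traefik version, so the lines below are only indicative:

# TYPE traefik_entrypoint_requests_total counter
traefik_entrypoint_requests_total{code="200",entrypoint="traefik",method="GET",protocol="http"} 42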

3. Update the Prometheus configuration file and add a job for traefik:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-ops
data:
  prometheus.yml: |
    global:
      scrape_interval: 30s
      scrape_timeout: 30s
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'traefik'
      static_configs:
      - targets: ['traefik-ingress-service.kube-system.svc.cluster.local:8080']

$ kubectl apply -f prometheus-cm.yaml  # update the Prometheus ConfigMap
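If you want to sanity-check the embedded prometheus.yml before applying it, promtool (shipped with the Prometheus release) can validate a config file, assuming it is installed locally:

$ promtool check config prometheus.yml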

Since the Service for Traefik is named traefik-ingress-service and lives in the kube-system namespace (while Prometheus runs in kube-ops), the target here must be written as an FQDN: traefik-ingress-service.kube-system.svc.cluster.local.
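To confirm that this FQDN resolves inside the cluster, you can run a one-off lookup from a throwaway Pod (busybox is used here only as an example image; exact kubectl run flags vary by version):

$ kubectl run -it --rm --restart=Never dns-test --image=busybox -- nslookup traefik-ingress-service.kube-system.svc.cluster.local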

$ kubectl get svc -n kube-ops | grep prometheus
prometheus NodePort 10.102.197.83 9090:32619/TCP

$ curl -X POST "http://192.168.1.243:32619/-/reload"  # reload the configuration (this may take a moment); using the reload endpoint avoids having to restart the prometheus pod

Use redis_exporter to monitor a Redis service

redis_exporter is deployed as a sidecar container in the same Pod as redis.

1. Create the Pod and Service:

$ docker pull redis:4

$ docker pull oliver006/redis_exporter:latest

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  namespace: kube-ops
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9121"
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
      - name: redis-exporter
        image: oliver006/redis_exporter:latest
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9121
---
kind: Service
apiVersion: v1
metadata:
  name: redis
  namespace: kube-ops
spec:
  selector:
    app: redis
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  - name: prom
    port: 9121
    targetPort: 9121

$ kubectl get svc -n kube-ops | grep redis
redis ClusterIP 10.105.241.59 6379/TCP,9121/TCP

$ curl 10.105.241.59:9121/metrics  # check that the exporter is serving metrics
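Among other things, the output should include the exporter's own up indicator, roughly like this (abbreviated):

# TYPE redis_up gauge
redis_up 1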

2. Add a redis job and update the Prometheus ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-ops
data:
  prometheus.yml: |
    global:
      scrape_interval: 30s
      scrape_timeout: 30s
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'traefik'
      static_configs:
      - targets: ['traefik-ingress-service.kube-system.svc.cluster.local:8080']
    - job_name: 'redis'
      static_configs:
      - targets: ['redis:9121']

Since our redis Service and Prometheus are in the same namespace, we can use the Service name directly as the target.

$ kubectl apply -f prometheus-cm.yaml  # update the configuration

$ kubectl get svc -n kube-ops | grep prometheus
prometheus NodePort 10.102.197.83 9090:32619/TCP

$ curl -X POST "http://10.102.197.83:9090/-/reload"  # make the configuration take effect

Then check the targets page: http://192.168.1.243:32619/targets
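Once the redis target shows as UP on the targets page, a simple query in the Prometheus web UI can confirm that data is flowing in; redis_up and redis_memory_used_bytes are examples of metrics exposed by redis_exporter (exact names can vary with the exporter version):

redis_up
redis_memory_used_bytes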

Use node-exporter to monitor all nodes and the kubelet of each node in a k8s cluster

1. Deploy node-exporter

The service is deployed with a DaemonSet controller so that every node automatically runs one such Pod:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-ops
  labels:
    name: node-exporter
spec:
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /

Since the data we want is monitoring metrics of the host, while node-exporter runs inside a container, we need to give the Pod some extra privileges. Here we add three settings, hostPID: true, hostIPC: true, and hostNetwork: true, so that the container uses the host's PID namespace, IPC namespace, and network. These namespaces are the key technologies behind container isolation; note that namespace here is a completely different concept from a namespace in the Kubernetes cluster.

Because hostNetwork: true is specified, port 9100 is bound on every node, and the monitoring metric data can be fetched through it:

$ curl 127.0.0.1:9100/metrics

$ curl 127.0.0.1:10255/metrics  # the kubelet's metrics via its read-only port 10255

2. Add the jobs and update the Prometheus configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-ops
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 15s
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'traefik'
      static_configs:
      - targets: ['traefik-ingress-service.kube-system.svc.cluster.local:8080']
    - job_name: 'redis'
      static_configs:
      - targets: ['redis:9121']
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-kubelet'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:10255'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)

Under Kubernetes, Prometheus mainly supports five service discovery modes through its integration with the Kubernetes API: Node, Service, Pod, Endpoints, and Ingress.
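For reference, the mode is selected with the role field of kubernetes_sd_configs; a minimal sketch (the job name here is arbitrary):

- job_name: 'example'
  kubernetes_sd_configs:
  - role: endpoints   # one of: node, service, pod, endpoints, ingress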

The kubelet listens on three ports by default: 10250 (the main API), 10255 (the read-only port), and 10248 (the healthz port), as its configuration file shows:

$ vim /var/lib/kubelet/config.yaml
healthzPort: 10248
port: 10250
readOnlyPort: 10255
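These values can be checked directly on a node; for example, the healthz port should simply answer ok:

$ curl 127.0.0.1:10248/healthz
ok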

When Prometheus discovers targets with the node role, it uses port 10250 by default, but that port no longer serves /metrics data; the kubelet now exposes its read-only data through port 10255. So the port has to be replaced, but should it be replaced with 10255? For the kubernetes-nodes job, no: that job is meant to scrape the node metrics collected by node-exporter, and since we set hostNetwork: true above, node-exporter binds port 9100 on every node. So in that job we replace 10250 with 9100, while the kubernetes-kubelet job replaces it with 10255.
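As a concrete illustration of this relabeling, using the node IP that appears earlier in this article:

# discovered target:      __address__ = 192.168.1.243:10250
# regex '(.*):10250' captures 192.168.1.243 into ${1}
# kubernetes-nodes job:   replacement '${1}:9100'  -> 192.168.1.243:9100  (node-exporter)
# kubernetes-kubelet job: replacement '${1}:10255' -> 192.168.1.243:10255 (kubelet read-only port)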

$ kubectl apply -f prometheus-cm.yaml

$ kubectl get svc -n kube-ops | grep prometheus
prometheus NodePort 10.102.197.83 9090:32619/TCP

$ curl -X POST "http://10.102.197.83:9090/-/reload"  # make the configuration take effect
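After the reload, the new node targets should appear on the targets page, and a simple query in the Prometheus web UI, such as the one below, should return one series per node (metric name as exposed by node-exporter v0.16):

node_load1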
