How to use Prometheus Operator to implement custom metric monitoring


Many newcomers are unclear about how to use Prometheus Operator to implement custom metric monitoring. To help solve this problem, the following article explains it in detail; anyone with this need can learn from it, and I hope you will gain something.

In past articles, we have spent a lot of time talking about monitoring. That is because when you manage a Kubernetes cluster, everything changes at an extremely rapid pace, so it is extremely important to have a tool that monitors the health and resource metrics of the cluster.

In Rancher 2.5, we introduced a new version of monitoring based on Prometheus Operator, which provides native Kubernetes deployment and management of Prometheus and related monitoring components. Prometheus Operator allows you to monitor the status of cluster nodes, Kubernetes components, and application workloads. It also lets you define alerts on the metrics collected by Prometheus and create custom dashboards, so the collected metrics can be easily visualized through Grafana.

The new version of monitoring also ships prometheus-adapter, which allows developers to scale their workloads with HPA based on custom metrics.

In this article, we will explore how to use Prometheus Operator to scrape custom metrics and use them for advanced workload management.

Install Prometheus

Installing Prometheus in Rancher 2.5 is extremely simple. Simply visit Cluster Explorer -> Apps and install rancher-monitoring.

You should be aware of the following default settings:

prometheus-adapter is enabled as part of the chart installation

serviceMonitorNamespaceSelector is left empty, allowing Prometheus to collect ServiceMonitors from all namespaces (a values sketch follows below)
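For reference, here is a minimal sketch of what those defaults correspond to in the chart values, assuming rancher-monitoring exposes the usual kube-prometheus-stack keys (the exact key names and layout are an assumption and may differ between chart versions):

prometheus:
  prometheusSpec:
    # Empty selector: discover ServiceMonitors in every namespace (assumed key name)
    serviceMonitorNamespaceSelector: {}

# The prometheus-adapter sub-chart is installed and enabled by default (assumed key name)
prometheus-adapter:
  enabled: true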

After the installation is complete, we can access the monitoring component from Cluster Explorer.

Deploy workload

Now let's deploy a sample workload that exposes custom metrics from the application layer. The workload is a simple application that has been instrumented with the Prometheus client_golang library and exposes some custom metrics on the /metrics endpoint.

It exposes two metrics:

http_requests_total

http_request_duration_seconds

The following manifest deploys the workload, the associated Service, and an Ingress for accessing the workload:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: prometheus-example-app
  name: prometheus-example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-example-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: prometheus-example-app
    spec:
      containers:
      - name: prometheus-example-app
        image: gmehta3/demo-app:metrics
        ports:
        - name: web
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-example-app
  labels:
    app.kubernetes.io/name: prometheus-example-app
spec:
  selector:
    app.kubernetes.io/name: prometheus-example-app
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: web
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus-example-app
spec:
  rules:
  - host: hpa.demo
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-example-app
          servicePort: 8080
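A quick note on the Ingress: hpa.demo is only an example hostname. For the load-generation step later to work, it needs to resolve to your ingress controller, for instance through a local hosts entry; the exact setup depends on your environment and is an assumption here.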

Deploy ServiceMonitor

ServiceMonitor is a custom resource definition (CRD) that allows us to declaratively define how a dynamic set of services should be monitored.

You can visit the following link to see the complete ServiceMonitor specification:

https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#servicemonitor

Now, let's deploy a ServiceMonitor so that Prometheus can scrape the pods backing the prometheus-example-app Kubernetes Service.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-example-app
  endpoints:
  - port: web
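One detail worth noting: the endpoints port value ("web") refers to the name of the port defined on the Service above, and the selector labels must match the Service's labels, so this ServiceMonitor ties directly back to the manifest we just deployed.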

As you can see, users can now browse ServiceMonitors in the Rancher monitoring UI.

Soon, the new ServiceMonitor and the pods associated with the service should show up in Prometheus service discovery.

We can also see the metrics in Prometheus.
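For example, querying something like rate(http_requests_total{service="prometheus-example-app"}[5m]) in the Prometheus UI (the same expression the dashboard below uses, minus the response-code filter) should show the per-second request rate of the sample app; label values will depend on your deployment.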

Deploy the Grafana dashboard

In Rancher 2.5, monitoring allows users to store Grafana dashboards as ConfigMaps in the cattle-dashboards namespace.

Users or cluster administrators can now add more dashboards to this namespace to extend Grafana's custom dashboards.

Dashboard ConfigMap example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-example-app-dashboard
  namespace: cattle-dashboards
  labels:
    grafana_dashboard: "1"
data:
  prometheus-example-app.json: |
    {
      "annotations": {
        "list": [
          {
            "builtIn": 1,
            "datasource": "-- Grafana --",
            "enable": true,
            "hide": true,
            "iconColor": "rgba(0, 211, 255, 1)",
            "name": "Annotations & Alerts",
            "type": "dashboard"
          }
        ]
      },
      "editable": true,
      "gnetId": null,
      "graphTooltip": 0,
      "links": [],
      "panels": [
        {
          "aliasColors": {},
          "bars": false,
          "dashLength": 10,
          "dashes": false,
          "datasource": null,
          "fieldConfig": {"defaults": {"custom": {}}, "overrides": []},
          "fill": 1,
          "fillGradient": 0,
          "gridPos": {"h": 9, "w": 12, "x": 0, "y": 0},
          "hiddenSeries": false,
          "id": 2,
          "legend": {"avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false},
          "lines": true,
          "linewidth": 1,
          "nullPointMode": "null",
          "percentage": false,
          "pluginVersion": "7.1.5",
          "pointradius": 2,
          "points": false,
          "renderer": "flot",
          "seriesOverrides": [],
          "spaceLength": 10,
          "stack": false,
          "steppedLine": false,
          "targets": [
            {
              "expr": "rate(http_requests_total{code=\"200\", service=\"prometheus-example-app\"}[5m])",
              "instant": false,
              "interval": "",
              "legendFormat": "",
              "refId": "A"
            }
          ],
          "thresholds": [],
          "timeFrom": null,
          "timeRegions": [],
          "timeShift": null,
          "title": "http_requests_total_200",
          "tooltip": {"shared": true, "sort": 0, "value_type": "individual"},
          "type": "graph",
          "xaxis": {"buckets": null, "mode": "time", "name": null, "show": true, "values": []},
          "yaxes": [
            {"format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true},
            {"format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true}
          ],
          "yaxis": {"align": false, "alignLevel": null}
        },
        {
          "aliasColors": {},
          "bars": false,
          "dashLength": 10,
          "dashes": false,
          "datasource": null,
          "description": "",
          "fieldConfig": {"defaults": {"custom": {}}, "overrides": []},
          "fill": 1,
          "fillGradient": 0,
          "gridPos": {"h": 8, "w": 12, "x": 0, "y": 9},
          "hiddenSeries": false,
          "id": 4,
          "legend": {"avg": false, "current": false, "max": false, "min": false, "show": true, "total": false, "values": false},
          "lines": true,
          "linewidth": 1,
          "nullPointMode": "null",
          "percentage": false,
          "pluginVersion": "7.1.5",
          "pointradius": 2,
          "points": false,
          "renderer": "flot",
          "seriesOverrides": [],
          "spaceLength": 10,
          "stack": false,
          "steppedLine": false,
          "targets": [
            {
              "expr": "rate(http_requests_total{code!=\"200\", service=\"prometheus-example-app\"}[5m])",
              "interval": "",
              "legendFormat": "",
              "refId": "A"
            }
          ],
          "thresholds": [],
          "timeFrom": null,
          "timeRegions": [],
          "timeShift": null,
          "title": "http_requests_total_not_200",
          "tooltip": {"shared": true, "sort": 0, "value_type": "individual"},
          "type": "graph",
          "xaxis": {"buckets": null, "mode": "time", "name": null, "show": true, "values": []},
          "yaxes": [
            {"format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true},
            {"format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true}
          ],
          "yaxis": {"align": false, "alignLevel": null}
        }
      ],
      "schemaVersion": 26,
      "style": "dark",
      "tags": [],
      "templating": {"list": []},
      "time": {"from": "now-15m", "to": "now"},
      "timepicker": {"refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d"]},
      "timezone": "",
      "title": "prometheus example app",
      "version": 1
    }

Users should now be able to access the prometheus-example-app dashboard in Grafana.

HPA with custom metrics

This section assumes that you have prometheus-adapter installed as part of the monitoring stack. By default, the monitoring installation includes prometheus-adapter.
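For background, prometheus-adapter is what turns Prometheus series into metrics served through the Kubernetes custom metrics API. The chart ships with default rules; the sketch below is only an illustration of what such a rule looks like (the exact rule set and values layout used by rancher-monitoring are an assumption), exposing the http_requests_total counter as an http_requests rate metric:

rules:
# Hypothetical custom rule: expose the http_requests_total counter as an
# "http_requests" per-second rate attached to namespace/pod resources.
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'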

Users can now create an HPA spec, as shown below:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: prometheus-example-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: prometheus-example-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Object
    object:
      describedObject:
        kind: Service
        name: prometheus-example-app
      metric:
        name: http_requests
      target:
        type: AverageValue
        averageValue: "5"

You can check the following link for more information about HPA:

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

We will use the custom metric derived from http_requests_total to automatically scale the pods.
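Before generating load, you can optionally check that the adapter is serving the metric through the custom metrics API (the namespace here, default, is an assumption; adjust it to wherever the sample app is deployed):

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/prometheus-example-app/http_requests"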

Now we can generate some sample load to see HPA in action. We can use hey for this.

hey -c 10 -n 5000 http://hpa.demo
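While the load runs, you can watch the autoscaler react (the HPA name matches the manifest above):

kubectl get hpa prometheus-example-app-hpa -w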

Developers and cluster administrators can use this stack to monitor their workloads, deploy visualizations, and take advantage of the advanced workload management capabilities available in Kubernetes.

Was the above content helpful to you? If you would like to learn more or read more related articles, please follow the industry information channel. Thank you for your support.
