What are the Kubernetes Resource Metrics API and the Custom Metrics API?


Today I will talk about the Kubernetes resource metrics API and the custom metrics API. Many people may not know much about them, so to make things easier to follow I have summarized the contents below; I hope you get something out of this article.

In the past, Heapster was used to collect resource metrics, but Heapster is now deprecated.

The resource metrics API has been available since Kubernetes 1.8.

Resource metrics: metrics-server (core metrics)

Custom metrics: Prometheus + k8s-prometheus-adapter (converts the data collected by Prometheus into the Kubernetes metrics API format)

Prometheus data in Kubernetes must be translated by k8s-prometheus-adapter before the cluster (for example, the HPA) can use it.

Next-generation architecture:

Core metrics pipeline:

Consists of kubelet, metrics-server, and the API exposed through the API server; covers cumulative CPU usage, real-time memory usage, pod resource usage, and container disk usage.

Monitoring pipeline:

Used to collect all kinds of metrics from the system and serve them to end users, storage systems, and the HPA. It includes core metrics as well as many non-core metrics; the non-core metrics cannot be parsed by Kubernetes itself.
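As a quick illustration of the split (my own addition, not from the original), the two pipelines surface as separate aggregated API groups once their components are installed: metrics.k8s.io after metrics-server (Chapter 2), and custom.metrics.k8s.io after k8s-prometheus-adapter (Chapter 3). A minimal check:

kubectl api-versions | grep metrics.k8s.io
# expected once both chapters are done:
#   custom.metrics.k8s.io/v1beta1   (k8s-prometheus-adapter, Chapter 3)
#   metrics.k8s.io/v1beta1          (metrics-server, Chapter 2)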


Chapter 2, installation and deployment of metrics-server

1. Download the yaml files and install

Project address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server. Select the branch that matches your cluster version; mine is v1.10.0, so I use the v1.10.0 branch here.

[root@k8s-master_01 manifests]# mkdir metrics-server
[root@k8s-master_01 manifests]# cd metrics-server
[root@k8s-master_01 metrics-server]# for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml; do wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/cluster/addons/metrics-server/$file; done    # remember to download the files in raw format
[root@k8s-master_01 metrics-server]# grep image: ./*    # check which images are used. If the nodes can reach the public network, ignore this; otherwise pull the images in advance and change the image names in the configuration file. The images can also be found on Alibaba Cloud.
./metrics-server-deployment.yaml:        image: k8s.gcr.io/metrics-server-amd64:v0.2.1
./metrics-server-deployment.yaml:        image: k8s.gcr.io/addon-resizer:1.8.1
[root@k8s-node_01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/criss/addon-resizer:1.8.1    # pull the images manually on every node; note that this version number has no leading "v"
[root@k8s-node_01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/k8s-kernelsky/metrics-server-amd64:v0.2.1
[root@k8s-master_01 metrics-server]# grep image: metrics-server-deployment.yaml    # after pointing the deployment at the mirrored images
        image: registry.cn-hangzhou.aliyuncs.com/k8s-kernelsky/metrics-server-amd64:v0.2.1
        image: registry.cn-hangzhou.aliyuncs.com/criss/addon-resizer:1.8.1
[root@k8s-master_01 metrics-server]# kubectl apply -f .
[root@k8s-master_01 metrics-server]# kubectl get pod -n kube-system
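As an extra sanity check (my addition, not part of the original steps), metrics-apiservice.yaml registers an APIService with the aggregation layer, so you can confirm the registration went through and becomes available once the pod is ready:

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl describe apiservice v1beta1.metrics.k8s.io    # the Available condition should turn True once metrics-server is serving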

2. Verification

[root@k8s-master01 ~]# kubectl api-versions | grep metrics
metrics.k8s.io/v1beta1
[root@k8s-node01 ~]# kubectl proxy --port=8080    # open a new terminal and start the proxy
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1    # see which resources this API group contains
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods    # it may take a little while before there is data
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
[root@k8s-node01 ~]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   176m         4%     3064Mi          39%
k8s-node01     62m          1%     4178Mi          54%
k8s-node02     65m          1%     2141Mi          27%
[root@k8s-node01 ~]# kubectl top pods
NAME                CPU(cores)   MEMORY(bytes)
node-affinity-pod   0m           1Mi
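A small alternative worth knowing (my addition): instead of running kubectl proxy, the aggregated API can be queried directly through kubectl:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods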

3. Things to watch out for

1. In newer versions (v1.11 and above) there is a problem: metrics-server reads data from the kubelet Summary API, which by default is served on port 10255. Port 10255 is plain HTTP, and upstream considers HTTP insecure, so it was disabled in favor of port 10250, which serves HTTPS. The data source therefore has to be changed from

--source=kubernetes.summary_api:''

to

--source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true

which means: talk HTTPS on port 10250, but if the certificate cannot be verified, still allow the insecure, unverified connection.

[root@k8s-node01 deploy]# grep source=kubernetes metrics-server-deployment.yaml

2. In the new version the ClusterRole in resource-reader.yaml is missing the nodes/stats permission, so it has to be added manually:

[root@k8s-node01 deploy]# grep nodes/stats resource-reader.yaml
[root@k8s-node01 deploy]# cat resource-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats    # add this line
  - namespaces

3. Testing on version 1.12.3 showed that the following changes are also needed for a successful deployment (the permission change above is still required; other versions have not been tested):

[root@k8s-master-01 metrics-server]# vim metrics-server-deployment.yaml
command:    # metrics-server container: change the command arguments to the following
- /metrics-server
- --metric-resolution=30s
- --kubelet-port=10250
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
command:    # metrics-server-nanny container: change the command arguments to the following
- /pod_nanny
- --config-dir=/etc/config
- --cpu=40m
- --extra-cpu=0.5m
- --memory=40Mi
- --extra-memory=4Mi
- --threshold=5
- --deployment=metrics-server-v0.3.1
- --container=metrics-server
- --poll-period=300000
- --estimator=exponential
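If kubectl top keeps returning errors or empty data, the quickest way to see which of the problems above you are hitting is usually the metrics-server log (my addition; the label and container name below are the ones used by the addon manifests in this chapter):

kubectl -n kube-system get pod -l k8s-app=metrics-server
kubectl -n kube-system logs -l k8s-app=metrics-server -c metrics-server --tail=50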

Chapter 3, installation and deployment of prometheus

Project address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus (the prometheus addon only exists in v1.11.0 and above, so I chose v1.11.0 to deploy).

1. Download the yaml files and pre-deployment preparation

[root@k8s-node01 ~]# cd /mnt/
[root@k8s-node01 mnt]# git clone https://github.com/kubernetes/kubernetes.git    # to save picking files one by one, I simply clone the whole kubernetes project
[root@k8s-node01 mnt]# cd kubernetes/cluster/addons/prometheus/
[root@k8s-node01 prometheus]# git checkout v1.11.0
[root@k8s-node01 prometheus]# cd ..
[root@k8s-node01 addons]# cp -r prometheus /root/manifests/
[root@k8s-node01 manifests]# cd prometheus/
[root@k8s-node01 prometheus]# grep -w "namespace: kube-system" ./*    # by default prometheus lives in the kube-system namespace; we deploy it to its own namespace to make later management easier
./alertmanager-configmap.yaml:  namespace: kube-system
...
[root@k8s-node01 prometheus]# sed -i 's/namespace: kube-system/namespace: k8s-monitor/g' ./*
[root@k8s-node01 prometheus]# grep storage: ./*    # the installation needs two PVs, which we create below
./alertmanager-pvc.yaml:      storage: "2Gi"
./prometheus-statefulset.yaml:          storage: "16Gi"
[root@k8s-node01 prometheus]# cat pv.yaml    # note the second PV's storageClassName
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alertmanager
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/v1
    server: 172.16.150.158
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard
spec:
  capacity:
    storage: 25Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard    # must match the storageClassName requested under volumeClaimTemplates in prometheus-statefulset.yaml
  nfs:
    path: /data/volumes/v2
    server: 172.16.150.158
[root@k8s-node01 prometheus]# kubectl create namespace k8s-monitor
[root@k8s-node01 prometheus]# mkdir node-exporter kube-state-metrics alertmanager prometheus    # put each component in its own directory to make deployment and management easier
[root@k8s-node01 prometheus]# mv node-exporter-* node-exporter
[root@k8s-node01 prometheus]# mv alertmanager-* alertmanager
[root@k8s-node01 prometheus]# mv kube-state-metrics-* kube-state-metrics
[root@k8s-node01 prometheus]# mv prometheus-* prometheus

2. Install node-exporter (used to collect node metrics)

[root@k8s-node01 prometheus]# grep -r image: node-exporter/*
node-exporter/node-exporter-ds.yml:    image: "prom/node-exporter:v0.15.2"    # an unofficial image that can be pulled even without special network access, so there is no need to download it in advance
[root@k8s-node01 prometheus]# kubectl apply -f node-exporter/
daemonset.extensions "node-exporter" created
service "node-exporter" created
[root@k8s-node01 prometheus]# kubectl get pod -n k8s-monitor
NAME                  READY   STATUS    RESTARTS   AGE
node-exporter-l5zdw   1/1     Running   0          1m
node-exporter-vwknx   1/1     Running   0          1m
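To confirm node-exporter is really serving data (my addition, not in the original), list the endpoints behind its headless service and fetch metrics from one of them; 9100 is the port shown in the service listing in the next step:

kubectl -n k8s-monitor get endpoints node-exporter    # one <pod-IP>:9100 endpoint per node
curl -s http://<pod-IP>:9100/metrics | head            # replace <pod-IP> with one of the listed endpoints; raw Prometheus metrics should be printed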

3. Install prometheus

[root@k8s-master_01 prometheus]# kubectl apply -f pv.yaml
persistentvolume "alertmanager" configured
persistentvolume "standard" created
[root@k8s-master_01 prometheus]# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
alertmanager   5Gi        RWO,RWX        Recycle          Available                                   9s
standard       25Gi       RWO            Recycle          Available                                   9s
[root@k8s-node01 prometheus]# grep -i image prometheus/*    # check whether any images need to be downloaded in advance
[root@k8s-node01 prometheus]# vim prometheus-service.yaml    # the default prometheus service type is ClusterIP; to reach it from outside the cluster, change it to NodePort
...
  type: NodePort
  ports:
  - name: http
    port: 9090
    protocol: TCP
    targetPort: 9090
    nodePort: 30090
...
[root@k8s-node01 prometheus]# kubectl apply -f prometheus/
[root@k8s-node01 prometheus]# kubectl get pod -n k8s-monitor
NAME                  READY   STATUS    RESTARTS   AGE
node-exporter-l5zdw   1/1     Running   0          24m
node-exporter-vwknx   1/1     Running   0          24m
prometheus-0          2/2     Running   0          1m
[root@k8s-node01 prometheus]# kubectl get svc -n k8s-monitor
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
node-exporter   ClusterIP   None          <none>        9100/TCP         25m
prometheus      NodePort    10.96.9.121   <none>        9090:30090/TCP   22m
[root@k8s-master_01 prometheus]# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                       STORAGECLASS   REASON   AGE
alertmanager   5Gi        RWO,RWX        Recycle          Available                                                                       1h
standard       25Gi       RWO            Recycle          Bound       k8s-monitor/prometheus-data-prometheus-0    standard                1h

Access prometheus in a browser at <node IP>:30090 (the NodePort configured above).
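If you want to verify from the command line before opening the UI (my addition), Prometheus exposes an HTTP API on the same NodePort; the simple query `up` lists every scrape target it currently sees:

curl -s 'http://<node-IP>:30090/api/v1/query?query=up'    # replace <node-IP> with any node address; a non-empty result list means scraping is working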

4. Deploy kube-state-metrics (exposes Kubernetes object state as metrics that Prometheus can scrape)

[root@k8s-node01 kube-state-metrics]# grep image: ./*
./kube-state-metrics-deployment.yaml:        image: quay.io/coreos/kube-state-metrics:v1.3.0
./kube-state-metrics-deployment.yaml:        image: k8s.gcr.io/addon-resizer:1.7
[root@k8s-node02 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/ccgg/addon-resizer:1.7
[root@k8s-node01 kube-state-metrics]# vim kube-state-metrics-deployment.yaml    # change the addon-resizer image address to the mirror pulled above
[root@k8s-node01 kube-state-metrics]# kubectl apply -f kube-state-metrics-deployment.yaml
deployment.extensions "kube-state-metrics" configured
[root@k8s-node01 kube-state-metrics]# kubectl get pod -n k8s-monitor
NAME                                  READY   STATUS    RESTARTS   AGE
kube-state-metrics-54849b96b4-dmqtk   2/2     Running   0          23s
node-exporter-l5zdw                   1/1     Running   0          2h
node-exporter-vwknx                   1/1     Running   0          2h
prometheus-0                          2/2     Running   0          1h

5. Deploy k8s-prometheus-adapter (exposes Prometheus data as an aggregated API service)

Project address: https://github.com/DirectXMan12/k8s-prometheus-adapter

[root@k8s-master01 ~]# cd /etc/kubernetes/pki/
[root@k8s-master01 pki]# (umask 077; openssl genrsa -out serving.key 2048)
[root@k8s-master01 pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"    # the CN must be serving
[root@k8s-master01 pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
[root@k8s-master01 pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n k8s-monitor    # the secret name must be cm-adapter-serving-certs
[root@k8s-master01 pki]# kubectl get secret -n k8s-monitor
[root@k8s-master01 pki]# cd
[root@k8s-node01 ~]# git clone https://github.com/DirectXMan12/k8s-prometheus-adapter.git
[root@k8s-node01 ~]# cd k8s-prometheus-adapter/deploy/manifests/
[root@k8s-node01 manifests]# grep namespace: ./*    # change the namespace to k8s-monitor everywhere except in the role-binding files
[root@k8s-node01 manifests]# grep image: ./*    # this image does not need to be downloaded in advance
[root@k8s-node01 manifests]# sed -i 's/namespace: custom-metrics/namespace: k8s-monitor/g' ./*    # do not replace it in the rolebinding files
[root@k8s-node01 manifests]# kubectl apply -f ./
[root@k8s-node01 manifests]# kubectl get pod -n k8s-monitor
[root@k8s-node01 manifests]# kubectl get svc -n k8s-monitor
[root@k8s-node01 manifests]# kubectl api-versions | grep custom
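Once kubectl api-versions shows the custom.metrics.k8s.io group, the adapter can be queried directly (my addition; exactly which metric names appear depends on what Prometheus is scraping):

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1    # lists every metric the adapter currently exposes, as one large JSON document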

Chapter 4, deployment of prometheus + grafana

[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/influxdb/grafana.yaml    # I could not find a standalone grafana yaml, so I took the one from heapster and commented out the influxdb environment variables
[root@k8s-master01 ~]# egrep -i "influxdb|namespace|nodeport" grafana.yaml    # modify the namespace and the service port type
[root@k8s-master01 ~]# kubectl apply -f grafana.yaml
[root@k8s-master01 ~]# kubectl get svc -n k8s-monitor
[root@k8s-master01 ~]# kubectl get pod -n k8s-monitor

Log in to grafana and modify the data source

Configure the data source

Click Dashboards on the right to import the Prometheus templates that come with grafana.

Go back to home and select the corresponding template to view the data.


However, the templates that ship with grafana do not match our data well. We can download Kubernetes templates from the grafana website instead; the address is https://grafana.com/dashboards.

Visit the grafana website and search for k8s-related templates. Sometimes the search box does not respond when clicked; in that case simply append the search term to the URL.

We chose the "Kubernetes cluster (Prometheus)" template for the test.

Click the template you need to download and download the json file

When the download is complete, import the file

Choose to upload a file

Select data source after import

The interface displayed after import

Chapter 5, implementation of HPA

1. Test with the v1 API (autoscaling/v1)

[root@k8s-master01 alertmanager]# kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
[root@k8s-master01 manifests]# cat deploy-demon.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32222
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: httpd
          containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "200m"
[root@k8s-master01 manifests]# kubectl apply -f deploy-demon.yaml
[root@k8s-master01 manifests]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP             47d
my-nginx     NodePort    10.104.13.148   <none>        80:32008/TCP        19d
myapp        NodePort    10.100.76.180   <none>        80:32222/TCP        16s
tomcat       ClusterIP   10.106.222.72   <none>        8080/TCP,8009/TCP   19d
[root@k8s-master01 manifests]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-h7zcb   1/1     Running   0          16s
myapp-deploy-5db497dbfb-tvsf5   1/1     Running   0          16s

test

[root@k8s-master01 manifests]# kubectl autoscale deployment myapp-deploy --min=1 --max=8 --cpu-percent=60
deployment.apps "myapp-deploy" autoscaled
[root@k8s-master01 manifests]# kubectl get hpa
NAME           REFERENCE                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp-deploy   Deployment/myapp-deploy   <unknown>/60%   1         8         0          22s
[root@k8s-master01 pod-dir]# yum install httpd-tools -y    # provides the ab benchmarking tool
[root@k8s-master01 pod-dir]# ab -c 1000 -n 5000000 http://172.16.150.213:32222/index.html
[root@k8s-master01 ~]# kubectl describe hpa
Name:                                                  myapp-deploy
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Sun, 16 Dec 2018 20:34:41 +0800
Reference:                                             Deployment/myapp-deploy
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  178% (178m) / 60%
Min replicas:                                          1
Max replicas:                                          8
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     False   BackoffBoth       the time since the previous scale is still within both the downscale and upscale forbidden windows
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    ScaleUpLimit      the desired replica count is increasing faster than the maximum scale rate
Events:
  Type    Reason             Age  From                       Message
  ----    ------             ---  ----                       -------
  Normal  SuccessfulRescale  19m  horizontal-pod-autoscaler  New size: 1; reason: All metrics below target
  Normal  SuccessfulRescale  2m   horizontal-pod-autoscaler  New size: 2; reason: cpu resource utilization (percentage of request) above target
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-6kssf   1/1     Running   0          2m
myapp-deploy-5db497dbfb-h7zcb   1/1     Running   0          24m
[root@k8s-master01 ~]# kubectl get hpa
NAME           REFERENCE                 TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
myapp-deploy   Deployment/myapp-deploy   178%/60%   1         8         2          20m

2. Test with v2beta1 (autoscaling/v2beta1)

[root@k8s-master01 pod-dir]# cat hpa-demo.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deploy
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 55
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 100Mi
[root@k8s-master01 pod-dir]# kubectl delete hpa myapp-deploy
horizontalpodautoscaler.autoscaling "myapp-deploy" deleted
[root@k8s-master01 pod-dir]# kubectl apply -f hpa-demo.yaml
horizontalpodautoscaler.autoscaling "myapp-hpa-v2" created
[root@k8s-master01 pod-dir]# kubectl get hpa
NAME           REFERENCE                 TARGETS                          MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp-deploy   <unknown>/100Mi, <unknown>/55%   1         10        0          6s

test
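The load for this run is generated the same way as in the v1 test; re-running the earlier ab command against the NodePort service (my re-statement, using the node IP from this environment) pushes CPU well above the 55% target:

ab -c 1000 -n 5000000 http://172.16.150.213:32222/index.html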

[root@k8s-master01 ~]# kubectl describe hpa
Name:          myapp-hpa-v2
Namespace:     default
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp:  Sun, 16 Dec 2018 21:07:25 +0800
Reference:          Deployment/myapp-deploy
Metrics:            ( current / target )
  resource memory on pods:                              1765376 / 100Mi
  resource cpu on pods (as a percentage of request):    200% (200m) / 55%
Min replicas:  1
Max replicas:  10
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    SucceededRescale    the HPA controller was able to update the target scale to 4
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:
  Type    Reason             Age  From                       Message
  ----    ------             ---  ----                       -------
  Normal  SuccessfulRescale  18s  horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-5n885   1/1     Running   0          26s
myapp-deploy-5db497dbfb-h7zcb   1/1     Running   0          40m
myapp-deploy-5db497dbfb-z2tqd   1/1     Running   0          26s
myapp-deploy-5db497dbfb-zkjhw   1/1     Running   0          26s
[root@k8s-master01 ~]# kubectl describe hpa    # after the load stops
Name:          myapp-hpa-v2
Namespace:     default
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp:  Sun, 16 Dec 2018 21:07:25 +0800
Reference:          Deployment/myapp-deploy
Metrics:            ( current / target )
  resource memory on pods:                              1765376 / 100Mi
  resource cpu on pods (as a percentage of request):    0 / 55%
Min replicas:  1
Max replicas:  10
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     False   BackoffBoth         the time since the previous scale is still within both the downscale and upscale forbidden windows
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:
  Type    Reason             Age  From                       Message
  ----    ------             ---  ----                       -------
  Normal  SuccessfulRescale  6m   horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  34s  horizontal-pod-autoscaler  New size: 1; reason: All metrics below target
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-h7zcb   1/1     Running   0          46m

3. Use v2beta1 to test custom metrics
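In this step a test image that exposes an http_requests metric is deployed, and the HPA scales on that metric. A note of my own before the manifests below: after everything is applied, the metric only rises if traffic is sent to the service, and the value the HPA sees can be read straight from the custom metrics API. A sketch, assuming the NodePort defined below and the metric name consumed by the HPA:

while true; do curl -s http://172.16.150.213:32222/ > /dev/null; done    # generate requests against the NodePort service (Ctrl-C to stop)
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests"    # per-pod value of the custom metric, as seen by the HPA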

[root@k8s-master01 pod-dir]# cat ../deploy-demon-metrics.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32222
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/metrics-app    # test image that exposes an http_requests metric
        ports:
        - name: httpd
          containerPort: 80
[root@k8s-master01 pod-dir]# kubectl apply -f deploy-demon-metrics.yaml
[root@k8s-master01 pod-dir]# cat hpa-custom.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deploy
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods    # note the type
    pods:
      metricName: http_requests    # the custom metric exposed by the container
      targetAverageValue: 800m     # "m" denotes milli-units of the request count (800m = 0.8)
[root@k8s-master01 pod-dir]# kubectl apply -f hpa-custom.yaml
[root@k8s-master01 pod-dir]# kubectl describe hpa myapp-hpa-v2
Name:          myapp-hpa-v2
Namespace:     default
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp:  Sun, 16 Dec 2018 22:09:32 +0800
Reference:          Deployment/myapp-deploy
Metrics:            ( current / target )
  "http_requests" on pods:  <unknown> / 800m
Min replicas:  1
Max replicas:  10
Events:        <none>
[root@k8s-master01 pod-dir]# kubectl get hpa
NAME           REFERENCE                 TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp-deploy   <unknown>/800m   1         10        2          5m

Having read the above, do you now have a better understanding of the Kubernetes resource metrics API and the custom metrics API? If you want to learn more, please follow the industry information channel. Thank you for your support.
