This article walks through the resource metrics API and the custom metrics API in Kubernetes, with hands-on examples you can use as a reference.
In the past, Heapster was used to collect resource metrics, but Heapster is now deprecated.
Starting with Kubernetes v1.8, resource metrics are exposed through the API itself.
Resource metrics: metrics-server
Custom metrics: Prometheus, k8s-prometheus-adapter
The new-generation architecture therefore consists of:
1) Core metrics pipeline: made up of the kubelet, metrics-server, and the API exposed by the API server; it provides cumulative CPU usage, real-time memory usage, pod resource usage, and container disk usage.
2) Monitoring pipeline: collects all kinds of metrics from the system and serves them to end users, storage systems, and the HPA; it includes the core metrics plus many non-core metrics. Kubernetes itself cannot interpret non-core metrics.
metrics-server is an aggregated API server that collects only CPU usage, memory usage, and similar core resource metrics.
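metrics-server plugs into the main API through the aggregation layer. As a rough sketch, the registration looks like the following APIService object (this mirrors the stock metrics-apiservice.yaml shipped in the addon directory; exact field values can differ between versions):

apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io            # the group/version this aggregated server provides
spec:
  service:
    name: metrics-server                   # the in-cluster Service fronting metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100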
[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
...

Resource metrics (metrics-server)
Visit https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
Download the files locally. Important: download the files that match your own Kubernetes cluster version (mine is v1.11.2), otherwise the metrics-server pod will not run after installation.
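One possible way to fetch the addon files for a matching release (the archive URL and paths below are only illustrative; substitute your own cluster version):

[root@master ~]# kubectl version --short                  # confirm the server version, e.g. v1.11.2
[root@master ~]# wget https://github.com/kubernetes/kubernetes/archive/v1.11.2.tar.gz
[root@master ~]# tar -xzf v1.11.2.tar.gz
[root@master ~]# cd kubernetes-1.11.2/cluster/addons/metrics-server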
[root@master metrics-server]# cd kubernetes-1.11.2/cluster/addons/metrics-server
[root@master metrics-server]# ls
auth-delegator.yaml   metrics-apiservice.yaml          metrics-server-service.yaml
auth-reader.yaml      metrics-server-deployment.yaml   resource-reader.yaml
Note: areas that need to be modified:
metrics-server-deployment.yaml:
# - --source=kubernetes.summary_api:''
- --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true

resource-reader.yaml:
  resources:
  - pods
  - nodes
  - namespaces
  - nodes/stats      # added

[root@master metrics-server]# kubectl apply -f ./
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.extensions/metrics-server-v0.3.1 created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

[root@master metrics-server]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE
metrics-server-v0.2.1-fd596d746-c7x6q   2/2     Running   0          1m    10.244.2.49   node2

[root@master metrics-server]# kubectl api-versions
...
metrics.k8s.io/v1beta1
metrics.k8s.io now appears in the kubectl api-versions output.
[root@master ~]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080

[root@master ~]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {"name": "nodes", "singularName": "", "namespaced": false, "kind": "NodeMetrics", "verbs": ["get", "list"]},
    {"name": "pods", "singularName": "", "namespaced": true, "kind": "PodMetrics", "verbs": ["get", "list"]}
  ]
}

[root@master metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {"selfLink": "/apis/metrics.k8s.io/v1beta1/pods"},
  "items": [
    {
      "metadata": {"name": "pod1", "namespace": "dev", "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/dev/pods/pod1", "creationTimestamp": "2018-10-15T09:26:57Z"},
      "timestamp": "2018-10-15T09:26:00Z",
      "window": "1m0s",
      "containers": [{"name": "myapp", "usage": {"cpu": "0", "memory": "2940Ki"}}]
    },
    {
      "metadata": {"name": "rook-ceph-osd-0-b9b94dc6c-ffs8z", "namespace": "rook-ceph", "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/rook-ceph/pods/rook-ceph-osd-0-b9b94dc6c-ffs8z", "creationTimestamp": "2018-10-15T09:26:57Z"},
      "timestamp": "2018-10-15T09:26:00Z",
      "window": "1m0s",
      "containers": [{
...

[root@master metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"},
  "items": [
    {
      "metadata": {"name": "node2", "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node2", "creationTimestamp": "2018-10-15T09:27:26Z"},
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {"cpu": "90m", "memory": "1172044Ki"}
    },
    {
      "metadata": {"name": "master", "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master", "creationTimestamp": "2018-10-15T09:27:26Z"},
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {"cpu": "186m", "memory": "1582972Ki"}
    },
    {
      "metadata": {"name": "node1", "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node1", "creationTimestamp": "2018-10-15T09:27:26Z"},
      "timestamp": "2018-10-15T09:27:00Z",
      "window": "1m0s",
      "usage": {"cpu": "68m", "memory": "1079332Ki"}
    }
  ]
}
You can see that there is data under items, which means the resource usage of each node and pod is being collected. If you don't see any data, wait a little longer. If items is still empty after a long wait, check the metrics-server container log for errors. The log can be viewed like this:
[root@master metrics-server]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
metrics-server-v0.2.1-84678c956-jdtr5   2/2     Running   0          14m

[root@master metrics-server]# kubectl logs metrics-server-v0.2.1-84678c956-jdtr5 -c metrics-server -n kube-system
I1015 09:26:57.117323       1 reststorage.go:93]  No metrics for pod rook-ceph/rook-ceph-osd-prepare-node1-8r6lz
I1015 09:26:57.117336       1 reststorage.go:140] No metrics for container rook-ceph-osd in pod rook-ceph/rook-ceph-osd-prepare-node2-vnr97
I1015 09:26:57.117336       1 reststorage.go:93]  No metrics for pod rook-ceph/rook-ceph-osd-prepare-node2-vnr97
This way, the kubectl top command can be used:
[root@master ~]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   131m         3%     1716Mi          46%
node1    68m          1%     1169Mi          31%
node2    96m          2%     1236Mi          33%

[root@master manifests]# kubectl top pods
NAME                            CPU(cores)   MEMORY(bytes)
myapp-deploy-69b47bc96d-dfpvp   0m           2Mi
myapp-deploy-69b47bc96d-g9kkz   0m           2Mi

[root@master manifests]# kubectl top pods -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
canal-4h3ww                             11m          49Mi
canal-6tdxn                             11m          49Mi
canal-z2tp4                             11m          43Mi
coredns-78fcdf6894-2l2cf                1m           9Mi
coredns-78fcdf6894-dkkfq                1m           10Mi
etcd-master                             14m          242Mi
kube-apiserver-master                   26m          527Mi
kube-controller-manager-master          20m          68Mi
kube-flannel-ds-amd64-6zqzr             2m           15Mi
kube-flannel-ds-amd64-7qtcl             2m           17Mi
kube-flannel-ds-amd64-kpctn             2m           18Mi
kube-proxy-9snbs                        2m           16Mi
kube-proxy-psmxj                        2m           18Mi
kube-proxy-tc8g6                        2m           17Mi
kube-scheduler-master                   6m           16Mi
kubernetes-dashboard-767dc7d4d-4mq9z    0m           12Mi
metrics-server-v0.2.1-84678c956-jdtr5   0m           29Mi

Custom metrics (Prometheus)
As you can see, metrics-server is now working properly. However, it can only report CPU and memory; it cannot serve other, user-defined metrics. For that we need another component: Prometheus.
Deploying Prometheus is considerably more involved. The moving parts are:
node_exporter is the agent that runs on each node and exposes host-level metrics.
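Once the node_exporter pods are running, you can peek at the raw metrics they expose (the node IP below is illustrative; node_exporter listens on port 9100 by default, and metric names vary by exporter version):

[root@master ~]# curl -s http://172.16.1.101:9100/metrics | head
# HELP node_cpu_seconds_total Seconds the cpus spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 123456.78
...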
PromQL is the query language (analogous to SQL) used to pull data out of Prometheus.
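For example, a PromQL query like the one below (an illustrative query against cAdvisor/kubelet metrics; depending on the version the pod label may be pod or pod_name) sums recent CPU usage per pod in one namespace:

sum(rate(container_cpu_usage_seconds_total{namespace="dev"}[2m])) by (pod_name)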
k8s-prometheus-adapter: Kubernetes cannot parse Prometheus metrics directly, so the adapter is needed to convert them into an API that Kubernetes can consume.
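The conversion is driven by rules in the adapter's config map. A minimal, illustrative rule (the metric and label names here are assumptions, but the field layout follows the adapter's rule syntax) looks roughly like this:

rules:
- seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
  resources:
    overrides:
      kubernetes_namespace: {resource: "namespace"}
      kubernetes_pod_name: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'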
kube-state-metrics consolidates cluster object state (deployments, pods, and so on) into metrics that Prometheus can scrape.
Let's start the deployment.
Visit https://github.com/ikubernetes/k8s-prom
[root@master pro]# git clone https://github.com/iKubernetes/k8s-prom.git
First create a namespace called prom:
[root@master k8s-prom]# kubectl apply -f namespace.yaml
namespace/prom created
Deploy node_exporter:
[root@master k8s-prom]# cd node_exporter/
[root@master node_exporter]# ls
node-exporter-ds.yaml   node-exporter-svc.yaml

[root@master node_exporter]# kubectl apply -f .
daemonset.apps/prometheus-node-exporter created
service/prometheus-node-exporter created

[root@master node_exporter]# kubectl get pods -n prom
NAME                             READY   STATUS    RESTARTS   AGE
prometheus-node-exporter-dmmjj   1/1     Running   0          7m
prometheus-node-exporter-ghz2l   1/1     Running   0          7m
prometheus-node-exporter-zt2lw   1/1     Running   0          7m
Deploy prometheus:
[root@master k8s-prom]# cd prometheus/
[root@master prometheus]# ls
prometheus-cfg.yaml   prometheus-deploy.yaml   prometheus-rbac.yaml   prometheus-svc.yaml

[root@master prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus-server created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
Look at all the resources in the prom namespace:
[root@master prometheus]# kubectl get all -n prom
NAME                                     READY   STATUS    RESTARTS   AGE
pod/prometheus-node-exporter-dmmjj       1/1     Running   0          10m
pod/prometheus-node-exporter-ghz2l       1/1     Running   0          10m
pod/prometheus-node-exporter-zt2lw       1/1     Running   0          10m
pod/prometheus-server-65f5d59585-6l8m8   1/1     Running   0          55s

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/prometheus                 NodePort    10.111.127.64   <none>        9090:30090/TCP   56s
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         10m

NAME                                      DESIRED   CURRENT   READY   AGE
daemonset.apps/prometheus-node-exporter   3         3         3       10m

NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-server   1         1         1            1           56s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-server-65f5d59585   1         1         1       56s
As shown above, the Prometheus application running in the pod can be reached through NodePort 30090 on any node.
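Besides the web UI, Prometheus's HTTP API can be queried through the same NodePort, which is handy for a quick sanity check (the node IP is illustrative and the output is truncated):

[root@master ~]# curl -s 'http://172.16.1.100:30090/api/v1/query?query=up'
{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"up", ...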
It is best to back Prometheus with a PVC; otherwise the monitoring data is lost whenever the pod restarts.
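A rough sketch of what that could look like (the claim name, size, and volume name are assumptions; the stock manifest typically keeps data in an emptyDir, which is what you would replace):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data                  # hypothetical claim name
  namespace: prom
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi

# then, in prometheus-deploy.yaml, point the data volume at the claim instead of an emptyDir:
#   volumes:
#   - name: prometheus-storage-volume    # use whatever name the stock manifest gives the data volume
#     persistentVolumeClaim:
#       claimName: prometheus-data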
Deploy kube-state-metrics to consolidate data:
[root@master k8s-prom]# cd kube-state-metrics/
[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml   kube-state-metrics-rbac.yaml   kube-state-metrics-svc.yaml

[root@master kube-state-metrics]# kubectl apply -f .
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created

[root@master kube-state-metrics]# kubectl get all -n prom
NAME                                      READY   STATUS    RESTARTS   AGE
pod/kube-state-metrics-58dffdf67d-v9klh   1/1     Running   0          14m
...

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kube-state-metrics   ClusterIP   10.111.41.139   <none>        8080/TCP   14m
...
Deploy k8s-prometheus-adapter, which requires a self-signed certificate:
[root@master k8s-prometheus-adapter]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077; openssl genrsa -out serving.key 2048)
Generating RSA private key, 2048 bit long modulus
...............+++
....+++
e is 65537 (0x10001)
Generate a certificate signing request:
[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"
Sign the certificate:
[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key
Create a secret from the certificate and key:
[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom
secret/cm-adapter-serving-certs created
Note: cm-adapter-serving-certs is the secret name referenced in the custom-metrics-apiserver-deployment.yaml file.
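For reference, the relevant fragment of custom-metrics-apiserver-deployment.yaml looks roughly like this (a sketch based on the upstream manifest; the mount path and volume name may differ in your copy):

        volumeMounts:
        - mountPath: /var/run/serving-cert
          name: volume-serving-cert
          readOnly: true
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: cm-adapter-serving-certs     # must match the secret created above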
[root@master pki]# kubectl get secrets -n prom
NAME                             TYPE                                  DATA   AGE
cm-adapter-serving-certs         Opaque                                2      51s
default-token-knsbg              kubernetes.io/service-account-token   3      4h
kube-state-metrics-token-sccdf   kubernetes.io/service-account-token   3      3h
prometheus-token-nqzbz           kubernetes.io/service-account-token   3      3h
Deploy k8s-prometheus-adapter:
[root@master k8s-prom]# cd k8s-prometheus-adapter/
[root@master k8s-prometheus-adapter]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml    custom-metrics-apiserver-service.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml               custom-metrics-apiservice.yaml
custom-metrics-apiserver-deployment.yaml                             custom-metrics-cluster-role.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml   custom-metrics-resource-reader-cluster-role.yaml
custom-metrics-apiserver-service-account.yaml                        hpa-custom-metrics-cluster-role-binding.yaml
Because k8s v1.11.2 is not compatible with the bundled version of k8s-prometheus-adapter, download the latest custom-metrics-apiserver-deployment.yaml from https://github.com/DirectXMan12/k8s-prometheus-adapter/tree/master/deploy/manifests and change its namespace to prom; likewise download custom-metrics-config-map.yaml and change its namespace to prom.
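A possible way to do that from the command line (the raw URLs are illustrative, and the sed pattern assumes the upstream manifests use the custom-metrics namespace; double-check the files before applying them):

[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml
[root@master k8s-prometheus-adapter]# sed -i 's/namespace: custom-metrics/namespace: prom/' custom-metrics-apiserver-deployment.yaml custom-metrics-config-map.yaml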
[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created

[root@master k8s-prometheus-adapter]# kubectl get all -n prom
NAME                                           READY   STATUS    RESTARTS   AGE
pod/custom-metrics-apiserver-65f545496-64lsz   1/1     Running   0          6m
pod/kube-state-metrics-58dffdf67d-v9klh        1/1     Running   0          4h
pod/prometheus-node-exporter-dmmjj             1/1     Running   0          4h
pod/prometheus-node-exporter-ghz2l             1/1     Running   0          4h
pod/prometheus-node-exporter-zt2lw             1/1     Running   0          4h
pod/prometheus-server-65f5d59585-6l8m8         1/1     Running   0          4h

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/custom-metrics-apiserver   ClusterIP   10.103.87.246   <none>        443/TCP          36m
service/kube-state-metrics         ClusterIP   10.111.41.139   <none>        8080/TCP         4h
service/prometheus                 NodePort    10.111.127.64   <none>        9090:30090/TCP   4h
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         4h

NAME                                      DESIRED   CURRENT   READY   AGE
daemonset.apps/prometheus-node-exporter   3         3         3       4h

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/custom-metrics-apiserver   1         1         1            1           36m
deployment.apps/kube-state-metrics         1         1         1            1           4h
deployment.apps/prometheus-server          1         1         1            1           4h

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/custom-metrics-apiserver-5f6b4d857d   0         0         0       36m
replicaset.apps/custom-metrics-apiserver-65f545496    1         1         1       6m
replicaset.apps/custom-metrics-apiserver-86ccf774d5   0         0         0       17m
replicaset.apps/kube-state-metrics-58dffdf67d         1         1         1       4h
replicaset.apps/prometheus-server-65f5d59585          1         1         1       4h
Finally, you see that all the resources in the prom namespace are in running state.
[root@master k8s-prometheus-adapter]# kubectl api-versions
...
custom.metrics.k8s.io/v1beta1
...
The custom.metrics.k8s.io/v1beta1 API group is now available.
Open a proxy to the API server:
[root@master k8s-prometheus-adapter]# kubectl proxy --port=8080
You can see the metric data:
[root@master pki]# curl http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/
...
    {
      "name": "pods/ceph_rocksdb_submit_transaction_sync",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": ["get"]
    },
    {
      "name": "jobs.batch/kube_deployment_created",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": ["get"]
    },
    {
      "name": "jobs.batch/kube_pod_owner",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": ["get"]
    },
...
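You can also ask for a single metric of a specific object through the same API; for example (the namespace and metric name below are illustrative, and the pods/* form requests the metric across all pods in the namespace):

[root@master pki]# curl http://localhost:8080/apis/custom.metrics.k8s.io/v1beta1/namespaces/prom/pods/*/http_requests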
Now we can happily create HPA (horizontal Pod auto-scaling).
In addition, prometheus can be integrated with grafana. Here are the steps.
First, download grafana.yaml from https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/grafana.yaml:
[root@master pro] # wget
Modify the contents of the grafana.yaml file:
1) Change the namespace from kube-system to prom (it appears in two places).
2) Comment out the INFLUXDB_HOST entry under env:
   #        - name: INFLUXDB_HOST
   #          value: monitoring-influxdb
3) Add type: NodePort to the Service:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort

[root@master pro]# kubectl apply -f grafana.yaml
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created

[root@master pro]# kubectl get pods -n prom
NAME                                 READY   STATUS    RESTARTS   AGE
monitoring-grafana-ffb4d59bd-gdbsk   1/1     Running   0          5s
The grafana pod is now running.
[root@master pro]# kubectl get svc -n prom
NAME                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
monitoring-grafana   NodePort   10.106.164.205   <none>        80:32659/TCP   19m
We can visit the host ip: http://172.16.1.100:32659
Then, you can see the corresponding data from the interface.
Log in to the following website to download a template for grafana monitoring k8s-prometheus:
Then import the template downloaded above in the interface of grafana:
After importing the template, you can see the monitoring data:
HPA (Horizontal Pod Autoscaler)
When pods come under heavy load, the HPA automatically scales the number of pods according to the load so that the pressure is spread evenly.
HPA currently comes in two API versions; v1 supports only core metrics, so pods can be scaled only on CPU utilization.
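For reference, the controller's scaling decision comes down to a simple ratio (the standard HPA formula, shown here only as an illustration):

desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)
# e.g. 1 replica at 98% CPU against a 60% target gives ceil(1 * 98 / 60) = 2 replicas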
[root@master pro]# kubectl explain hpa.spec.scaleTargetRef
scaleTargetRef: the target object whose metrics are used to calculate the desired pod count

[root@master pro]# kubectl api-versions | grep auto
autoscaling/v1
autoscaling/v2beta1
As shown above, both autoscaling/v1 and autoscaling/v2beta1 are supported.
Let's create a myapp deployment with resource requests and limits from the command line:
[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
service/myapp created
deployment.apps/myapp created

[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6985749785-fcvwn   1/1     Running   0          58s
Next, we make the myapp pods scale horizontally and automatically with kubectl autoscale, which in effect creates an HPA controller for the deployment.
[root@master ~]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60
horizontalpodautoscaler.autoscaling/myapp autoscaled
--min: the minimum number of pods
--max: the maximum number of pods
--cpu-percent: the target CPU utilization percentage
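For comparison, roughly the same thing can be written as an autoscaling/v1 manifest (a sketch; the name and numbers simply mirror the command above):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 60    # same target as --cpu-percent=60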
[root@master ~]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   0%/60%    1         8         1          4m

[root@master ~]# kubectl get svc
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
myapp   ClusterIP   10.105.235.197   <none>        80/TCP    19m
Next, change the service type to NodePort:
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type":"NodePort"}}'
service/myapp patched

[root@master ~]# kubectl get svc
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
myapp   NodePort   10.105.235.197   <none>        80:31990/TCP   22m

[root@master ~]# yum install httpd-tools      # installs the ab stress-testing tool

[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-6985749785-fcvwn   1/1     Running   0          25m   10.244.2.84   node2
Start the stress test with the ab tool
[root@master ~]# ab -c 1000 -n 5000000 http://172.16.1.100:31990/index.html
This is ApacheBench, Version 2.3
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.1.100 (be patient)
After a while, the pods' CPU utilization reaches 98%, so the deployment needs to scale out to 2 pods:
[root@master ~]# kubectl describe hpa
...
resource cpu on pods (as a percentage of request):  98% (49m) / 60%
Deployment pods:                                    1 current / 2 desired

[root@master ~]# kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
myapp-6985749785-fcvwn   49m          3Mi             # the cpu limit we set was 50m

[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-6985749785-fcvwn   1/1     Running   0          32m   10.244.2.84    node2
myapp-6985749785-sr4qv   1/1     Running   0          2m    10.244.1.105   node1
Above we can see that it has automatically scaled out to 2 pods. Wait a bit longer, and as the CPU pressure keeps climbing you will see it scale to 4 or more pods:
[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-6985749785-2mjrd   1/1     Running   0          1m    10.244.1.107   node1
myapp-6985749785-bgz6p   1/1     Running   0          1m    10.244.1.108   node1
myapp-6985749785-fcvwn   1/1     Running   0          35m   10.244.2.84    node2
myapp-6985749785-sr4qv   1/1     Running   0          5m    10.244.1.105   node1
Once the stress test stops, the number of pods shrinks back to normal.
What we used above is horizontal pod autoscaling with HPA v1, which, as noted earlier, can only scale pods on CPU utilization.
Now let's look at HPA v2, which can scale pods horizontally on additional and custom metrics.
Before testing HPA v2, delete the HPA v1 object created earlier to avoid conflicts:
[root@master hpa]# kubectl delete hpa myapp
horizontalpodautoscaler.autoscaling "myapp" deleted
All right, let's create an HPA v2 object:
[root@master hpa]# cat hpa-v2-demo.yaml
apiVersion: autoscaling/v2beta1          # the hpa v2 API version
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:                        # which object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1                         # minimum number of replicas
  maxReplicas: 10
  metrics:                               # which metrics to evaluate
  - type: Resource                       # evaluate based on a resource metric
    resource:
      name: cpu
      targetAverageUtilization: 55       # scale out when average pod CPU usage exceeds 55%
  - type: Resource
    resource:
      name: memory                       # unlike v1, v2 can also evaluate memory
      targetAverageValue: 50Mi           # scale out when average pod memory usage exceeds 50Mi

[root@master hpa]# kubectl apply -f hpa-v2-demo.yaml
horizontalpodautoscaler.autoscaling/myapp-hpa-v2 created

[root@master hpa]# kubectl get hpa
NAME           REFERENCE          TARGETS                MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp   3723264/50Mi, 0%/55%   1         10        1          37s
We see that there is only one pod.
[root@master hpa]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-6985749785-fcvwn   1/1     Running   0          57m   10.244.2.84   node2
Start the stress test:
[root@master ~]# ab -c 100 -n 5000000 http://172.16.1.100:31990/index.html
Watch what HPA v2 detects:
[root@master hpa]# kubectl describe hpa
Metrics:                                              (current / target)
  resource memory on pods:                            3756032 / 50Mi
  resource cpu on pods (as a percentage of request):  82% (41m) / 55%
Min replicas:                                         1
Max replicas:                                         10
Deployment pods:                                      1 current / 2 desired

[root@master hpa]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-6985749785-8frq4   1/1     Running   0          1m    10.244.1.109   node1
myapp-6985749785-fcvwn   1/1     Running   0          1h    10.244.2.84    node2
We can see that it has automatically scaled out to 2 pods. Once the stress test stops, the number of pods shrinks back to normal.
With HPA v2 we can scale the number of pods not only on CPU and memory utilization, but also on custom metrics such as the number of concurrent HTTP requests.
For example:
[root@master hpa]# cat hpa-v2-custom.yaml
apiVersion: autoscaling/v2beta1          # the hpa v2 API version
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:                        # which object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1                         # minimum number of replicas
  maxReplicas: 10
  metrics:                               # which metrics to evaluate
  - type: Pods                           # evaluate based on a per-pod metric
    pods:
      metricName: http_requests          # a custom metric served by the custom metrics API
      targetAverageValue: 800m           # target average http_requests per pod (intended here to represent a concurrency of 800)

Thank you for reading! That is all for this walkthrough of the resource metrics API and the custom metrics API. I hope the content above has been helpful; if you found it useful, feel free to share it with others.