Manually monitoring pods and adjusting replica counts by hand would be an enormous workload, but Kubernetes already provides a mechanism to handle this automatically.
# HPA (Horizontal Pod Autoscaler) controller workflow (v1)
For a more detailed introduction, refer to the official Horizontal Pod Autoscaler documentation.
Process flow
1. Create an HPA resource object, associate it with the target resource (e.g. a Deployment), and set the target CPU utilization threshold and the minimum and maximum replica counts. Prerequisite: the pods must declare a CPU resource request (the requests field), otherwise the HPA cannot work.
2. The HPA controller reads resource usage from the metrics API every 15 seconds (configurable via the controller-manager flag --horizontal-pod-autoscaler-sync-period, default 15s).
3. The HPA controller compares the measured usage with the target set in the HPA object and calculates how many replicas are needed.
4. It adjusts the replica count according to the result, so that the per-pod CPU utilization approaches the target as closely as possible, while never going outside the configured minimum and maximum.
5. Steps 2, 3, and 4 repeat in a loop.
The controller observes utilization and makes decisions periodically, and executing a decision takes time. During a scaling operation the metrics are not static; they may rise or fall, and reacting too frequently would cause rapid jitter in resource usage. The controller therefore does not make a new decision for a period after each one: 3 minutes after a scale-up and 5 minutes after a scale-down (adjustable with --horizontal-pod-autoscaler-upscale-delay and --horizontal-pod-autoscaler-downscale-delay). Scaling is also not done in one jump but converges gradually toward the calculated value: each adjustment is at most double, and at least half, the current replica count (see the sketch after this list).
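As a rough sketch of the calculation described above (the replica formula is the one given in the official HPA documentation; the flags are the older kube-controller-manager flags referred to in this article, and newer releases replace the two delay flags with --horizontal-pod-autoscaler-downscale-stabilization):
# desiredReplicas = ceil( currentReplicas * currentMetricValue / targetMetricValue )
# e.g. 1 replica running at 1000% of its CPU request against a 50% target:
#      ceil(1 * 1000 / 50) = 20, which is then clamped to maxReplicas and to the per-step growth bound.
# Relevant kube-controller-manager flags (older versions; the values shown are the defaults):
kube-controller-manager \
  --horizontal-pod-autoscaler-sync-period=15s \
  --horizontal-pod-autoscaler-upscale-delay=3m0s \
  --horizontal-pod-autoscaler-downscale-delay=5m0s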
This record verifies the HPA functionality of Kubernetes, following the official Kubernetes documentation and using the php-apache image it provides for testing.
Metrics server
The Kubernetes cluster must have metrics-server deployed; refer to the documentation on deploying metrics-server in Kubernetes.
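Before creating the HPA it is worth confirming that metrics-server is actually serving resource metrics. A quick check with standard kubectl commands (v1beta1.metrics.k8s.io is the APIService name registered by a typical metrics-server deployment):
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes
kubectl top pods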
# Configure HPA to scale out the application
1. Configure and start the Deployment: php-apache
cat hpa-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
  labels:
    app: hpa-test
spec:
  replicas: 1
  selector:
    matchLabels:
      name: php-apache
      app: hpa-test
  template:
    metadata:
      labels:
        name: php-apache
        app: hpa-test
    spec:
      containers:
      - name: php-apache
        image: mirrorgooglecontainers/hpa-example:latest
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 0.005
            memory: 64Mi
          limits:
            cpu: 0.05
            memory: 128Mi
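The HPA computes utilization as a percentage of the container's CPU request, so it is worth verifying that the request is actually set on the Deployment once it has been applied (step 4 below); a minimal check with standard kubectl:
kubectl get deployment php-apache -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}'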
2. Configure the Service: php-apache-svc
cat hpa-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: php-apache-svc
  labels:
    app: hpa-test
spec:
  selector:
    name: php-apache
    app: hpa-test
  ports:
  - name: http
    port: 80
    protocol: TCP
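The load generator later in this article reaches the Service at http://php-apache-svc.default.svc.cluster.local, so after the resources are created it is worth confirming that the Service has picked up the php-apache pod as an endpoint (standard kubectl commands):
kubectl get svc php-apache-svc
kubectl get endpoints php-apache-svc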
3. Configure the HPA: php-apache-hpa
cat hpa-hpa.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache-hpa
  labels:
    app: hpa-test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
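As an aside, the same autoscaling/v1 HPA can also be created imperatively, which is the form used in the official walkthrough; note that this variant names the HPA after the Deployment (php-apache) rather than php-apache-hpa:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10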
4. Start the Deployment, Service, and HPA, and verify
kubectl apply -f ./
deployment.apps/php-apache configured
service/php-apache-svc unchanged
horizontalpodautoscaler.autoscaling/php-apache-hpa unchanged
kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/php-apache-6b9f498dc4-vwlfr   1/1     Running   0          3h24m

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   7d20h
service/php-apache-svc   ClusterIP   10.104.34.168   <none>        80/TCP    3h24m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/php-apache   1/1     1            1           3h24m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/php-apache-6b9f498dc4   1         1         1       3h24m

NAME                                                 REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/php-apache-hpa   Deployment/php-apache   20%/50%   1         10        1          3h24m
# Stress test and observe the effect of HPA
> 1. Start a stress-test client to generate continuous load
kubectl run --generator=run-pod/v1 -i --tty load-generator --image=busybox /bin/sh
while true; do wget -q -O- http://php-apache-svc.default.svc.cluster.local; done
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!
> 2. Run the stress test and observe the results
kubectl get hpa
NAME             REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   800%/50%   1         10        1          27m
kubectl get hpa
NAME             REFERENCE               TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   1000%/50%   1         10        2          27m
kubectl get hpa
NAME             REFERENCE               TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   1000%/50%   1         10        4          27m
kubectl get hpa
NAME             REFERENCE               TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   1000%/50%   1         10        8          27m
kubectl get hpa
NAME             REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   120%/50%   1         10        10         27m
kubectl get deployment php-apache
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   10/10   10           10          28m
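Each scaling step is also recorded as an event on the HPA object; the Events section of the following output typically lists a SuccessfulRescale entry for every resize (the output is omitted here since it was not captured in this run):
kubectl describe hpa php-apache-hpa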
# Conclusion: under load, the CPU utilization of the pods in the Deployment rises above the 50% target set in the HPA, and the ReplicaSet is scaled up step by step, roughly doubling each time, until the upper limit of 10 replicas is reached and expansion stops. As replicas are added, utilization measured against the pods' CPU requests gradually levels off.
> 3. Stop the stress test
while true; do wget -q -O- http://php-apache-svc.default.svc.cluster.local; done
Can't connect to remote host (10.104.63.73): Connection refused
OK!OK! ^C
/ # exit
Session ended, resume using 'kubectl attach load-generator -c load-generator -i -t' command when the pod is running
# CPU usage returns to its initial value (20% of the request). The controller keeps observing periodically and scales the replicas down step by step toward the minimum.
kubectl get hpa
NAME             REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   20%/50%   1         10        10         36m
# After waiting a few minutes (the scale-down delay defaults to 5 minutes):
kubectl get hpa
NAME             REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   20%/50%   1         10        4          41m
# Wait a few more minutes (default 5 minutes):
kubectl get hpa
NAME             REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   20%/50%   1         10        2          46m
# Wait a few more minutes (default 5 minutes); the replica count stabilizes at the minimum
kubectl get hpa
NAME             REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache-hpa   Deployment/php-apache   20%/50%   1         10        1          53m
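Instead of polling with repeated kubectl get hpa calls as above, the scale-down can also be followed continuously with the --watch flag:
kubectl get hpa php-apache-hpa -w
kubectl get deployment php-apache -w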
# Other
The tests above verify the HPA functionality using the autoscaling/v1 API. Running kubectl api-versions shows that three autoscaling versions are available; v1 supports only CPU, while v2beta2 supports multiple metrics (CPU and memory) as well as custom metrics. The same HPA written against autoscaling/v2beta2 looks like this:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache-hpa
  labels:
    app: hpa-test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
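Because autoscaling/v2beta2 takes a list of metrics, a memory target can be added alongside CPU; a sketch of the metrics section only (the 70% figure is an arbitrary illustrative value, not from the original test):
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70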