SLTechnology News & Howtos > Servers (updated 2025-01-16)
Shulou (Shulou.com) 05/31 Report --
This article explains in detail how to use tke-autoscaling-placeholder to achieve second-level elastic scaling. I am sharing it for your reference, and I hope you will come away with a good understanding of the topic.
Background
When a TKE cluster is configured with a node pool and auto scaling is enabled, node scale-out is triggered automatically when node resources are insufficient (machines are purchased automatically and joined to the cluster). However, this scale-out process takes time to complete, and in scenarios with sudden traffic spikes it may be too slow, affecting the business. tke-autoscaling-placeholder can be used to achieve second-level scaling on TKE to cope with such traffic spikes.
What is the principle?
tke-autoscaling-placeholder uses low-priority Pods to occupy resources in advance (pause containers with resource requests, which actually consume almost nothing), reserving a portion of cluster resources as a buffer for high-priority services that may face sudden traffic spikes. When scale-out is needed, the high-priority Pods can quickly preempt the resources of the low-priority Pods and be scheduled, while the low-priority tke-autoscaling-placeholder Pods are "squeezed out" and become Pending. If a node pool with auto scaling is configured, this in turn triggers node scale-out. With some resources held as a buffer, even if node expansion is slow, a number of Pods can still be scaled out and scheduled quickly, achieving second-level scaling. To adjust the amount of reserved buffer resources, tune the requests or replicas of tke-autoscaling-placeholder according to actual demand.
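The mechanism described above can be sketched as a plain Deployment of pause containers bound to a low PriorityClass. This is only an illustrative approximation of what the application deploys, not its exact manifest; the names and resource values here are assumptions based on the defaults:

```yaml
# Illustrative sketch of the placeholder mechanism (not the chart's exact manifest).
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 0                  # far below the business Pods' class, so it is preempted first
globalDefault: false
description: "low priority class for placeholder pods"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tke-autoscaling-placeholder
spec:
  replicas: 10
  selector:
    matchLabels:
      app: tke-autoscaling-placeholder
  template:
    metadata:
      labels:
        app: tke-autoscaling-placeholder
    spec:
      priorityClassName: low-priority
      containers:
      - name: placeholder
        # pause does nothing, so the requests below are a pure reservation
        image: ccr.ccs.tencentyun.com/library/pause:latest
        resources:
          requests:
            cpu: 300m
            memory: 600Mi
```

Because the pause container does no work, the scheduler counts its requests against node capacity while almost no CPU or memory is actually consumed.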
What are the restrictions on use?
Cluster version 1.18 or above is required to use this application.
How to use it?
Install tke-autoscaling-placeholder
Find tke-autoscaling-placeholder in the application market, click to enter the application details, and then click to create an application:
Select the cluster ID and namespace to deploy to. The most important configuration parameters of the application are replicaCount and resources.request, which represent the number of tke-autoscaling-placeholder replicas and the resources occupied by each replica, respectively. Together, they determine the size of the buffer, which can be estimated from the amount of extra resources a sudden traffic spike would require.
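As a quick sizing aid: the total reserved buffer is simply the replica count multiplied by the per-replica request. A small shell sketch using the default values (10 replicas at 300m CPU / 600Mi memory each, i.e. 3 CPU cores and roughly 6 GiB in total):

```shell
#!/bin/sh
# Buffer sizing: total reservation = replicas x per-replica request.
replicas=10   # replicaCount
cpu_m=300     # resources.requests.cpu, in millicores
mem_mi=600    # resources.requests.memory, in MiB
echo "CPU buffer: $((replicas * cpu_m))m"       # prints: CPU buffer: 3000m
echo "Memory buffer: $((replicas * mem_mi))Mi"  # prints: Memory buffer: 6000Mi
```

Scale these numbers to match the extra capacity your traffic spikes actually need.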
Finally, click Create, and then check whether these resource-occupying Pods have started successfully:
$ kubectl get pod -n default
NAME                                          READY   STATUS    RESTARTS   AGE
tke-autoscaling-placeholder-b58fd9d5d-2p6ww   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-55jw7   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-6rq9r   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-7c95t   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-bfg8r   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-cfqt6   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-gmfmr   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-grwlh   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-ph7vl   1/1     Running   0          8s
tke-autoscaling-placeholder-b58fd9d5d-xmrmv   1/1     Running   0          8s
The complete configuration of tke-autoscaling-placeholder is shown in the following table:
| Parameter | Description | Default |
| --- | --- | --- |
| replicaCount | number of placeholder replicas | 10 |
| image | placeholder image address | ccr.ccs.tencentyun.com/library/pause:latest |
| resources.requests.cpu | CPU occupied by a single placeholder replica | 300m |
| resources.requests.memory | memory occupied by a single placeholder replica | 600Mi |
| lowPriorityClass.create | whether to create the low-priority PriorityClass (referenced by the placeholders) | true |
| lowPriorityClass.name | name of the low-priority PriorityClass | low-priority |
| nodeSelector | schedule placeholders only to nodes with specific labels | {} |
| tolerations | taints tolerated by the placeholders | [] |
| affinity | affinity configuration of the placeholders | {} |

Deploy high-priority Pods
The priority of tke-autoscaling-placeholder is very low. Our business Pods can specify a high-priority PriorityClass so that they preempt its resources and scale out quickly. If you do not have one yet, create it first:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "high priority class"
Set priorityClassName to the high-priority PriorityClass in our business Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 8
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      priorityClassName: high-priority # high-priority PriorityClass
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 400m
            memory: 800Mi
When cluster node resources are insufficient, the scaled-out high-priority business Pods preempt the resources of the low-priority tke-autoscaling-placeholder Pods and get scheduled, and the tke-autoscaling-placeholder Pods become Pending:
$ kubectl get pod -n default
NAME                    READY   STATUS    RESTARTS   AGE
nginx-bf79bbc8b-5kxcw   1/1     Running   0          23s

That's all on how to use tke-autoscaling-placeholder to achieve second-level elastic scaling. I hope the above content is of some help to you. If you found the article useful, feel free to share it for more people to see.