This article explains in detail how to understand Kubernetes CPU resource limits. The content is shared as a reference, and I hope you will have a solid understanding of the topic after reading it.
Recall the difference between requests and limits: requests tell the scheduler how many resources a Pod needs in order to be scheduled, while limits tell the Linux kernel when a process can be throttled or killed to reclaim resources. In this article I will analyze CPU resource limits in detail.
1. CPU limits
As I mentioned in the previous article, CPU resource limits are more complicated than memory limits, for reasons that will become clear below. Fortunately, CPU limits are controlled by the same cgroup mechanism as memory limits, so the ideas and tools from before still apply; we just need to pay attention to the differences. First, let's add CPU limits to the yaml from the previous example:
resources:
  requests:
    memory: 50Mi
    cpu: 50m
  limits:
    memory: 100Mi
    cpu: 100m
The unit suffix m means one thousandth of a core, so 1 Core = 1000m. This resource object therefore specifies that the container process needs 50/1000 of a core (5%) in order to be scheduled, and is allowed to use at most 100/1000 of a core (10%). Similarly, 2000m represents two full cores, which can also be written as 2 or 2.0. To understand how Docker and cgroup use these values to control the container, let's first create a Pod that only sets a CPU request:
$ kubectl run limit-test --image=busybox --requests "cpu=50m" --command -- /bin/sh -c "while true; do sleep 2; done"
deployment.apps "limit-test" created
We can verify with kubectl that the Pod was configured with a 50m CPU request:
$ kubectl get pods limit-test-5b4c495556-p2xkr -o=jsonpath='{.spec.containers[0].resources}'
map[requests:map[cpu:50m]]
We can also see that Docker configures the same resource limits for the container:
$ docker ps | grep busy | cut -d' ' -f1
f2321226620e
$ docker inspect f2321226620e --format '{{.HostConfig.CpuShares}}'
51
Why does it show 51 here instead of 50? Both the Linux cpu cgroup and Docker divide each CPU core into 1024 shares, whereas Kubernetes divides it into 1000, so the 50m request has to be rescaled to the 1024 base.
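As a quick sanity check of that conversion, here is the arithmetic in plain shell (nothing Kubernetes-specific, just rescaling 50m from a 1000-share base to the kernel's 1024-share base):
# 50m on Kubernetes' 1000-share scale, rescaled to the kernel's 1024-share scale
$ echo $(( 50 * 1024 / 1000 ))
51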
Shares sets a relative weight for CPU time across all cores; the default value is 1024. If there are two cgroups in the system, A and B, where A's shares value is 1024 and B's is 512, then A will get 1024 / (1024 + 512) ≈ 66% of the CPU while B gets about 33%. Shares has two characteristics:
If A is not busy and does not use its 66% of CPU time, the remaining CPU time is given to B by the system, so B's CPU utilization can exceed 33%.
If a new cgroup C is added with a shares value of 1024, A's share becomes 1024 / (1024 + 512 + 1024) ≈ 40% and B's becomes about 20%.
From the above two characteristics, we can see:
Shares has no effect when the CPU is idle; it only kicks in when the CPU is contended, which is an advantage.
Because shares is only a relative weight, you have to compare it with the values of every other cgroup to know your actual share, and on a machine running many containers the number of cgroups keeps changing, so that share changes too. You may set a high value, but someone else may set a higher one, so shares cannot precisely cap CPU usage.
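If you want to see shares in action outside of Kubernetes, here is a minimal sketch on a cgroup v1 host (mount points and permissions vary by distribution, so treat the paths as illustrative):
$ sudo mkdir /sys/fs/cgroup/cpu/A /sys/fs/cgroup/cpu/B
$ echo 1024 | sudo tee /sys/fs/cgroup/cpu/A/cpu.shares    # A gets ~66% only while both groups are busy
$ echo 512 | sudo tee /sys/fs/cgroup/cpu/B/cpu.shares     # B gets ~33%, but may use more whenever A is idle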
In the same way that Docker configures the memory cgroup of the container process when configuring the memory resource limit, Docker configures the cpu,cpuacct cgroup of the container process when setting the CPU resource limit:
$ ps ax | grep /bin/sh
60554 ?      Ss     0:00 /bin/sh -c while true; do sleep 2; done
$ sudo cat /proc/60554/cgroup
...
4:cpu,cpuacct:/kubepods/burstable/pode12b33b1-db07-11e8-b1e1-42010a800070/3be263e7a8372b12d2f8f8f9b4251f110b79c2a3bb9e6857b2f1473e640e8e75
$ ls -l /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pode12b33b1-db07-11e8-b1e1-42010a800070/3be263e7a8372b12d2f8f8f9b4251f110b79c2a3bb9e6857b2f1473e640e8e75
total 0
drwxr-xr-x 2 root root 0 Oct 28 23:19 .
drwxr-xr-x 4 root root 0 Oct 28 23:19 ..
-rw-r--r-- 1 root root 0 Oct 28 23:19 cpu.shares
The HostConfig.CpuShares attribute of the Docker container is mapped to the cpu.shares attribute of cgroup, which can be verified:
$ sudo cat /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podb5c03ddf-db10-11e8-b1e1-42010a800070/64b5f1b636dafe6635ddd321c5b36854a8add51931c7117025a694281fb11444/cpu.shares
51
You might be surprised that setting a CPU request propagates a value into cgroup, whereas in the previous article setting a memory request did not. This is because memory's soft-limit kernel feature is not useful to Kubernetes, while setting cpu.shares is. I will come back to why that is later. Now let's look at what happens when we also set a CPU limit:
$ kubectl run limit-test --image=busybox --requests "cpu=50m" --limits "cpu=100m" --command -- /bin/sh -c "while true; do sleep 2; done"
deployment.apps "limit-test" created
Once again use kubectl to verify our resource configuration:
$ kubectl get pods limit-test-5b4fb64549-qpd4n -o=jsonpath='{.spec.containers[0].resources}'
map[limits:map[cpu:100m] requests:map[cpu:50m]]
View the configuration of the corresponding Docker container:
$ docker ps | grep busy | cut -d' ' -f1
f2321226620e
$ docker inspect 472abbce32a5 --format '{{.HostConfig.CpuShares}} {{.HostConfig.CpuQuota}} {{.HostConfig.CpuPeriod}}'
51 10000 100000
You can clearly see that the CPU request corresponds to the Docker container's HostConfig.CpuShares property. The CPU limit is less obvious: it is controlled by two properties, HostConfig.CpuPeriod and HostConfig.CpuQuota. These two Docker properties map to two more properties of the process's cpu,cpuacct cgroup: cpu.cfs_period_us and cpu.cfs_quota_us. Let's take a look:
$ sudo cat /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod2f1b50b6-db13-11e8-b1e1-42010a800070/f0845c65c3073e0b7b0b95ce0c1eb27f69d12b1fe2382b50096c4b59e78cdf71/cpu.cfs_period_us
100000
$ sudo cat /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod2f1b50b6-db13-11e8-b1e1-42010a800070/f0845c65c3073e0b7b0b95ce0c1eb27f69d12b1fe2382b50096c4b59e78cdf71/cpu.cfs_quota_us
10000
As I said, these values are the same as those in the container configuration. But how are these two properties derived from the 100m CPU limit we set in the Pod, and how do they enforce that limit? CPU requests and CPU limits are implemented by two separate control systems. Requests use the cpu shares system: cpu shares divides each CPU core into 1024 slices and guarantees each process a proportional share of those slices. If there are 1024 slices in total and two processes each set cpu.shares to 512, they each get roughly half of the available CPU time. However, the cpu shares system cannot enforce an upper bound on CPU usage: if one process does not use its share, the other is free to consume the spare CPU time.
Around 2010, engineers at Google and elsewhere noticed this problem. To address it, a second, more capable control system was later added to the Linux kernel: CPU bandwidth control. Bandwidth control defines a period, normally 1/10 of a second (100000 microseconds), and a quota, the amount of CPU time the process is allowed to use within each period; together these two files set an upper bound on CPU usage. Both are expressed in microseconds (us). cfs_period_us can range from 1 millisecond (ms) to 1 second (s), and cfs_quota_us must be at least 1ms; if cfs_quota_us is set to -1 (the default), CPU time is not limited.
Here are a few examples:
# 1. Limit to 1 CPU (250ms of CPU time every 250ms)
$ echo 250000 > cpu.cfs_quota_us   /* quota = 250ms */
$ echo 250000 > cpu.cfs_period_us  /* period = 250ms */

# 2. Limit to 2 CPUs (cores) (1000ms of CPU time every 500ms, i.e. two full cores)
$ echo 1000000 > cpu.cfs_quota_us  /* quota = 1000ms */
$ echo 500000 > cpu.cfs_period_us  /* period = 500ms */

# 3. Limit to 20% of 1 CPU (10ms of CPU time every 50ms, i.e. 20% of one core)
$ echo 10000 > cpu.cfs_quota_us    /* quota = 10ms */
$ echo 50000 > cpu.cfs_period_us   /* period = 50ms */
In our example we set the Pod's CPU limit to 100m, which means 100/1000 of a core, i.e. 10000 of the 100000-microsecond CPU period. So that limit translates into cpu.cfs_period_us=100000 and cpu.cfs_quota_us=10000 in the cpu,cpuacct cgroup. By the way, cfs stands for Completely Fair Scheduler, the default CPU scheduler in Linux. There is also a real-time scheduler with its own corresponding quota value.
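The conversion itself is simple arithmetic; as a quick check in shell:
# 100m = 100/1000 of a core; with the default 100000us period the quota works out to:
$ echo $(( 100 * 100000 / 1000 ))
10000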
Now let's sum it up:
The CPU request set in Kubernetes ends up as the cgroup cpu.shares value, while the CPU limit ends up as the bandwidth control group's cpu.cfs_period_us and cpu.cfs_quota_us values. Like memory, the CPU request is mainly used to tell the scheduler how many cpu shares a node must have available before the Pod can be scheduled onto it.
Unlike a memory request, a CPU request also sets a cgroup property, which ensures the kernel actually grants that number of shares to the process.
CPU limits also differ from memory limits. If a container process exceeds its memory limit, it becomes a candidate for oom-killing. But a container process essentially can never exceed its CPU quota: the kernel's CFS bandwidth control throttles it instead, so a container is never evicted for trying to use more CPU time than it was allocated.
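If you want to check whether a container is actually being throttled by its quota, the cpu cgroup exposes a cpu.stat file; the path components below are placeholders for the pod UID and container ID on your node:
$ sudo cat /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/<pod-uid>/<container-id>/cpu.stat
# nr_periods     - number of enforcement periods that have elapsed
# nr_throttled   - number of periods in which the container hit its quota
# throttled_time - total time (in nanoseconds) the container spent throttled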
What happens if you don't set these properties on a container, or set them to inaccurate values? As with memory, if you set only limits and not requests, Kubernetes sets the CPU request to the same value as the limit, which is fine if you know exactly how much CPU time your workload needs. What if you set only a CPU request and no limit? In that case Kubernetes makes sure the Pod is scheduled to a suitable node and the node's kernel guarantees the Pod at least the cpu shares it requested, but the process is not prevented from using more CPU than requested. Setting neither requests nor limits is the worst case: the scheduler has no idea what the container needs, and the process's use of cpu shares is unbounded, which may adversely affect the node.
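As a quick illustration of the limits-only case, using the same (older) kubectl run flags as the examples above, Kubernetes should copy the limit into the request:
$ kubectl run limit-only --image=busybox --limits "cpu=100m" --command -- /bin/sh -c "while true; do sleep 2; done"
$ kubectl get pods                 # find the generated pod name, e.g. limit-only-xxxxxxxxxx-yyyyy
$ kubectl get pods <pod-name> -o=jsonpath='{.spec.containers[0].resources}'
# expect both limits and requests to show cpu: 100m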
Finally, I want to tell you that it is troublesome to configure these parameters manually for each pod. Kubernetes provides LimitRange resources that allow us to configure the default request and limit values of a namespace.
2. Default limits
You already know from the discussion above that omitting resource limits can negatively affect a Pod, so you might wish you could configure default request and limit values for a namespace, so that every newly created Pod gets them by default. Kubernetes lets us do exactly that, per namespace, with the LimitRange resource. To set default resource limits, create a LimitRange resource in the corresponding namespace. Here is an example:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit
spec:
  limits:
  - default:
      memory: 100Mi
      cpu: 100m
    defaultRequest:
      memory: 50Mi
      cpu: 50m
  - max:
      memory: 512Mi
      cpu: 500m
  - min:
      memory: 50Mi
      cpu: 50m
    type: Container
A few of these fields may be confusing, so let me break them down for you.
The default field under limits sets the default limits for each container, so any container that does not set its own limits is automatically given limits of 100Mi memory and 100m CPU.
The defaultRequest field sets the default requests for each container, so any container that does not set its own requests is automatically given requests of 50Mi memory and 50m CPU.
The max and min fields are special: if they are set, a Pod in this namespace whose limits or requests fall outside these upper and lower bounds will not be allowed to be created (see the sketch below). I haven't found much use for these two fields yet; if you have, please leave a comment and let me know.
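As a rough sketch of how max is enforced (assuming the default-limit LimitRange above has been created in the current namespace), a container whose CPU limit exceeds 500m should be refused by the LimitRanger admission check:
$ kubectl apply -f default-limit.yaml
$ kubectl run over-max --image=busybox --limits "cpu=600m" --command -- /bin/sh -c "while true; do sleep 2; done"
# the Pod is rejected because its 600m cpu limit exceeds the 500m max; with the old
# Deployment generator the error surfaces in the ReplicaSet's events rather than in kubectl output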
The default values in a LimitRange are ultimately applied by the LimitRanger admission-controller plugin in Kubernetes. The admission controller is a set of plugins that can modify a Pod's Spec after the API server accepts the object but before the Pod is created. The LimitRanger plugin checks each Pod, and if limits and requests are not set, it fills them in with the defaults from the LimitRange. You can tell that the LimitRanger plugin has set defaults on your Pod by looking at the Pod's annotations. For example:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container limit-test'
  name: limit-test-859d78bc65-g6657
  namespace: default
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do sleep 2; done
    image: busybox
    imagePullPolicy: Always
    name: limit-test
    resources:
      requests:
        cpu: 100m

That's all on how to understand Kubernetes CPU resource limits. I hope the content above is helpful and leaves you with a better grasp of the topic.