2025-04-07 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article introduces how to create and use Pod Priority in Kubernetes 1.8. Many people run into difficulties with these operations in practice, so let the editor walk you through how to handle such situations. I hope you read it carefully and get something out of it!
Scheduler updates in Kubernetes 1.8:
[Alpha] Support for defining a PriorityClass and assigning it to a Pod to set the Pod's priority
[Alpha] Support for preemptive scheduling based on Pod priority
[Alpha] The Node Controller can automatically add the corresponding Taints to a Node according to its Node Conditions
What is preemptive scheduling?
Before Kubernetes 1.8, when cluster resources were insufficient and a user submitted a new Pod creation request, the Pod stayed in the Pending state until some Node in the cluster had enough available resources. Starting with Kubernetes 1.8, in this situation the scheduler takes the Pod's priority into account: it selects the most suitable Node and preempts (evicts) lower-priority Pods on that Node to free resources for the higher-priority Pod. This priority-aware scheduling is Kubernetes's preemptive scheduling, called Preemption for short.
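The victim-selection idea described above can be illustrated with a toy Python sketch. This is not Kubernetes source code; the function, the single-resource model, and all names are simplifications invented for illustration: on a full node, evict lower-priority pods (lowest first) until enough resources are free for the pending pod.

```python
# Toy sketch of preemption victim selection (NOT the real scheduler code).
# Each pod is modeled as (name, priority, cpu); only one resource is tracked.

def pick_victims(running_pods, pending_pod, free_cpu):
    """Return names of pods to evict so pending_pod fits, or None if
    even evicting every lower-priority pod cannot free enough CPU."""
    victims = []
    # Only pods with strictly lower priority may be preempted;
    # consider the lowest-priority ones first.
    candidates = sorted(
        (p for p in running_pods if p[1] < pending_pod[1]),
        key=lambda p: p[1],
    )
    for name, _prio, cpu in candidates:
        if free_cpu >= pending_pod[2]:
            break
        victims.append(name)
        free_cpu += cpu
    return victims if free_cpu >= pending_pod[2] else None

running = [("web", 100, 2), ("batch", 0, 4), ("cache", 50, 2)]
pending = ("critical", 1000000, 5)
print(pick_victims(running, pending, free_cpu=0))  # ['batch', 'cache']
```

Note that the real scheduler weighs many more factors (multiple resources, affinity, node fitness), but the core trade is the same: lower-priority workloads make room for higher-priority ones.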
How to turn the feature on or off
In Kubernetes 1.8, Pod Priority and Preemption are Alpha features and are disabled by default. To use them, add the following flags to the apiserver and scheduler and restart them:
kube-apiserver --feature-gates=PodPriority=true --runtime-config=scheduling.k8s.io/v1alpha1=true
kube-scheduler --feature-gates=PodPriority=true
To disable the feature, remove these flags and restart.
A question arises: if I enable this feature, create some PriorityClasses, and assign them to some Pods, will there be a problem if I then disable the feature?
The answer is no! After disabling, the previously set Pod priority fields still exist but have no effect, and Preemption is turned off. Of course, you can no longer reference a PriorityClass from new Pods.
Create PriorityClass
After enabling the feature, the next step is to create a PriorityClass:
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
Note: PriorityClass is not namespaced; it is a global, cluster-scoped object. Therefore the namespace field cannot be set under metadata.
apiVersion: scheduling.k8s.io/v1alpha1, matching the runtime-config parameter configured when enabling the feature
metadata.name: the name of the PriorityClass
value: a 32-bit integer. The higher the value, the higher the priority. It must be less than or equal to 1 billion; values greater than 1 billion are reserved for critical system Pods in the cluster, and such Pods cannot be preempted.
globalDefault: true or false. Note that at most one PriorityClass may have this field set to true. If no PriorityClass has it set to true, the priority of a Pod that does not explicitly reference a PriorityClass is the lowest value, 0.
description: a free-form string for humans to read; Kubernetes does not process it.
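The two constraints above (the value cap and the single globalDefault) can be captured in a small Python sketch. This is purely illustrative; the function and constant names are invented for this example and are not API-server code:

```python
# Illustrative validation of the PriorityClass rules described above
# (NOT the real apiserver validation code).

HIGHEST_USER_PRIORITY = 1_000_000_000  # values above this are reserved for system pods

def validate_priority_classes(classes):
    """classes: list of dicts with 'name', 'value', 'globalDefault'.
    Returns a list of human-readable error strings (empty if valid)."""
    errors = []
    defaults = [c["name"] for c in classes if c.get("globalDefault")]
    if len(defaults) > 1:
        errors.append(f"multiple globalDefault classes: {defaults}")
    for c in classes:
        if c["value"] > HIGHEST_USER_PRIORITY:
            errors.append(f"{c['name']}: value exceeds the reserved threshold")
    return errors

good = [{"name": "high-priority", "value": 1000000, "globalDefault": False}]
print(validate_priority_classes(good))  # []
```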
Note:
A PriorityClass only affects Pods that have not yet been created. Once a Pod is created, the admission controller has already resolved the priorityClassName referenced in the Pod spec and written the corresponding PriorityClass's value into the Pod's priority field. This means that later changes to any field of a PriorityClass, including globalDefault, do not affect Pods that have already been created.
If you delete a PriorityClass, the priorities of Pods that already reference it are unaffected, but you can no longer use it when creating new Pods. This is fairly self-evident.
Create a Pod and reference the PriorityClass
The next step is to create a Pod that references this PriorityClass:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  priorityClassName: high-priority
If the PriorityClass specified in Pod.spec.priorityClassName does not exist, the Pod will fail to create.
As mentioned earlier, when a Pod is created, the Priority admission controller looks up the PriorityClass referenced by priorityClassName and writes its value into Pod.spec.priority.
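The admission behavior just described can be sketched in a few lines of Python. This is a simplified model, not the actual admission-controller code; the function name and data shapes are invented for illustration:

```python
# Minimal sketch of the Priority admission step described in the text
# (NOT the real admission controller): resolve priorityClassName to the
# class's value, fall back to the globalDefault class or 0, reject unknown names.

def resolve_priority(pod_spec, priority_classes):
    """priority_classes: dict of name -> {'value': int, 'globalDefault': bool}."""
    name = pod_spec.get("priorityClassName")
    if name is not None:
        if name not in priority_classes:
            # Mirrors "the Pod will fail to create" for an unknown class.
            raise ValueError(f"no PriorityClass named {name!r}: pod rejected")
        pod_spec["priority"] = priority_classes[name]["value"]
        return pod_spec
    # No class referenced: use the globalDefault class if one exists, else 0.
    default = next(
        (c["value"] for c in priority_classes.values() if c["globalDefault"]), 0
    )
    pod_spec["priority"] = default
    return pod_spec

classes = {"high-priority": {"value": 1000000, "globalDefault": False}}
spec = resolve_priority({"priorityClassName": "high-priority"}, classes)
print(spec["priority"])  # 1000000
```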
Current problems with Preemption
When preemptive scheduling evicts a low-priority Pod, that Pod gets a graceful termination period (30s by default). If several low-priority Pods on the Node must be evicted, it may take quite a while before the pending Pod can be bound to the Node and start running. So the question is: after such a long time, is this Node still the most suitable one for this Pod? Not necessarily! In large clusters where Pods are created frequently, this outcome is common. It means a scheduling decision that was correct when made may no longer be correct by the time it actually takes effect.
PodDisruptionBudgets (PDBs) are not yet considered when choosing preemption victims; support is planned for the beta version.
Preemption currently does not consider affinity between the pending Pod and the victim Pods: if the pending Pod has affinity to lower-priority Pods on the Node it is being scheduled to, directly evicting those lower-priority Pods may break that affinity.
Cross-node preemption is not supported. For example, suppose pending Pod M needs to be scheduled to Node A, and M has a zone-topology anti-affinity with Pod N running on Node B in the same zone as Node A. In the current Alpha version, Pod M stays Pending and cannot be scheduled. If cross-node preemption were supported later, the lower-priority Pod N could be evicted from Node B, satisfying the anti-affinity.
Taint Nodes by Condition
Before Kubernetes 1.8, Node Conditions directly affected scheduling, and the logic was fixed and unchangeable:
kubelet periodically reports Node Conditions to kube-apiserver, where they are stored in etcd.
When kube-scheduler observes a Node Condition indicating pressure, it blocks further Pods from being bound to that Node according to the following policy:
Node Condition  | Scheduler Behavior
MemoryPressure  | No new BestEffort pods are scheduled to the node.
DiskPressure    | No new pods are scheduled to the node.
That is: when a Node's Conditions include MemoryPressure, no new BestEffort QoS Pods may be scheduled to that Node; when they include DiskPressure, no new Pods of any kind may be scheduled to that Node.
Starting with Kubernetes 1.6, kubelet and the Node Controller can automatically add the corresponding built-in Taints to a Node based on its Node Conditions. At that time, these Taints only affected kubelet eviction, not scheduling. What's the difference? Adding Taints to a Node is a soft constraint on scheduling: you can still force scheduling onto that node by adding the corresponding Tolerations to Pods. Before 1.8, Node Conditions affected scheduling as a hard constraint.
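As an illustration of the "soft constraint" point above, a Pod could tolerate the memory-pressure taint and still be scheduled to an affected node. This is a hypothetical example spec (the pod name and image are placeholders); the taint key matches the mapping table below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mem-tolerant          # hypothetical name for illustration
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: node.kubernetes.io/memoryPressure
    operator: Exists
    effect: NoSchedule        # tolerate the condition-derived taint
```

With this toleration, the scheduler may still place the Pod on a node tainted for MemoryPressure, which was impossible under the old hard-constraint behavior.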
The mapping between Node Conditions and Taints is as follows:
Condition Type      | Condition Status | Effect     | Key
Ready               | True             | -          | -
Ready               | False            | NoExecute  | node.kubernetes.io/notReady
Ready               | Unknown          | NoExecute  | node.kubernetes.io/unreachable
OutOfDisk           | True             | NoSchedule | node.kubernetes.io/outOfDisk
OutOfDisk           | False / Unknown  | -          | -
MemoryPressure      | True             | NoSchedule | node.kubernetes.io/memoryPressure
MemoryPressure      | False / Unknown  | -          | -
DiskPressure        | True             | NoSchedule | node.kubernetes.io/diskPressure
DiskPressure        | False / Unknown  | -          | -
NetworkUnavailable  | True             | NoSchedule | node.kubernetes.io/networkUnavailable
NetworkUnavailable  | False / Unknown  | -          | -
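The table above is essentially a lookup from (condition type, status) to a taint. A small Python sketch, in the spirit of what the node controller does (not actual controller code; the function name is invented), makes the mapping concrete:

```python
# Encodes the Condition -> Taint table above as a lookup
# (illustrative only, NOT the real node controller).

CONDITION_TAINTS = {
    ("Ready", "False"): ("node.kubernetes.io/notReady", "NoExecute"),
    ("Ready", "Unknown"): ("node.kubernetes.io/unreachable", "NoExecute"),
    ("OutOfDisk", "True"): ("node.kubernetes.io/outOfDisk", "NoSchedule"),
    ("MemoryPressure", "True"): ("node.kubernetes.io/memoryPressure", "NoSchedule"),
    ("DiskPressure", "True"): ("node.kubernetes.io/diskPressure", "NoSchedule"),
    ("NetworkUnavailable", "True"): ("node.kubernetes.io/networkUnavailable", "NoSchedule"),
}

def taints_for(conditions):
    """conditions: dict of condition type -> status string.
    Returns the (key, effect) taints the table prescribes."""
    return [CONDITION_TAINTS[(t, s)] for t, s in conditions.items()
            if (t, s) in CONDITION_TAINTS]

print(taints_for({"Ready": "False", "MemoryPressure": "True"}))
```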
That's all for "how to create and apply Pod Priority in Kubernetes 1.8". Thank you for reading. If you want to learn more about the industry, you can follow this site, where the editor will keep publishing practical articles for you!