
Scheduling of Pod Resources in Kubernetes


Introduction to Kubernetes

Kubernetes, K8s for short, gets its abbreviation by replacing the eight characters "ubernete" with the digit 8. Kubernetes is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

Introduction to Pod

A Pod is the smallest and simplest basic unit that Kubernetes creates or deploys; a Pod represents a process running on the cluster.

A Pod encapsulates one or more application containers, storage resources, a unique network IP, and policy options that govern how the containers run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which may consist of a single container or of several containers that share resources.
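To make that concrete, here is a minimal illustrative manifest (the names shared-pod, writer, reader, and shared-data are made up for this sketch, not from the original text): two containers in one Pod share the same network IP and an emptyDir volume.

apiVersion: v1
kind: Pod
metadata:
  name: shared-pod            # hypothetical name for illustration
spec:
  volumes:
  - name: shared-data         # storage resource shared by both containers
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data

Both containers can reach each other over localhost and read the same /data directory, which is what "a Pod encapsulates containers, storage, and a network IP" means in practice.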

Usually the default K8s scheduler is sufficient, but in some cases we need a pod to run only on nodes that carry a particular label. The default scheduling policy cannot express this, so we need to specify a scheduling policy that tells K8s which node (or nodes) the pod should be scheduled to.

NodeSelector

Normally a scheduling constraint such as nodeSelector is used directly. Labels are the common way to tag resources in K8s: we mark a node with a particular label, and nodeSelector will then schedule the pod onto a node that carries the specified labels.

Here's an example:

First, look at the nodes and their labels with the following command:

$ kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE    VERSION   LABELS
master   Ready    master   147d   v1.10.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node02   Ready    <none>   67d    v1.10.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,course=k8s,kubernetes.io/hostname=node02
node03   Ready    <none>   127d   v1.10.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,jnlp=haimaxy,kubernetes.io/hostname=node03

You can then add a label to the node02 node:

$ kubectl label nodes node02 com=yijiadashuju
node "node02" labeled

You can then check whether the label took effect with the --show-labels parameter used above. Once the node is tagged, these labels can be used during scheduling: simply add a nodeSelector field to the Pod's spec, containing the labels of the node it should be scheduled to. For example, to force the Pod onto the node02 node, we can express it with nodeSelector: (pod-selector-demo.yaml)

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: busybox-pod
  name: test-busybox
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: busybox
    imagePullPolicy: Always
    name: test-busybox
  nodeSelector:
    com: yijiadashuju

Then, after applying the pod-selector-demo.yaml file, you can view which node the pod is running on with the following command:

$ kubectl get pod -o wide -n default

You can also use the describe command to see which node the pod was dispatched to:

$ kubectl create -f pod-selector-demo.yaml
pod "test-busybox" created
$ kubectl describe pod test-busybox
Name:           test-busybox
Namespace:      default
Node:           node02/10.151.30.63
...
QoS Class:      BestEffort
Node-Selectors: com=yijiadashuju
Tolerations:    node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age  From               Message
  ----    ------                 ---- ----               -------
  Normal  SuccessfulMountVolume  55s  kubelet, node02    MountVolume.SetUp succeeded for volume "default-token-n9w2d"
  Normal  Scheduled              54s  default-scheduler  Successfully assigned test-busybox to node02
  Normal  Pulling                54s  kubelet, node02    pulling image "busybox"
  Normal  Pulled                 40s  kubelet, node02    Successfully pulled image "busybox"
  Normal  Created                40s  kubelet, node02    Created container
  Normal  Started                40s  kubelet, node02    Started container

As you can see from the output above, the pod was placed on the node02 node by the default-scheduler. Note that this kind of scheduling is mandatory: if resources on node02 are insufficient, the pod will remain in the Pending state. That is the essence of nodeSelector.
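Not part of the original walkthrough, but if a pod does get stuck in Pending this way, a quick check is possible with standard kubectl flags:

$ kubectl get pods --field-selector=status.phase=Pending
$ kubectl describe pod test-busybox    # the Events section shows why scheduling failed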

As the above shows, nodeSelector is very convenient to use, but it has a significant shortcoming: it is not flexible enough, and its control granularity is too coarse, which causes real inconvenience in practice. Next, let's look at affinity and anti-affinity scheduling.

Affinity and anti-affinity scheduling

The default scheduling process for K8s actually goes through two phases: predicates and priorities. With default scheduling, K8s dispatches pods to nodes with ample resources; with nodeSelector, it dispatches pods to nodes carrying a specified label. In a real production environment, however, we often need pods scheduled onto a group of nodes with certain labels to meet actual demands, and for that we need nodeAffinity (node affinity), podAffinity (pod affinity), and podAntiAffinity (pod anti-affinity).

Affinity can be divided into hard affinity and soft affinity.

Soft affinity: if the scheduling requirement cannot be met, scheduling continues anyway; the rule is satisfied when possible, but it is not mandatory.

Hard affinity: the requirement must be met during scheduling. If it cannot be, the pod will not be scheduled to the node.

The corresponding rules are:

Soft policy: preferredDuringSchedulingIgnoredDuringExecution

Hard policy: requiredDuringSchedulingIgnoredDuringExecution
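As a rough structural sketch of where these two policies sit in a pod spec (the affinity-skeleton name and the disktype and zone labels are made up for illustration), note that the "IgnoredDuringExecution" suffix means pods already running are not evicted if node labels later change:

apiVersion: v1
kind: Pod
metadata:
  name: affinity-skeleton       # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard: unmatched -> pod stays Pending
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # hypothetical node label
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:  # soft: a preference, not a requirement
      - weight: 1                # 1-100; higher weight counts more in the priorities phase
        preference:
          matchExpressions:
          - key: zone            # hypothetical node label
            operator: In
            values:
            - zone-a
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]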

NodeAffinity node affinity

Node affinity is mainly used to control which nodes a pod can and cannot be deployed on. It supports simple logical combinations, not just exact equality matches.

Next, let's look at an example that uses a Deployment to manage three pod replicas and nodeAffinity to control their scheduling: (node-affinity-demo.yaml)

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: affinity
  labels:
    app: affinity
spec:
  replicas: 3
  revisionHistoryLimit: 15
  template:
    metadata:
      labels:
        app: affinity
        role: test
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: nginxweb
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:   # hard policy
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: NotIn
                values:
                - node03
          preferredDuringSchedulingIgnoredDuringExecution:  # soft policy
          - weight: 1
            preference:
              matchExpressions:
              - key: com
                operator: In
                values:
                - yijiadashuju

When this pod is scheduled, the hard requirement is that it must not run on the node03 node; additionally, if any node carries the label com=yijiadashuju, the pod will preferentially be dispatched to that node.

Next, take a look at the node information:

$ kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE    VERSION   LABELS
master   Ready    master   154d   v1.10.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node02   Ready    <none>   74d    v1.10.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,com=yijiadashuju,course=k8s,kubernetes.io/hostname=node02
node03   Ready    <none>   134d   v1.10.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,jnlp=haimaxy,kubernetes.io/hostname=node03

You can see that the node02 node carries the label com=yijiadashuju, so it will be preferred as required. Now create the Deployment and check where the pods were scheduled:

$ kubectl create -f node-affinity-demo.yaml
deployment.apps "affinity" created
$ kubectl get pods -l app=affinity -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP             NODE
affinity-7b4c946854-5gfln   1/1     Running   0          47s   10.244.4.214   node02
affinity-7b4c946854-l8b47   1/1     Running   0          47s   10.244.4.215   node02
affinity-7b4c946854-r86p5   1/1     Running   0          47s   10.244.4.213   node02

From the results, you can see that all three pods were deployed to the node02 node.

Kubernetes currently provides the following operators:

- In: the label's value is in the given list
- NotIn: the label's value is not in the given list
- Gt: the label's value is greater than the given value
- Lt: the label's value is less than the given value
- Exists: the label exists
- DoesNotExist: the label does not exist

If there are multiple terms under nodeSelectorTerms, satisfying any one of them is enough; if a single matchExpressions entry contains multiple expressions, all of them must be satisfied for the pod to be scheduled, as the sketch below illustrates.
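A small sketch of those semantics, reusing the node labels from the cluster shown earlier (course=k8s, com, jnlp=haimaxy); the pod name selector-terms-demo is hypothetical. This pod can land on any node that satisfies either term, but within the first term both expressions must hold:

apiVersion: v1
kind: Pod
metadata:
  name: selector-terms-demo     # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:      # multiple terms are ORed: any one term is enough
        - matchExpressions:     # expressions inside one term are ANDed
          - key: course
            operator: In
            values:
            - k8s
          - key: com
            operator: Exists
        - matchExpressions:
          - key: jnlp
            operator: In
            values:
            - haimaxy
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]

In the example cluster above, node02 satisfies the first term and node03 satisfies the second, so either node is eligible.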

PodAffinity pod affinity

Pod affinity is mainly used to decide which pods can be deployed in the same topology domain (a set of nodes) as a given pod, while pod anti-affinity decides which pods cannot be co-located with it; both address placement relationships between pods. Note that inter-pod affinity and anti-affinity require significant processing, which can noticeably slow down scheduling in large clusters; they are not recommended in clusters with more than several hundred nodes. Pod anti-affinity also requires nodes to be labeled consistently: every node in the cluster must carry a label matching topologyKey, and unexpected behavior can occur if some or all nodes are missing the specified topologyKey label.

The following is an example of inter-pod affinity:

pods/pod-with-pod-affinity.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: failure-domain.beta.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: k8s.gcr.io/pause:2.0

PodAntiAffinity pod anti-affinity

The following is an example of a pod anti-affinity yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
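The anti-affinity above spreads the three redis-cache replicas across different nodes, since no two pods labeled app=store may share a kubernetes.io/hostname. To show how affinity and anti-affinity combine, here is a companion sketch (the web-server Deployment and its app=web-store label are assumptions for illustration, not from the original text): each web pod avoids other web pods but must land on a node that already runs a store pod.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server              # hypothetical companion deployment
spec:
  selector:
    matchLabels:
      app: web-store
  replicas: 3
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:        # spread web pods across nodes
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:            # co-locate each web pod with a redis-cache pod
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.16-alpine

On a cluster with three eligible nodes, this pattern typically results in one cache pod and one web pod per node.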
