How to use Kubernetes container scheduling

This article explains how to use Kubernetes container scheduling. The content is simple and clear, and easy to learn and understand; please follow along with the editor's walkthrough of "how to use Kubernetes container scheduling".

Node scheduling

By default, following native Kubernetes behavior, the pods of a Rancher 2.0 workload are distributed across nodes (hosts) that are schedulable and have sufficient available capacity. But just like version 1.6, Rancher 2.0 also lets you:

Run all pods on a specific node

Use labels for node scheduling

The following is the scheduling UI in Rancher 1.6. When deploying a service, Rancher allows you to run all containers on a specific host, specify hard/soft host labels, or use affinity/anti-affinity rules.

The following is the corresponding node scheduling UI in Rancher 2.0, which provides the same functionality when deploying workloads.

Rancher uses the underlying native Kubernetes constructs to specify node affinity/anti-affinity.

In the examples below, we will see how to schedule workload pods using the node scheduling options, and then compare the resulting Kubernetes YAML specification with the equivalent Rancher 1.6 Docker Compose configuration.

Example: run all pods on a specific node

When you deploy a workload (navigate to Cluster > Project > Workloads), you can schedule all pods in the workload to a specific node.

Here, I deploy a workload of scale = 2 using the nginx image on a specific node.

Rancher schedules onto that node if it has sufficient compute resources available and, if hostPort is used, no port conflicts arise. If the workload is exposed using a NodePort that conflicts with another workload, the deployment is still created successfully, but no NodePort service is created for it; as a result, the workload is not exposed at all.

On the Workloads tab, you can list workloads by node. Here I can see that both pods of my nginx workload are scheduled on the specified node:

The scheduling rules in the Kubernetes pod specification are as follows:
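
The original figure is not reproduced here; a minimal sketch of such a specification, assuming a node named node1 and the nginx image from the example, could look like this:

```yaml
# Minimal sketch: pin every pod of the workload to one node.
# The node name "node1" is an assumption, not taken from the original figure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeName: node1   # binds the pods directly to this node
      containers:
      - name: nginx
        image: nginx
```

Rancher may express the same constraint as a node affinity on the kubernetes.io/hostname label; nodeName is simply the most direct equivalent.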

Example: host label affinity / anti-affinity

I added the label foo=bar to node1 in the Rancher 2.0 cluster to test label-based node scheduling rules.

Host label affinity: hard

The following figure shows how to specify host label affinity rules in the Rancher 2.0 UI. A hard affinity rule means that the chosen host must satisfy all scheduling rules; if no such host is found, the workload cannot be deployed.

In the pod spec YAML, this rule translates to the nodeAffinity field. Note that I have also included the Rancher 1.6 docker-compose.yml that achieves the same scheduling behavior using labels.
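
The YAML itself appears as an image in the source; a sketch of the hard-affinity pod spec for the foo=bar label, with the image name assumed, might be:

```yaml
# Hard node affinity: schedule only onto nodes labeled foo=bar.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: foo
            operator: In
            values:
            - bar
  containers:
  - name: nginx
    image: nginx
```

And the matching Rancher 1.6 docker-compose.yml, using the scheduler's host-label affinity label:

```yaml
# Rancher 1.6: hard host-label affinity via a scheduler label.
version: '2'
services:
  nginx:
    image: nginx
    labels:
      io.rancher.scheduler.affinity:host_label: foo=bar
```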

Host label affinity: soft

If you are a Rancher 1.6 user, you will know that a soft affinity rule means the scheduler tries to deploy the application according to the rule, but deploys it successfully even if no host satisfies the rule. Here is how to specify this rule in the Rancher 2.0 UI:

The corresponding pod YAML specification is as follows:
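
Again a sketch rather than the original figure; soft affinity uses the preferred variant of nodeAffinity, with a weight:

```yaml
# Soft node affinity: prefer nodes labeled foo=bar, but fall back to any node.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: foo
            operator: In
            values:
            - bar
  containers:
  - name: nginx
    image: nginx
```

In Rancher 1.6 compose terms, the equivalent scheduler label is io.rancher.scheduler.affinity:host_label_soft.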

Host label anti-affinity

In addition to the key = value host label matching rule, the Kubernetes scheduling constructs support the following operators: In, NotIn, Exists, DoesNotExist, Gt, and Lt.

Therefore, to achieve anti-affinity, you can use the NotIn and DoesNotExist operators when matching node labels.
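
For instance, a sketch of node anti-affinity against the foo=bar label from the earlier example:

```yaml
# Node anti-affinity: avoid nodes labeled foo=bar.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: foo
            operator: NotIn
            values:
            - bar
  containers:
  - name: nginx
    image: nginx
```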

Scheduling using container labels

This Rancher 1.6 feature allows you to schedule containers onto hosts that are running containers with specific labels. To do this in Rancher 2.0, use the Kubernetes inter-pod affinity and anti-affinity features:

Kubernetes allows you to constrain which nodes a pod can be scheduled to based on the labels of pods already running there, rather than the labels of the nodes themselves.

One of the most commonly used scheduling features in Rancher 1.6 is anti-affinity of a service to itself, using labels on the container. To replicate this behavior in Rancher 2.0, we can use the podAntiAffinity construct in the Kubernetes YAML specification. For example, consider an Nginx web workload. To ensure that the pods of this workload do not land on the same host, you can use the podAntiAffinity construct, as shown below. By specifying podAntiAffinity keyed on labels, we ensure that no two Nginx replicas coexist on a single node.
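
The specification appears as an image in the source; a sketch consistent with the description (three replicas, anti-affinity keyed on the app label) might be:

```yaml
# Deployment with pod anti-affinity: no two nginx replicas on the same node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: kubernetes.io/hostname   # "same node" granularity
      containers:
      - name: nginx
        image: nginx
```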

With the Rancher CLI, you can deploy this workload to the Kubernetes cluster. Note that the above deployment specifies three replicas, and I have three schedulable nodes in the Kubernetes cluster.

Because podAntiAffinity is specified, the three pods end up on different nodes. To further check how podAntiAffinity is applied, I can scale the deployment up to four pods. Note that the fourth pod cannot be scheduled, because the scheduler cannot find another node that satisfies the podAntiAffinity rule.

Resource-based scheduling

When you create a service in Rancher 1.6, you can specify a memory reservation and an mCPU reservation in the Security/Host tab of the UI. Cattle then schedules the service's containers onto a host with sufficient available compute resources.

In Rancher 2.0, you can use resources.requests.memory and resources.requests.cpu under the pod's container specification to specify the memory and CPU resources required by the workload's pods.

When you specify these resource requests, the Kubernetes scheduler places the pods on nodes with sufficient capacity.
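
A sketch of such a container spec; the request values here are assumptions:

```yaml
# Container-level resource requests: the scheduler only places this pod
# on a node with at least this much unreserved CPU and memory.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
```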

Schedule only specific services to a host

Rancher 1.6 lets you apply container labels to a host so that only specific containers are scheduled onto it.

To do this in Rancher 2.0, you can use the corresponding Kubernetes features: add node taints (think of them as host labels) and use tolerations in the pod specification:
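
A sketch, assuming the node has been given a taint such as dedicated=nginx:NoSchedule (for example via kubectl taint nodes node1 dedicated=nginx:NoSchedule):

```yaml
# Pod that tolerates the dedicated=nginx:NoSchedule taint.
# Pods without this toleration are not scheduled onto the tainted node.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "nginx"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
```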

Global service

In Rancher 1.6, a global service is a service that deploys a container on each host in the environment:

If a service is labeled io.rancher.scheduler.global: 'true', the Rancher 1.6 scheduler schedules a container for that service on every host in the environment. As described in the documentation, "if a new host is added to the environment and the host meets the host requirements for the global service, Rancher automatically starts the service" on it.

The following is an example of a global service in Rancher 1.6. Note that simply setting the required label is enough to make the service global.
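
The source shows this as an image; a minimal compose sketch of such a global service would be:

```yaml
# Rancher 1.6 global service: one nginx container on every host.
version: '2'
services:
  nginx:
    image: nginx
    labels:
      io.rancher.scheduler.global: 'true'
```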

How do we deploy global services in Rancher 2.0 using Kubernetes?

To do this, Rancher deploys a Kubernetes DaemonSet object for the user's workload. A DaemonSet functions exactly like a Rancher 1.6 global service: the Kubernetes scheduler deploys a pod on each node of the cluster, and as new nodes are added it launches the workload's pod on them too, provided they match the workload's scheduling requirements.

In addition, in 2.0 you can restrict DaemonSet deployment to nodes that have specific labels.

Deploy DaemonSet using Rancher 2.0 UI

If you are a Rancher 1.6 user and want to use UI to migrate global services to Rancher 2.0, navigate to the Cluster > Project > Workloads view. When deploying a workload, you can select the following workload types:

This is the Kubernetes YAML specification for the above DaemonSet workload:
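
The original YAML appears as an image in the source; a sketch of an equivalent DaemonSet, including an optional nodeSelector to restrict it to labeled nodes as mentioned above, might be:

```yaml
# DaemonSet: runs one nginx pod on every schedulable node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-global
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # nodeSelector:        # uncomment to restrict to nodes labeled foo=bar
      #   foo: bar
      containers:
      - name: nginx
        image: nginx
```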

From Docker Compose to Kubernetes YAML

To migrate the Rancher 1.6 global service to Rancher 2.0 using the Compose configuration, follow these steps.

You can use the Kompose tool to convert the Rancher 1.6 docker-compose.yml file to Kubernetes YAML, and then deploy the application against the Kubernetes cluster using the kubectl client or the Rancher CLI.

Think back to the docker-compose.yml specification above, where the nginx service is a global service. Here is how to convert it to Kubernetes YAML using Kompose:
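
The conversion itself is shown as a screenshot in the source; a sketch of the step, with the command given as a comment (the --controller flag is an assumption based on recent Kompose versions; check kompose convert --help):

```yaml
# Input: the docker-compose.yml from the global-service example above.
# Convert with:  kompose convert -f docker-compose.yml --controller daemonSet
# Kompose emits Kubernetes manifests, including a *-daemonset.yaml to deploy.
version: '2'
services:
  nginx:
    image: nginx
    labels:
      io.rancher.scheduler.global: 'true'
```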

Now let's configure the Rancher CLI for the Kubernetes cluster and deploy the generated *-daemonset.yaml file.

As shown above, my Kubernetes cluster has two worker nodes that can schedule workloads, and deploying global-daemonset.yaml launches two pods for the DaemonSet, one on each node.

Thank you for reading. The above is the content of "how to use Kubernetes container scheduling". After studying this article, I believe you have a deeper understanding of how to use Kubernetes container scheduling; the specific usage still needs to be verified in practice.
