
Kubernetes series tutorials (6) kubernetes resource management and quality of service


Write at the front

In the previous article, kubernetes series tutorials (5) in-depth mastery of core concepts pod, we introduced pod, an important concept in kubernetes, through yaml examples. This article continues the series with pod resource management and pod quality of service (QoS).

1. Pod Resource Management

1.1 Resource definition

While a container runs, it needs resources allocated to it. How does kubernetes cooperate with cgroup to do this? The answer is the resources definition. Resources are allocated mainly as cpu and memory, and there are two kinds of definitions: requests and limits. requests is the requested amount, used by kubernetes as the basis for initial pod scheduling; it represents the resources that must be available. limits is the upper bound: a pod cannot exceed the limits defined for it, and the bound is enforced through cgroup. Resources can be defined in a pod through the following four fields:

spec.containers[].resources.requests.cpu: requested cpu size; 0.1 and 100m both mean one-tenth of a cpu.
spec.containers[].resources.requests.memory: requested memory size, expressed in units such as Mi or Gi.
spec.containers[].resources.limits.cpu: cpu limit; the container cannot exceed this threshold, enforced through cgroup.
spec.containers[].resources.limits.memory: memory limit; the container cannot exceed this threshold, and exceeding it triggers OOM.

1. Start by learning how to define pod resources. Taking nginx-demo as an example, the container requests 250m of cpu with a limit of 500m, and requests 128Mi of memory with a limit of 256Mi. You can also define resources for multiple containers; the pod's total resources are then the sum over its containers (a two-container sketch follows the example below):

[root@node-1 demo]# cat nginx-resource.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    name: nginx-demo
spec:
  containers:
  - name: nginx-demo
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 0.25
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
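As an aside, here is a minimal sketch of how multi-container requests add up (the pod name, container names, and images are illustrative, not from the original example); the scheduler treats this pod as requesting 350m cpu and 192Mi memory in total:

apiVersion: v1
kind: Pod
metadata:
  name: multi-demo            # hypothetical name
spec:
  containers:
  - name: app                 # requests 250m / 128Mi
    image: nginx:1.7.9
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
  - name: sidecar             # requests 100m / 64Mi
    image: busybox:1.31
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
# effective pod request seen by kube-scheduler: cpu 350m, memory 192Mi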

2. Apply the pod definition (if the pod from the previous article still exists, delete it first with kubectl delete pod, or give this pod a different name):

[root@node-1 demo]# kubectl apply -f nginx-resource.yaml
pod/nginx-demo created

3. View the allocation details of pod resources

[root@node-1 demo]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
demo-7b86696648-8bq7h   1/1     Running   0          12d
demo-7b86696648-8qp46   1/1     Running   0          12d
demo-7b86696648-d6hfw   1/1     Running   0          12d
nginx-demo              1/1     Running   0          94s

[root@node-1 demo]# kubectl describe pods nginx-demo
Name:         nginx-demo
Namespace:    default
Priority:     0
Node:         node-3/10.254.100.103
Start Time:   Sat, 28 Sep 2019 12:10:49 +0800
Labels:       name=nginx-demo
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-demo"},"name":"nginx-demo","namespace":"default"},"sp...
Status:       Running
IP:           10.244.2.13
Containers:
  nginx-demo:
    Container ID:   docker://55d28fdc992331c5c58a51154cd072cd6ae37e03e05ae829a97129f85eb5ed79
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 12:10:51 +0800
    Ready:          True
    Restart Count:  0
    Limits:         # resource limits
      cpu:     500m
      memory:  256Mi
    Requests:       # resource requests
      cpu:     250m
      memory:  128Mi
    Environment:    ...omitted...

4. Where are pod resources allocated from? From the node, of course. When a pod with requests set is created, the kubernetes scheduler kube-scheduler runs two steps: filtering and weighing. kube-scheduler first filters out the nodes that can satisfy the requested resources (a node is eligible only if its allocatable resources, minus the requests of the pods already running on it, still cover the new pod's requests), then scores the remaining nodes to pick the one best suited to run the pod, and finally runs the pod on that node. For the scheduling algorithm and its details, refer to the introduction to the kubernetes scheduling algorithm. Below are the resource allocation details of the node-3 node:

[root@node-1 ~]# kubectl describe node node-3
...omitted...
Capacity:            # total resources on the node: 1 cpu, roughly 2G memory, 110 pods
 cpu:                1
 ephemeral-storage:  51473888Ki
 hugepages-2Mi:      0
 memory:             1882352Ki
 pods:               110
Allocatable:         # resources the node may allocate to pods; reserved resources are excluded from this category
 cpu:                1
 ephemeral-storage:  47438335103
 hugepages-2Mi:      0
 memory:             1779952Ki
 pods:               110
System Info:
 Machine ID:                 0ea734564f9a4e2881b866b82d679dfc
 System UUID:                FFCD2939-1BF2-4200-B4FD-8822EBFFF904
 Boot ID:                    293f49fd-8a7c-49e2-8945-7a4addbd88ca
 Kernel Version:             3.10.0-957.21.3.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.3
 Kubelet Version:            v1.15.3
 Kube-Proxy Version:         v1.15.3
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (3 in total)   # pods running on this node; besides nginx-demo there are a few others
  Namespace    Name                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                         ------------  ----------  ---------------  -------------  ---
  default      nginx-demo                   250m (25%)    500m (50%)  128Mi (7%)       256Mi (14%)    63m
  kube-system  kube-flannel-ds-amd64-jp594  100m (10%)    100m (10%)  50Mi (2%)        50Mi (2%)      14d
  kube-system  kube-proxy-mh3gq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
Allocated resources:         # cpu and memory already allocated
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                350m (35%)   600m (60%)
  memory             178Mi (10%)  306Mi (17%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:              <none>

1.2 Resource allocation principles

The requests and limits defined in a pod act on the kubernetes scheduler kube-scheduler; the cpu and memory they define are actually applied to the container, where isolation is enforced through cgroup. Next, we introduce the principles of resource allocation.

spec.containers[].resources.requests.cpu maps to CpuShares, the cpu weight, i.e. the share of cpu the container gets during contention.
spec.containers[].resources.requests.memory is used only by the kube-scheduler for scheduling; it has no effect on the running container.
spec.containers[].resources.limits.cpu maps to CpuQuota and CpuPeriod (both in microseconds); CpuQuota/CpuPeriod is the maximum cpu percentage the container may use, e.g. 500m allows 50% of one cpu.
spec.containers[].resources.limits.memory maps to Memory, the maximum memory available to the container; exceeding it triggers OOM.
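The steps below verify this mapping by dumping the full docker container inspect output. As a quicker spot-check, the same four HostConfig fields can be read directly with docker's --format flag (a sketch; substitute your own container id):

docker inspect --format \
  '{{.HostConfig.CpuShares}} {{.HostConfig.CpuQuota}}/{{.HostConfig.CpuPeriod}} {{.HostConfig.Memory}}' \
  <container-id>
# for the nginx-demo above this should print: 256 50000/100000 268435456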

Taking the nginx-demo defined above as an example, let's examine which docker parameters the requests and limits defined in the pod actually take effect on:

1. View the node where the pod is located; nginx-demo has been scheduled onto node-3:

[root@node-1]# kubectl get pods -o wide nginx-demo
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx-demo   1/1     Running   0          96m   10.244.2.13   node-3   <none>           <none>

2. Obtain the container id, either from the containerID in kubectl describe pods nginx-demo, or by logging in to node-3 and filtering by name. A pod has two containers by default: one created from the pause image, and one created from the application image.

[root@node-3 ~]# docker container list | grep nginx
55d28fdc9923   84581e99d807           "nginx -g 'daemon of…"   2 hours ago   Up 2 hours   k8s_nginx-demo_nginx-demo_default_66958ef7-507a-41cd-a688-7a4976c6a71e_0
2fe0498ea9b5   k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago   Up 2 hours   k8s_POD_nginx-demo_default_66958ef7-507a-41cd-a688-7a4976c6a71e_0
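Both containers should belong to the same pod-level cgroup, which can be checked quickly (a sketch using the two ids listed above; that the pause container shares the pod slice is an assumption worth verifying on your own node):

docker inspect --format '{{.HostConfig.CgroupParent}}' 55d28fdc9923 2fe0498ea9b5
# both lines should print the same kubepods-burstable-pod... slice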

3. View the details of docker container

[root@node-3]# docker container inspect 55d28fdc9923
[
    ...some output omitted...
    "Image": "sha256:84581e99d807a703c9c03bd1a31cd9621815155ac72a7365fd02311264512656",
    "ResolvConfPath": "/var/lib/docker/containers/2fe0498ea9b5dfe1eb63eba09b1598a8dfd60ef046562525da4dcf7903a25250/resolv.conf",
    "HostConfig": {
        "Binds": [
            "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/volumes/kubernetes.io~secret/default-token-5qwmc:/var/run/secrets/kubernetes.io/serviceaccount:ro",
            "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/etc-hosts:/etc/hosts",
            "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/containers/nginx-demo/1cc072ca:/dev/termination-log"
        ],
        "ContainerIDFile": "",
        "LogConfig": {
            "Type": "json-file",
            "Config": {
                "max-size": "100m"
            }
        },
        "UTSMode": "",
        "UsernsMode": "",
        "ShmSize": 67108864,
        "Runtime": "runc",
        "ConsoleSize": [0, 0],
        "Isolation": "",
        "CpuShares": 256,        # cpu weight, derived from requests.cpu
        "Memory": 268435456,     # memory cap in bytes, derived from limits.memory
        "NanoCpus": 0,
        "CgroupParent": "kubepods-burstable-pod66958ef7_507a_41cd_a688_7a4976c6a71e.slice",
        "BlkioWeight": 0,
        "BlkioWeightDevice": null,
        "BlkioDeviceReadBps": null,
        "BlkioDeviceWriteBps": null,
        "BlkioDeviceReadIOps": null,
        "BlkioDeviceWriteIOps": null,
        "CpuPeriod": 100000,     # cpu usage percentage; works with CpuQuota, derived from limits.cpu
        "CpuQuota": 50000,
        "CpuRealtimePeriod": 0,
        "CpuRealtimeRuntime": 0,
        "CpusetCpus": "",
        "CpusetMems": "",
        "Devices": [],
        "DeviceCgroupRules": null,
        "DiskQuota": 0,
        "KernelMemory": 0,
        "MemoryReservation": 0,
        "MemorySwap": 268435456,
        "MemorySwappiness": null,
        "OomKillDisable": false,
        "PidsLimit": 0,
        "Ulimits": null,
        "CpuCount": 0,
        "CpuPercent": 0,
        "IOMaximumIOps": 0,
        "IOMaximumBandwidth": 0,
        ...
    }
]

1.3 Cpu resource testing

Pod cpu limiting is defined mainly by requests.cpu and limits.cpu, and usage cannot exceed limits.cpu. We verify this with the stress image, a load-generation tool for cpu and memory; the amount of cpu to burn is set through the args parameter. A pod's cpu and memory can be monitored with kubectl top, which depends on a monitoring component such as metrics-server or Prometheus; since none is installed here, we check with docker stats instead.
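For reference, on a cluster where metrics-server is installed, the same check would look roughly like this (a sketch; the column values depend on the cluster):

kubectl top pod cpu-demo
# the CPU(cores) column should read close to 500m once the stress load is capped by limits.cpu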

1. Define a pod from the stress image, requesting 0.25 cores and limiting usage to 0.5 core:

[root@node-1 demo]# cat cpu-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: default
  annotations:
    kubernetes.io/description: "demo for cpu requests and"
spec:
  containers:
  - name: stress-cpu
    image: vish/stress
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
    args:
    - -cpus
    - "1"

2. Apply the yaml file to generate the pod:

[root@node-1 demo]# kubectl apply -f cpu-demo.yaml
pod/cpu-demo created

3. View the details of pod resource allocation

[root@node-1 demo]# kubectl describe pods cpu-demo
Name:         cpu-demo
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 14:33:12 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/description":"demo for cpu requests and"},"name":"cpu-demo","nam...
              kubernetes.io/description: demo for cpu requests and
Status:       Running
IP:           10.244.1.14
Containers:
  stress-cpu:
    Container ID:  docker://14f93767ad37b92beb91e3792678f60c9987bbad3290ae8c29c35a2a80101836
    Image:         progrium/stress
    Image ID:      docker-pullable://progrium/stress@sha256:e34d56d60f5caae79333cee395aae93b74791d50e3841986420d23c2ee4697bf
    Port:          <none>
    Host Port:     <none>
    Args:
      -cpus
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Sep 2019 14:34:28 +0800
      Finished:     Sat, 28 Sep 2019 14:34:28 +0800
    Ready:          False
    Restart Count:  3
    Limits:         # cpu limit
      cpu:  500m
    Requests:       # cpu request
      cpu:  250m

4. Log in to the node and view the container's resource usage details with docker container stats:

Viewed with top on the node where the pod runs, cpu usage is capped at 50%.
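No monitoring screenshot survives here, so as a sketch, the per-container check on the node would be (the container id is the one obtained from docker container list):

[root@node-2 ~]# docker stats --no-stream <stress-container-id>
# the CPU % column should hover around 50% of one core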

The above verifies the conclusion: we told the stress container to burn one full core, but limits.cpu capped the available cpu at 500m. The test shows the pod is strictly limited to 50% both inside the container and on the host (the node has a single cpu; on a two-cpu node the same cap would show as 25% of total host capacity).
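The arithmetic behind that cap, restated from the inspect output in section 1.2:

# limits.cpu = 500m  ->  CpuQuota / CpuPeriod = 50000 / 100000 = 0.5, i.e. 50% of one core
# 1-cpu node: host cpu utilization tops out at 50%
# 2-cpu node: the same 0.5 core is only 25% of total host capacity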

1.4 memory resource testing

1. Verify the effective scope of requests.memory and limits.memory with the stress image. limits.memory defines the amount of memory the container may use; when usage exceeds it, the container is OOM killed. Define a test container as follows: the memory limit is 512Mi, and the stress image's --vm-bytes sets the generated memory load to 256Mi.

[root@node-1 demo]# cat memory-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "256m", "--vm-hang", "1"]

2. Apply the yaml file to generate the pod:

[root@node-1 demo]# kubectl apply -f memory-demo.yaml
pod/memory-stress-demo created
[root@node-1 demo]# kubectl get pods memory-stress-demo -o wide
NAME                 READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
memory-stress-demo   1/1     Running   0          41s   10.244.1.19   node-2   <none>           <none>

3. Check the allocation of resources

[root@node-1 demo]# kubectl describe pods memory-stress-demo
Name:         memory-stress-demo
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 15:13:06 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/description":"stress demo for memory limits"},"name":"memory-str...
              kubernetes.io/description: stress demo for memory limits
Status:       Running
IP:           10.244.1.16
Containers:
  memory-stress-limits:
    Container ID:  docker://c7408329cffab2f10dd860e50df87bd8671e65a0f8abb4dae96d059c0cb6bb2d
    Image:         polinux/stress
    Image ID:      docker-pullable://polinux/stress@sha256:6d1825288ddb6b3cec8d3ac8a488c8ec2449334512ecb938483fc2b25cbbdb9a
    Port:          <none>
    Host Port:     <none>
    Command:
      stress
    Args:
      --vm
      1
      --vm-bytes
      256Mi
      --vm-hang
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Sep 2019 15:14:08 +0800
      Finished:     Sat, 28 Sep 2019 15:14:08 +0800
    Ready:          False
    Restart Count:  3
    Limits:         # memory limit
      memory:  512Mi
    Requests:       # memory request
      memory:  128Mi
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)

4. Check the container's memory usage: the load allocates 256Mi of memory and the maximum available is 512Mi, i.e. 50% utilization. Since this does not exceed the limit, the container runs normally.
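The limit can also be read from inside the running container (a sketch assuming cgroup v1, which this docker 18.x / CentOS 7 setup uses; the path differs under cgroup v2):

kubectl exec memory-stress-demo -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# expected: 536870912 (i.e. 512Mi)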

5. What happens when memory used inside the container exceeds the limit? If we set --vm-bytes to 513m, the container tries to run, is OOM killed once it exceeds the memory limit, and kubelet keeps trying to restart it, so the RESTARTS count keeps increasing.
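As shown below, the pod status turns to OOMKilled. A compact way to read the kill reason straight from the pod status (a sketch using kubectl's jsonpath output):

kubectl get pod memory-stress-demo \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# expected: OOMKilled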

[root@node-1 demo]# cat memory-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "520m", "--vm-hang", "1"]   # the container now tries to use 520m of memory

Check the container status: it is OOMKilled, and RESTARTS keeps increasing as the container is restarted again and again:

[root@node-1 demo]# kubectl get pods memory-stress-demo
NAME                 READY   STATUS      RESTARTS   AGE
memory-stress-demo   0/1     OOMKilled   3          60s

2. Pod quality of service

Quality of service (QoS) is an important factor in pod scheduling and eviction. Different QoS classes provide different service quality and correspond to different priorities. There are three classes:

BestEffort: best-effort allocation; the default QoS when no resources are specified in the pod; lowest priority.
Burstable: resources may fluctuate; at least the requests must be allocated; the most common QoS class.
Guaranteed: resources are fully guaranteed; requests and limits are defined and equal; highest priority.

2.1 BestEffort best effort

1. If no resources are defined in a pod, the default QoS policy is BestEffort, which has the lowest priority. When resource pressure forces evictions, BestEffort pods are evicted first. Define a BestEffort pod as follows:

[root@node-1 demo]# cat nginx-qos-besteffort.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-besteffort
  labels:
    name: nginx-qos-besteffort
spec:
  containers:
  - name: nginx-qos-besteffort
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources: {}

2. Create the pod and view its QoS policy; qosClass is BestEffort:

[root@node-1 demo]# kubectl apply -f nginx-qos-besteffort.yaml
pod/nginx-qos-besteffort created

View the Qos policy:

[root@node-1 demo]# kubectl get pods nginx-qos-besteffort -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-besteffort"},"name":"nginx-qos-besteffort","namespace":"default"},"spec":{"containers":[{"image":"nginx:1.7.9","imagePullPolicy":"IfNotPresent","name":"nginx-qos-besteffort","ports":[{"containerPort":80,"name":"nginx-port-80","protocol":"TCP"}],"resources":{}}]}}
  creationTimestamp: "2019-09-28T11:12:03Z"
  labels:
    name: nginx-qos-besteffort
  name: nginx-qos-besteffort
  namespace: default
  resourceVersion: "1802411"
  selfLink: /api/v1/namespaces/default/pods/nginx-qos-besteffort
  uid: 56e4a2d5-8645-485d-9362-fe76aad76e74
spec:
  containers:
  - image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    name: nginx-qos-besteffort
    ports:
    - containerPort: 80
      name: nginx-port-80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
...omitted...
status:
  hostIP: 10.254.100.102
  phase: Running
  podIP: 10.244.1.21
  qosClass: BestEffort   # Qos policy
  startTime: "2019-09-28T11:12:03Z"
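To check the QoS class of every pod at a glance without dumping full yaml, a sketch using kubectl's custom-columns output:

kubectl get pods -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass
# nginx-qos-besteffort should show BestEffort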

3. Delete the test Pod

[root@node-1 demo]# kubectl delete pods nginx-qos-besteffort
pod "nginx-qos-besteffort" deleted

2.2 Burstable can fluctuate

1. The Burstable quality of service is second only to Guaranteed: at least one container defines requests, and the requests are smaller than the limits.

[root@node-1 demo]# cat nginx-qos-burstable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-burstable
  labels:
    name: nginx-qos-burstable
spec:
  containers:
  - name: nginx-qos-burstable
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi

2. Apply the yaml file to generate the pod and view its QoS type:

[root@node-1 demo]# kubectl apply -f nginx-qos-burstable.yaml
pod/nginx-qos-burstable created

View the Qos type:

[root@node-1 demo]# kubectl describe pods nginx-qos-burstable
Name:         nginx-qos-burstable
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 19:27:37 +0800
Labels:       name=nginx-qos-burstable
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-burstable"},"name":"nginx-qos-burstable","namespa...
Status:       Running
IP:           10.244.1.22
Containers:
  nginx-qos-burstable:
    Container ID:   docker://d1324b3953ba6e572bfc63244d4040fee047ed70138b5a4bad033899e818562f
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 19:27:39 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  256Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-5qwmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qwmc
    Optional:    false
QoS Class:       Burstable   # quality of service: can fluctuate
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---  ----               -------
  Normal  Scheduled  95s  default-scheduler  Successfully assigned default/nginx-qos-burstable to node-2
  Normal  Pulled     94s  kubelet, node-2    Container image "nginx:1.7.9" already present on machine
  Normal  Created    94s  kubelet, node-2    Created container nginx-qos-burstable
  Normal  Started    93s  kubelet, node-2    Started container nginx-qos-burstable

2.3 Guaranteed is fully guaranteed

1. For Guaranteed, both the cpu and memory defined in resources must include requests and limits, and the requests and limits values must be equal; this class has the highest priority and is the last to be touched during scheduling pressure and eviction. Below, an nginx-qos-guaranteed container is defined with requests.cpu equal to limits.cpu and requests.memory equal to limits.memory.

[root@node-1 demo]# cat nginx-qos-guaranteed.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-guaranteed
  labels:
    name: nginx-qos-guaranteed
spec:
  containers:
  - name: nginx-qos-guaranteed
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 256Mi

2. Use the yaml file to generate the pod and confirm that its QoS class is Guaranteed:

[root@node-1 demo]# kubectl apply -f nginx-qos-guaranteed.yaml
pod/nginx-qos-guaranteed created
[root@node-1 demo]# kubectl describe pods nginx-qos-guaranteed
Name:         nginx-qos-guaranteed
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 19:37:15 +0800
Labels:       name=nginx-qos-guaranteed
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-guaranteed"},"name":"nginx-qos-guaranteed","names...
Status:       Running
IP:           10.244.1.23
Containers:
  nginx-qos-guaranteed:
    Container ID:   docker://cf533e0e331f49db4e9effb0fbb9249834721f8dba369d281c8047542b9f032c
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 19:37:16 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  256Mi
    Requests:
      cpu:     200m
      memory:  256Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-5qwmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qwmc
    Optional:    false
QoS Class:       Guaranteed   # quality of service: fully guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---  ----               -------
  Normal  Scheduled  25s  default-scheduler  Successfully assigned default/nginx-qos-guaranteed to node-2
  Normal  Pulled     24s  kubelet, node-2    Container image "nginx:1.7.9" already present on machine
  Normal  Created    24s  kubelet, node-2    Created container nginx-qos-guaranteed
  Normal  Started    24s  kubelet, node-2    Started container nginx-qos-guaranteed

Write at the end

This chapter is the sixth article in the kubernetes tutorial series. Having introduced resource allocation and quality of service (QoS), here are some suggestions for using resources on nodes:

The ratio between requests and limits is recommended not to exceed 1:2, to avoid over-committed resources causing contention and OOM.
Pods define no resources by default; it is recommended to define a LimitRange for the namespace so that every pod gets resources allocated (see the sketch below).
To prevent a node from being over-committed until it hangs or OOMs, it is recommended to configure reserved and eviction resources on the node, e.g. reserved resources --system-reserved=cpu=200m,memory=1G and eviction thresholds --eviction-hard=memory.available.
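As a reference for the LimitRange suggestion above, a minimal sketch (the object name and default values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits        # hypothetical name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:           # applied when a container omits requests
      cpu: 100m
      memory: 128Mi
    default:                  # applied when a container omits limits
      cpu: 200m
      memory: 256Mi

With this in place, pods created without a resources stanza fall into the Burstable class instead of BestEffort.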
