Docker uses cgroups to control the resources a container may use, including CPU, memory, and disk, which covers the most common kinds of resource quota and usage control.
Introduction to cgroup
cgroup, short for Control Groups, is a mechanism provided by the Linux kernel to limit, account for, and isolate the physical resources (such as CPU, memory, and disk I/O) used by groups of processes. It is used by LXC, Docker, and many other projects to control process resources. cgroup itself is the infrastructure that provides the grouping function and its management interface; concrete resource-management features such as I/O or memory allocation control are built on top of it. These features are called cgroup subsystems and are implemented by the following subsystems:
blkio: sets input/output limits for each block device, such as disks, CDs, and USB devices.
cpu: uses the scheduler to provide cgroup tasks with access to the CPU.
cpuacct: generates CPU resource usage reports for cgroup tasks.
cpuset: on multi-core CPUs, assigns separate CPUs and memory nodes to cgroup tasks.
devices: allows or denies cgroup tasks access to devices.
freezer: pauses and resumes cgroup tasks.
memory: sets memory limits for each cgroup and generates memory usage reports.
net_cls: tags network packets so the traffic controller can identify which cgroup they came from.
ns: the namespace subsystem.
perf_event: adds the ability to monitor and trace each group, that is, to monitor all threads belonging to a particular group as well as threads running on a particular CPU.
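On a host where cgroups are mounted in the usual location, you can list the subsystems available with a quick check (the exact set varies by distribution and kernel):
ls /sys/fs/cgroup
# typical output includes: blkio cpu cpuacct cpuset devices freezer memory perf_event ...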
At present, Docker uses only some of these subsystems to control resource quotas and usage.
You can use the stress tool to test CPU and memory. Use the Dockerfile below to build an Ubuntu-based image containing the stress tool.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y stress
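To build this image with the ubuntu:stress tag used in the examples below (the tag name is simply this article's convention):
docker build -t ubuntu:stress .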
CPU resource quota control
CPU share control
Docker provides the --cpu-shares parameter, which specifies the CPU share value the container uses; it is set when the container is created. Usage example:
Create a container with the command docker run -tid --cpu-shares 100 ubuntu:stress, and the CPU share configuration of the resulting cgroup can be found in the following file:
root@ubuntu:~# cat /sys/fs/cgroup/cpu/docker/<container_id>/cpu.shares
100
The cpu-shares value does not guarantee one vCPU or any particular amount of CPU in GHz; it is only a flexible weighting value.
By default, each Docker container has a CPU share of 1024. The share of a single container is meaningless on its own; CPU weighting only takes effect when multiple containers run at the same time. For example, if the CPU shares of containers A and B are 1000 and 500 respectively, then when the CPU allocates time slices, container A has twice the chance of getting a time slice as container B. However, the actual allocation depends on the state of the host and of the other containers at that moment, and there is no guarantee that container A will actually get CPU time slices. For instance, if container A's processes stay idle, container B can receive more CPU time than container A. In the extreme case where only one container is running on the host, it can monopolize the host's CPU resources even if its share is only 50.
cgroups only take effect when the resources containers consume become scarce, that is, when it becomes necessary to limit what a container uses. Therefore, you cannot determine how much CPU a container will be allocated from its CPU share alone; the outcome depends on the CPU shares of the other containers running at the same time and on what the processes in each container are doing.
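As a quick check of the default value described above (a minimal sketch; <container_id> is a placeholder for the real ID):
docker run -tid --name share-default ubuntu:stress /bin/bash
cat /sys/fs/cgroup/cpu/docker/<container_id>/cpu.shares
# prints 1024 when --cpu-shares is not specified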
CPU cycle control
Docker provides two parameters, --cpu-period and --cpu-quota, to control the CPU clock cycles allocated to a container. --cpu-period specifies how often the container's CPU usage is reallocated, while --cpu-quota specifies the maximum amount of time the container may run within that period. Unlike --cpu-shares, this configuration is an absolute value with no flexibility: the container's CPU usage will never exceed the configured limit.
cpu-period and cpu-quota are expressed in microseconds (μs). The minimum value of cpu-period is 1000 μs, the maximum is 1 second (10^6 μs), and the default is 0.1 second (100000 μs). The default value of cpu-quota is -1, which means no control.
For example, if a container process needs to use a single CPU for 0.2 seconds out of every second, set cpu-period to 1000000 (1 second) and cpu-quota to 200000 (0.2 seconds). On a multicore machine, if you want to allow the container process to fully occupy two CPUs, set cpu-period to 100000 (0.1 second) and cpu-quota to 200000 (0.2 seconds).
Examples of use:
Create a container with the command docker run -tid --cpu-period 100000 --cpu-quota 200000 ubuntu, and the CPU period configuration of the resulting cgroup can be found in the following files:
root@ubuntu:~# cat /sys/fs/cgroup/cpu/docker/<container_id>/cpu.cfs_period_us
100000
root@ubuntu:~# cat /sys/fs/cgroup/cpu/docker/<container_id>/cpu.cfs_quota_us
200000
For a detailed introduction to cpu-shares, cpu-period, and cpu-quota, see the CPU chapter of the Red Hat Resource Management Guide.
CPU core control
On servers with multicore CPUs, Docker can also restrict which CPU cores and memory nodes a container runs on, using the --cpuset-cpus and --cpuset-mems parameters. This is especially useful on servers with a NUMA topology (multiple CPUs, multiple memory nodes) to optimize containers that need high-performance computing. If the server has only one memory node, --cpuset-mems has little effect.
Examples of use:
The command docker run -tid --name cpu1 --cpuset-cpus 0-2 ubuntu means the created container can only use cores 0, 1, and 2. The CPU core configuration of the resulting cgroup is as follows:
root@ubuntu:~# cat /sys/fs/cgroup/cpuset/docker/<container_id>/cpuset.cpus
0-2
With docker exec <container_id> taskset -c -p 1 (the first process inside a container usually has PID 1), you can see the binding between the container's process and the CPU cores, confirming that the container is bound to the intended cores.
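For instance, run against the cpu1 container created above (the output is taskset's standard affinity report; the exact wording may vary with the util-linux version):
docker exec cpu1 taskset -c -p 1
# pid 1's current affinity list: 0-2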
Mixed use of CPU quota control parameters
Among the parameters above, cpu-shares only comes into play when containers compete for time slices on the same core. If cpuset-cpus pins container A to core 0 and container B to core 1, and only these two containers use those cores on the host, each takes all of its core's resources and cpu-shares has no visible effect.
The cpu-period and cpu-quota parameters are generally used together. On a single-core machine, or when cpuset-cpus forces a container onto one core, setting cpu-quota higher than cpu-period will not let the container use more CPU.
cpuset-cpus and cpuset-mems are only meaningful on servers with multiple cores and multiple memory nodes, and the values must match the actual physical configuration; otherwise resource control will not work as intended.
On a system with multiple CPU cores, the effect of the CPU quota parameters is easiest to test by pinning containers to a single core with cpuset-cpus.
Try the following commands to create two containers for testing:
docker run -tid --name cpu2 --cpuset-cpus 3 --cpu-shares 512 ubuntu:stress stress -c 10
docker run -tid --name cpu3 --cpuset-cpus 3 --cpu-shares 1024 ubuntu:stress stress -c 10
The ubuntu:stress image built above contains the stress tool for load-testing CPU and memory. In both containers, the command stress -c 10 loads the system by spawning 10 worker processes, each repeatedly computing the square root of random numbers produced by rand() until resources are exhausted.
Observing CPU usage on the host (the original screenshots are not reproduced here), the utilization of core 3 is close to 100%, and the two batches of stress processes show a CPU utilization ratio of roughly 2:1, matching the 1024:512 share settings of containers cpu3 and cpu2.
Running top inside each container likewise shows the relative resource usage between the containers and confirms that the CPU core binding works.
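You can also compare the same numbers from the host without entering the containers (docker stats has been available since Docker 1.5, so it exists in the 1.10.3 environment described here):
docker stats cpu2 cpu3
# the CPU % column should show roughly a 1:2 ratio between cpu2 and cpu3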
Note: if you use a tool such as nsenter to enter the container and test it with stress-c 10, you can find that the limitation of cpuset-cpus can be broken, thus allowing the stress test process to use all the host CPU kernels. This is because nsenter mounts directly into the container's namespace, breaking the cgroup control in the namespace.
Memory quota control
As with CPU, Docker provides several parameters to control a container's memory quota, covering aspects such as the container's swap size and the amount of available memory. The main parameters are as follows; a combined usage sketch appears after the list:
--memory-swappiness: controls the process's tendency to swap physical memory out to the swap partition; the default value is 60. The lower the value, the more physical memory is preferred. Values range from 0 to 100: 100 means use swap as much as possible, while 0 disables the container's swap feature (unlike on the host, where a swappiness of 0 does not guarantee that swap is never used).
--kernel-memory: kernel memory, which is never swapped out. Changing it is generally not recommended; refer to the official Docker documentation.
--memory: sets the maximum amount of memory the container may use. The default unit is bytes; strings with units such as K, M, or G can be used.
--memory-reservation: enables elastic memory sharing. When the host has sufficient resources the container may use as much memory as it likes; when memory contention or low memory is detected, the container's memory is forced down to the size specified by memory-reservation. According to the official documentation, without this option some containers may occupy a large amount of memory for a long time, hurting performance.
--memory-swap: equals the sum of the memory and swap sizes. A value of -1 means the swap size is unlimited. The default unit is bytes; strings with units such as K, M, or G can be used. If --memory-swap is set to a value smaller than --memory, the default is used, which is twice the --memory value.
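A minimal sketch combining several of these flags (the container name and values are illustrative only):
docker run -tid --name mem-demo \
  --memory 256m \
  --memory-reservation 128m \
  --memory-swappiness 0 \
  ubuntu:stress /bin/bash
# hard cap of 256MB, soft target of 128MB under contention, swap disabled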
By default, the container can use all the free memory on the host.
Similar to the CPU cgroups configuration, Docker automatically creates the corresponding cgroup configuration files for the container under /sys/fs/cgroup/memory/docker/, such as memory.limit_in_bytes and memory.memsw.limit_in_bytes shown below.
These files correspond one-to-one to the Docker configuration options; see the memory section of the Red Hat Resource_Management_Guide for what each of them does.
Example of using memory quota control
Set the memory limit of the container, as shown in the following reference command:
docker run -tid --name mem1 --memory 128m ubuntu:stress /bin/bash
By default, in addition to the memory size specified by --memory, Docker allocates the container an equal amount of swap; that is, the container created by the command above can actually use up to 256MB, not 128MB. To customize the swap size, use the --memory-swap parameter as well.
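For example, a sketch of a custom swap size (the name and values are illustrative):
docker run -tid --name mem2 --memory 128m --memory-swap 192m ubuntu:stress /bin/bash
# 128MB of memory plus 64MB of swap, since --memory-swap is the total of the two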
For the container created by the command above, the cgroups configuration files show that the container's memory size is 128MB (128 × 1024 × 1024 = 134217728 B), and memory plus swap adds up to 256MB (256 × 1024 × 1024 = 268435456 B).
cat /sys/fs/cgroup/memory/docker/<container_id>/memory.limit_in_bytes
134217728
cat /sys/fs/cgroup/memory/docker/<container_id>/memory.memsw.limit_in_bytes
268435456
Note: when executing the above command, the command line may output the following warning:
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
This is because cgroup accounting of swap limits is not enabled by default on the host. You can enable it by modifying the GRUB boot parameters as described in the official Docker documentation.
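Following the official documentation, on Ubuntu this means adding the kernel parameters below to /etc/default/grub, then running update-grub and rebooting:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"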
Inside the container, you can stress-test its memory and confirm the limit with the following stress commands.
stress --vm 1 --vm-bytes 256m --vm-hang 0 &
stress --vm 1 --vm-bytes 250m --vm-hang 0 &
You will find that the 256MB stress test is killed by the OOM killer because it exceeds the memory limit (128MB of memory + 128MB of swap), while the 250MB stress test runs normally; docker stats shows the container's memory fully used.
Disk IO quota control
Compared with the CPU and memory quota controls, Docker's disk IO control is relatively immature, and most of the options must reference specific host devices. The main parameters are as follows:
--device-read-bps: limits the read rate (bytes per second) on a device; units may be kb, mb, or gb.
--device-read-iops: limits the read rate of a specified device in read IO operations per second.
--device-write-bps: limits the write rate (bytes per second) on a device; units may be kb, mb, or gb.
--device-write-iops: limits the write rate of a specified device in write IO operations per second.
--blkio-weight: the container's default disk IO weight. Valid values range from 10 to 1000.
--blkio-weight-device: IO weight control for a specific device, in the format DEVICE_NAME:WEIGHT.
Disk IO quota control example
Blkio-weight
For --blkio-weight to take effect, the IO scheduler must be CFQ. You can check it as follows:
root@ubuntu:~# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
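The scheduler shown in brackets is the active one, so in this output deadline, not cfq, is in effect. You can switch to CFQ for the test with a host-side write through sysfs (run as root):
echo cfq > /sys/block/sda/queue/scheduler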
Create two containers with different --blkio-weight values using the following commands:
docker run -ti --rm --blkio-weight 100 ubuntu:stress
docker run -ti --rm --blkio-weight 1000 ubuntu:stress
Execute the following dd command in both containers at the same time to test:
time dd if=/dev/zero of=test.out bs=1M count=1024 oflag=direct
The final output (screenshot not reproduced here) did not show the expected result in my test environment. The official Docker issue "blkio-weight doesn't take effect in docker Docker version 1.8.1" (#16173) confirms that this problem exists in some environments, but Docker has not provided a solution.
Device-write-bps
Use the following command to create a container, then run dd to verify the write speed limit.
docker run -tid --name disk1 --device-write-bps /dev/sda:1mb ubuntu:stress
Verifying the write speed with dd (screenshot not reproduced here), you can see that the container's disk write speed is successfully limited to 1MB/s. Other disk IO limit parameters, such as device-read-bps, can be verified in a similar way.
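For example, a read-rate sketch along the same lines (the container name and rate are illustrative):
docker run -tid --name disk2 --device-read-bps /dev/sda:1mb ubuntu:stress
# inside the container, read a file backed by /dev/sda with dd and iflag=direct,
# and check that the observed rate stays near 1MB/s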
Container space size limit
When Docker uses devicemapper as its storage driver, each container and image has a default maximum size of 10G. To adjust it, specify dm.basesize in the daemon startup parameters. Note, however, that changing this value requires restarting the Docker daemon and will cause all local images and containers on the host to be wiped.
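A sketch of the daemon invocation for the Docker 1.10-era CLI used in this article (on newer versions the binary is dockerd; the 20G value is illustrative):
docker daemon --storage-opt dm.basesize=20G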
There is no such limitation when using other storage drivers such as aufs or overlay.
Note: the test environment for all the screenshots referenced above is an Ubuntu 14.04.4 host running Docker 1.10.3.