By default, a container has no resource restrictions and can use all the resources the kernel can schedule. Docker provides parameters to control the memory, CPU, and block IO available to a container when it is started.
In practice, only memory and CPU can really be controlled.
Memory
Memory is an incompressible resource.
OOME
In Linux systems, if the kernel detects that the host does not have enough memory to run important system functions, it throws an OOME (Out Of Memory Exception) and kills processes to free memory.
Once an OOME occurs, any process can be killed, including the docker daemon. For this reason, Docker lowers the OOM priority of the docker daemon so that it is not killed, but the priority of containers is not adjusted. When memory is insufficient, the kernel scores every process with its own algorithm and kills the process with the highest score to free memory.
You can pass the --oom-score-adj parameter to docker run (default 0). It adjusts the container's OOM kill priority by influencing its score: the higher the value, the more easily the container is killed. Note that the parameter only adjusts the final score; the kernel still kills the process with the highest score, so a process given a small adjustment may still end up with the highest score and be killed first.
You can specify --oom-kill-disable=true so that certain important containers are never killed by the OOM killer.
--oom-kill-disable: disable the OOM killer for the container
--oom-score-adj int: tune the host's OOM preferences (-1000 to 1000)

Memory limit options:
-m, --memory: memory limit, written as a number plus a unit (b, k, m, or g); the minimum is 4M
--memory-swap: limit on memory plus swap combined, same format as above; must be larger than the -m setting
--memory-swappiness: by default the host may swap out the anonymous pages used by the container; a value from 0 to 100 sets the percentage allowed to be swapped
--memory-reservation: a soft limit on memory usage; if Docker detects that the host is running out of memory, an OOM is triggered; the value must be less than the one set with --memory
--kernel-memory: the amount of kernel memory the container may use; the minimum is 4m
--oom-kill-disable: whether to kill the container on OOM; only set this when -m is also set, otherwise the container can exhaust host memory and cause host applications to be killed
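As a quick sketch of how these flags combine (the busybox image and the specific values here are illustrative, not from the original demo):

# Illustrative only: 256MB hard limit, 512MB memory+swap, 128MB soft limit
docker run -it --rm -m 256m --memory-swap 512m --memory-reservation 128m busybox sh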
--memory-swap parameter
This parameter only takes effect in combination with -m. The description in the table above is simplified and covers only the common usage.
General usage: set it larger than -m; it limits the total of memory plus swap.
Disable swap: set it equal to -m, so that --memory and --memory-swap impose the same limit and the available swap is 0, which effectively disables swap.
Default: set it to 0 or leave it unset; if swap is enabled on the Docker host, the total memory plus swap available to the container is twice the memory limit.
Unlimited: set it to -1; if swap is enabled on the Docker host, the container can use all of the host's swap.
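For example, a minimal sketch of the "disable swap" and "unlimited" cases (the busybox image and the 256m value are illustrative):

# Disable swap: --memory-swap equals -m, so the available swap is 0
docker run -it --rm -m 256m --memory-swap 256m busybox sh
# Unlimited: -1 lets the container use all of the host's swap
docker run -it --rm -m 256m --memory-swap -1 busybox sh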
Running the free command inside the container shows the host's swap space; it does not reflect the limits above and has no reference value.
CPU
CPU is a compressible resource.
By default, each container can use all of the host's CPU resources. Most systems schedule with CFS (the Completely Fair Scheduler), which schedules every running process fairly. Processes fall into two categories: CPU-intensive (low priority) and IO-intensive (high priority). The kernel monitors processes in real time, and when a process holds the CPU for too long, the kernel adjusts its priority.
Realtime scheduling is also supported since Docker 1.13.
There are three CPU resource allocation strategies as follows:
Option descriptions:
-c, --cpu-shares int: CPU shares. CPU resources are shared by a group of containers in proportion to their shares. When a container is idle, its CPU can be used by heavily loaded containers (proportional compression); when it needs CPU again, the resources are returned to it.
--cpus decimal: specifies the number of CPU cores, directly limiting the CPU resources available to the container.
--cpuset-cpus string: specifies which CPU cores the container may run on (CPU binding); cores are numbered 0, 1, 2, 3, and so on.
CPU Share
Docker sets a container's CPU share with -c, --cpu-shares; the value is an integer.
Docker allows the user to set a number for each container representing its CPU share; by default every container's share is 1024. When multiple containers run on the host, each container's proportion of CPU time equals its share divided by the total of all shares. For example, if two containers on the host constantly use CPU (ignoring other host processes for simplicity) and both have a share of 1024, each gets 50% of the CPU. If one container's share is changed to 512, the two containers get about 67% and 33% respectively. If the container with share 1024 is then removed, the remaining container's CPU utilization rises to 100%.
To sum up, docker dynamically adjusts each container's share of CPU time according to the containers and processes running on the host. The advantage is that the CPU stays as busy as possible, making full use of CPU resources while keeping all containers relatively fair; the disadvantage is that you cannot pin a container's CPU usage to a definite value.
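To see the share value a running container was given, you can read it from the host's cgroup filesystem. A minimal sketch, assuming cgroup v1 (on cgroup v2 hosts the corresponding file is cpu.weight under a different path), with <container-id> standing for the full container ID:

# Assuming cgroup v1; prints 1024 for a container started without -c
cat /sys/fs/cgroup/cpu/docker/<container-id>/cpu.shares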
Number of CPU cores
Since version 1.13, docker provides the --cpus parameter to limit how many CPU cores a container can use. This allows container CPU limits to be set more precisely and is easier to understand, so it is more commonly used.
--cpus takes a floating-point number representing the maximum number of cores the container may use, accurate to two decimal places; that is, the container can be limited to as little as 0.01 of a core. For example, we can limit a container to 1.5 cores.
If the --cpus value is greater than the number of CPU cores on the host, docker reports an error immediately.
If multiple containers each set --cpus and their sum exceeds the host's core count, the containers do not fail or exit; they simply compete for CPU. How much CPU each actually gets depends on the host's load and on each container's CPU share value. In other words, --cpus only guarantees the maximum CPU a container can use when CPU resources are sufficient; Docker cannot guarantee the container gets that much CPU in every case (that would be impossible).
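Under the hood, --cpus is shorthand for a CFS period/quota pair. As a rough sketch, with Docker's default period of 100000 microseconds, the following two commands should be equivalent:

docker run -it --rm --cpus 1.5 lorel/docker-stress-ng --cpu 4
# equivalent: quota/period = 150000/100000 = 1.5 cores
docker run -it --rm --cpu-period 100000 --cpu-quota 150000 lorel/docker-stress-ng --cpu 4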
Specifying CPU cores
When scheduling, Docker can restrict which CPUs a container runs on. Use the --cpuset-cpus parameter to make the container run only on one or more specific cores.
--cpuset-cpus and --cpus can be combined with -c, --cpu-shares: the container then runs only on the specified cores, and usage on those cores is allocated according to the configuration, as in the sketch below.
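A hedged sketch of combining the two (the names pin1/pin2 and the share values are illustrative):

# Both containers are pinned to core 0 and split it roughly 2:1 by shares
docker run -d --name pin1 --cpuset-cpus 0 --cpu-shares 1024 lorel/docker-stress-ng --cpu 1
docker run -d --name pin2 --cpuset-cpus 0 --cpu-shares 512 lorel/docker-stress-ng --cpu 1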
Limiting which cores a container runs on is usually not a good idea, because it requires knowing in advance how many cores the host has and is very inflexible. Unless there is a special need, it is generally not recommended in production.
Other CPU options:
--cpu-period int: the period of CFS scheduling, generally used together with --cpu-quota. It is expressed in microseconds and defaults to 100000 (100 ms); the default is generally kept. On version 1.13 or later, the --cpus flag is recommended instead.
--cpu-quota int: the container's CPU time quota per period under CFS scheduling, i.e. the CPU time (in microseconds) available in each --cpu-period; the effective amount of CPU is cpu-quota/cpu-period. On version 1.13 or later, the --cpus flag is recommended instead.
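For instance, a sketch of limiting a container to half a core with the raw CFS flags (on 1.13+, --cpus 0.5 is the recommended equivalent):

# 50000/100000 = 0.5 of a core per scheduling period
docker run -it --rm --cpu-period 100000 --cpu-quota 50000 lorel/docker-stress-ng --cpu 4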
Stress testing
Demonstration of resource restrictions
Query the resources on the host
The lscpu and free commands are used here:
[root@Docker ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Model name:            Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
Stepping:              3
CPU MHz:               3999.996
BogoMIPS:              7999.99
Hypervisor vendor:     Microsoft
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm ssbd ibrs ibpb stibp fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt spec_ctrl intel_stibp flush_l1d
[root@Docker ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           936M        260M        340M        6.7M        334M        592M
Swap:          1.6G          0B        1.6G
[root@Docker ~]#
Download the image
You can search for stress (stress-testing) images on Docker Hub.
Download the image, run it, and view its help:
[root@Docker ~]# docker pull lorel/docker-stress-ng
[root@Docker ~]# docker run -it --rm lorel/docker-stress-ng
stress-ng, version 0.03.11
Usage: stress-ng [OPTION [ARG]]
 --h, --help       show help
...(output omitted)...
Example: stress-ng --cpu 8 --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 10s
Note: Sizes can be suffixed with B, K, M, G and times with s, m, h, d, y
Main command parameters:
--h, --help: show help (this is the container's default command)
-c N, --cpu N: start N worker processes to stress the CPU
-m N, --vm N: start N worker processes to stress memory
--vm-bytes N: how much memory each vm worker uses (default 256MB)
Test memory limit
Check the description of the memory-related parameters in the lorel/docker-stress-ng help:
-m N, --vm N       start N workers spinning on anonymous mmap
--vm-bytes N       allocate N bytes per vm worker (default 256MB)
By default each worker uses 256MB of memory; keep that default. Then specify --vm to start 2 workers, and limit the container's memory to 256MB. Start the container:
[root@Docker ~]# docker run --name stress1 -it --rm -m 256m lorel/docker-stress-ng --vm 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm
This terminal is now occupied; in another terminal, use the docker top command to view the processes running inside the container:
[root@Docker ~]# docker top stress1
UID     PID     PPID    C    STIME    TTY      TIME        CMD
root    5922    5907    0    21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
root    6044    5922    0    21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
root    6045    5922    0    21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
root    6086    6044    13   21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
root    6097    6045    47   21:06    pts/0    00:00:00    /usr/bin/stress-ng --vm 2
[root@Docker ~]#
Look at the PID and PPID columns: there are five processes in total; the parent process (5922) created two child processes (6044 and 6045), and each of those created one more process.
You can also use the command docker stats to view the real-time usage of the container's resources:
$ docker stats
CONTAINER ID   NAME      CPU %    MEM USAGE / LIMIT   MEM %     NET I/O   BLOCK I/O         PIDS
626f38c4a4ad   stress1   18.23%   256MiB / 256MiB     100.00%   0B / 0B   17.7MB / 9.42GB   5
The output refreshes in real time.
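You can also verify the limit from the host side by reading the cgroup file directly. A minimal sketch, assuming cgroup v1 (cgroup v2 uses memory.max instead):

# Assuming cgroup v1; 268435456 bytes = 256MB
cat /sys/fs/cgroup/memory/docker/$(docker inspect -f '{{.Id}}' stress1)/memory.limit_in_bytes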
Test CPU limits
Limit the container to at most 2 cores, then start 8 CPU stress workers at the same time, using the following command:
docker run -it --rm --cpus 2 lorel/docker-stress-ng --cpu 8
Limit the container to 0.5 cores and start 4 CPU stress workers:
[root@Docker ~]# docker run --name stress2 -it --rm --cpus 0.5 lorel/docker-stress-ng --cpu 4
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu
In another terminal, use the docker top command to view the processes running inside the container:
[root@Docker ~]# docker top stress2
UID     PID     PPID    C    STIME    TTY      TIME        CMD
root    7198    7184    0    22:35    pts/0    00:00:00    /usr/bin/stress-ng --cpu 4
root    7230    7198    12   22:35    pts/0    00:00:02    /usr/bin/stress-ng --cpu 4
root    7231    7198    12   22:35    pts/0    00:00:02    /usr/bin/stress-ng --cpu 4
root    7232    7198    12   22:35    pts/0    00:00:02    /usr/bin/stress-ng --cpu 4
root    7233    7198    12   22:35    pts/0    00:00:02    /usr/bin/stress-ng --cpu 4
[root@Docker ~]#
One parent process (7198) created four child processes.
Then use the docker stats command to view resource usage:
$ docker stats
CONTAINER ID   NAME      CPU %    MEM USAGE / LIMIT     MEM %   NET I/O     BLOCK I/O   PIDS
14a341dd23d1   stress2   50.02%   13.75MiB / 908.2MiB   1.51%   656B / 0B   0B / 0B     5
Because the container is limited to 0.5 cores, CPU usage stays around 50% and does not exceed it.
Test CPU Share
Start three containers with different --cpu-shares values; if not specified, the default is 1024:
[root@Docker ~]# docker run --name stress3.1 -itd --rm --cpu-shares 512 lorel/docker-stress-ng --cpu 4
800d756f76ca4cf20af9fa726349f25e29bc57028e3a1cb738906a68a87dcec4
[root@Docker ~]# docker run --name stress3.2 -itd --rm lorel/docker-stress-ng --cpu 4
4b88007191812b239592373f7de837c25f795877d314ae57943b5410074c6049
[root@Docker ~]# docker run --name stress3.3 -itd --rm --cpu-shares 2048 lorel/docker-stress-ng --cpu 4
8f103395b6ac93d337594fdd1db289b6462e01c3a208dcd3788332458ec03b98
[root@Docker ~]#
Check the CPU usage of the three containers:
$ docker stats
CONTAINER ID   NAME        CPU %    MEM USAGE / LIMIT     MEM %   NET I/O     BLOCK I/O   PIDS
800d756f76ca   stress3.1   14.18%   14.53MiB / 908.2MiB   1.60%   656B / 0B   0B / 0B     5
4b8800719181   stress3.2   28.60%   15.78MiB / 908.2MiB   1.74%   656B / 0B   0B / 0B     5
8f103395b6ac   stress3.3   56.84%   15.38MiB / 908.2MiB   1.69%   656B / 0B   0B / 0B     5
The usage ratio is roughly 1:2:4, matching the share ratio 512:1024:2048. With all three containers saturating the CPU, the expected shares are 1/7 ≈ 14.3%, 2/7 ≈ 28.6%, and 4/7 ≈ 57.1%, which is close to the observed 14.18%, 28.60%, and 56.84%, in line with expectations.