2025-04-14 Update From: SLTechnology News&Howtos
In this article, the editor shares how Docker limits the memory available to containers. I hope you will get something out of it after reading. Let's look at it together.
Why limit a container's memory usage?
It is important to keep containers from consuming too much of the host's memory. On a Linux host, once the kernel detects that it cannot allocate enough memory, it raises an OOM (Out Of Memory) condition and starts killing processes to free memory. The bad part is that any process can be targeted by the kernel, including the docker daemon and other important programs. Even more dangerous: if a process that the system depends on is killed, the whole system goes down!

Consider a common scenario: a large number of containers exhaust the host's memory, the OOM condition is triggered, and the kernel immediately begins killing processes to reclaim memory. What if the first process the kernel kills is the docker daemon? The result is that every container stops working, which is unacceptable!
To mitigate this problem, Docker adjusts the OOM priority of the docker daemon. When choosing which process to kill, the kernel scores every process and kills the highest-scoring one first, then the next. Because the docker daemon's OOM priority is lowered (note that the OOM priority of container processes is not adjusted), the daemon's score ends up lower not only than the container processes' scores but also than those of many other processes. This makes the docker daemon process much safer.
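The lowered priority can be inspected directly: Linux exposes each process's adjustment at /proc/&lt;pid&gt;/oom_score_adj, and the daemon's value is set via dockerd's --oom-score-adjust option (the default varies by Docker version). A quick check, assuming dockerd is running on the host:

```shell
# Read the OOM score adjustment of the Docker daemon.
# A negative value means the kernel is less likely to pick
# dockerd when it has to kill a process to free memory.
cat /proc/$(pidof dockerd)/oom_score_adj
```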
We can get a quick view of the OOM scores of all processes on the current system with a script like the following:
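The script itself was lost in extraction; here is a minimal sketch that prints the ten highest-scoring processes, reading the kernel's per-process score from /proc/&lt;pid&gt;/oom_score:

```shell
#!/bin/bash
# List the ten processes the OOM killer would target first.
# The kernel exposes each process's current score at /proc/<pid>/oom_score.
for proc in /proc/[0-9]*; do
    pid=${proc#/proc/}
    score=$(cat "$proc/oom_score" 2>/dev/null) || continue
    cmd=$(tr '\0' ' ' < "$proc/cmdline" 2>/dev/null | head -c 50)
    printf "%6d %7d %s\n" "$score" "$pid" "${cmd:-[kernel thread]}"
done | sort -rn | head -n 10
```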
For example, start a container whose memory is limited to 300m (with no additional swap) and have a process inside it ask for 500m:

$ docker run -it --rm -m 300m --memory-swap=300m u-stress /bin/bash
# stress --vm 1 --vm-bytes 500m

In this demo the container's physical memory is limited to 300m, but the stress process tries to allocate 500m. With no swap available, the process is OOM-killed outright. If there is enough swap, the program can at least keep running.
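For contrast, the same experiment with swap allowed. --memory-swap sets the total of memory plus swap, so 1g here leaves 724m of swap on top of the 300m limit (this sketch reuses the u-stress image from the demo above and requires a running Docker daemon):

```shell
# 300m of RAM, but memory+swap capped at 1g: the extra allocation
# can spill into swap, so stress keeps running instead of being killed.
docker run -it --rm -m 300m --memory-swap=1g u-stress /bin/bash
# inside the container:
stress --vm 1 --vm-bytes 500m
```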
We can forcibly prevent the OOM kill from happening with the --oom-kill-disable option, but I think the OOM kill here is healthy behavior, so why stop it?
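If you nevertheless want to disable it, the flag is passed at docker run time, and it only makes sense together with -m; without a memory limit, a runaway container can stall the whole host while the kernel has no process it is allowed to kill. A sketch, again with the u-stress image:

```shell
# With the OOM killer disabled, a process that exceeds the 300m limit
# blocks waiting for memory rather than being killed.
docker run -it --rm -m 300m --oom-kill-disable u-stress /bin/bash
```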
In addition to limiting the amount of swap available, you can also set how eagerly the container uses swap, just like the host's swappiness setting. A container inherits the host's swappiness by default; to set the value explicitly for a container, use the --memory-swappiness option.
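For example, to make a container avoid swap as much as possible (swappiness ranges from 0 to 100; 0 means swap only under real memory pressure):

```shell
# swappiness=0: the container's anonymous pages are swapped out
# only when memory is exhausted, not proactively.
docker run -it --rm -m 300m --memory-swappiness=0 u-stress /bin/bash
```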
After reading this article, I believe you have some understanding of how Docker limits the memory available to containers. If you want to learn more, please follow the industry information channel. Thank you for reading!
Original link: http://www.cnblogs.com/sparkdev/p/8032330.html