
How Docker limits the memory available to the container


By default, the resources a container can use are unrestricted: it may consume as much of each resource as the host's kernel scheduler allows. In practice, however, you often need to limit the host resources a container can consume. This article describes how to limit the amount of host memory available to a container.

Why limit a container's memory usage?

It is important to keep containers from using too much of the host's memory. On a Linux host, once the kernel detects that there is not enough memory to allocate, it throws an OOME (Out Of Memory Exception) and starts killing processes to free up memory. The bad news is that any process can be targeted by the kernel, including the docker daemon and other important programs. More dangerous still, if a process critical to the running of the system is killed, the whole system may go down. Consider a common scenario: a large number of containers exhaust the host's memory, the OOME triggers, and the kernel immediately begins killing processes to release memory. What if the first process the kernel kills is the docker daemon? The result is that none of the containers work anymore, which is unacceptable.

To address this problem, docker mitigates the risk by adjusting the OOM priority of the docker daemon. When choosing which process to kill, the kernel scores all processes and kills the one with the highest score first, then the next. Because the docker daemon's OOM priority is lowered (note that the OOM priority of container processes is not adjusted), its score ends up lower not only than the container processes' scores but also than those of many other processes. This makes the docker daemon process considerably safer.
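A quick way to check the daemon's adjusted priority on a given host is to read its oom_score_adj from /proc (a minimal sketch, assuming the daemon process is named dockerd; pidof and the /proc interface are standard on Linux):

$ cat /proc/$(pidof dockerd)/oom_score_adj

A negative value here lowers the score the kernel computes for the process, making it a less likely victim of the OOM killer.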

We can also get an intuitive look at the scores of all the processes in the current system with the following script:

#!/bin/bash
for proc in $(find /proc -maxdepth 1 -regex '/proc/[0-9]+'); do
    printf "%2d %5d %s\n" \
        "$(cat $proc/oom_score)" \
        "$(basename $proc)" \
        "$(cat $proc/cmdline | tr '\0' ' ' | head -c 50)"
done 2>/dev/null | sort -nr | head -n 40

This script prints the 40 highest-scoring processes, sorted in descending order:

The first column is the process's score; mysqld ranks first. The entries shown as node server.js are all container processes, which generally rank near the top. The docker daemon process (in the red box in the screenshot) sits very low, even behind sshd.

With this mechanism in place, can you rest easy? No. Docker's official documentation has always stressed that this is only a mitigation, and it offers some suggestions for reducing the risk:

- Understand the application's memory requirements through testing
- Ensure that the host running the containers has sufficient memory
- Limit the amount of memory the container can use
- Configure swap on the host

In short: by capping the memory a container can use, you reduce the various risks that arise when the host runs out of memory.

Stress testing tool stress

To test the memory usage of containers, the author installed the stress-testing tool stress into an ubuntu image and built a new image named u-stress. All containers used in this article are created from the u-stress image (the host running the containers is CentOS7). The following is the Dockerfile for the u-stress image:

FROM ubuntu:latest
RUN apt-get update && \
    apt-get install -y stress

The command to create an image is:

$ docker build -t u-stress:latest .

Limit the upper bound on memory usage

Before getting into the tedious setup details, let's work through a simple use case: limit the maximum memory the container can use to 300m.

This is done with the -m (--memory=) option:

$ docker run -it -m 300m --memory-swap -1 --name con1 u-stress /bin/bash

The following stress command creates one worker process that allocates memory via the malloc function:

# stress --vm 1 --vm-bytes 500m

Check the actual usage with the docker stats command:
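For example, from another shell on the host (con1 is the container name assigned in the docker run command above; docker stats streams per-container CPU and memory usage):

$ docker stats con1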

The docker run command above caps the container's memory at 300m with the -m option. At the same time, --memory-swap is set to -1, which means the container's memory is limited but the swap space it may use is not (the container can use as much swap as the host has available).

Let's look at the stress process's actual memory usage with the top command:

In the screenshot above, we first look up the processes related to the stress command with the pgrep command. The one with the larger process number is the worker that consumes memory, so we inspect its memory figures. VIRT is the size of the process's virtual memory, so it is about 500m. RES is the physical memory actually allocated, and we see it fluctuating around 300m. So we have indeed succeeded in limiting the physical memory the container can use.
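The commands behind that screenshot are roughly the following (a sketch; the worker's PID will differ on your system, so substitute the one pgrep reports):

# pgrep stress
# top -p <worker-pid>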

Limit the available swap size

One point to emphasize: --memory-swap must be used together with --memory.

Normally, the value of --memory-swap covers both the container's available memory and its available swap. So --memory="300m" --memory-swap="1g" means:

The container can use 300m of physical memory and 700m (1g - 300m) of swap. In other words, --memory-swap is the sum of the physical memory and the swap that the container may use.

Setting --memory-swap to 0 is the same as not setting it at all. In that case, if --memory is set, the container's total usage (memory plus swap) may be twice the --memory value, i.e. it gets an amount of swap equal to --memory.
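Put together, the variants look like this (a sketch using the u-stress image from above; the effective limits are noted in parentheses):

$ docker run -it -m 300m --memory-swap=1g u-stress /bin/bash    (300m memory + 700m swap)
$ docker run -it -m 300m u-stress /bin/bash                     (300m memory + 300m swap, since unset means 2x)
$ docker run -it -m 300m --memory-swap=-1 u-stress /bin/bash    (300m memory, unlimited swap)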

If --memory-swap is set to the same value as --memory, the container cannot use any swap. The following demo shows what happens when a process requests a large amount of memory and no swap is available:

$ docker run -it --rm -m 300m --memory-swap=300M u-stress /bin/bash
# stress --vm 1 --vm-bytes 500m

In this demo the container's physical memory is limited to 300m, but the process tries to allocate 500m. With no swap available, the process is OOM-killed outright. With enough swap, the program can at least run.

We can forcibly prevent the OOM kill from happening with the --oom-kill-disable option, but in my view the OOM kill is healthy behavior, so why suppress it?
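If you do want to suppress it, the flag is passed alongside the memory limit (a sketch; Docker's documentation advises using --oom-kill-disable only when -m is also set, otherwise the host itself can be starved of memory):

$ docker run -it -m 300m --memory-swap=300M --oom-kill-disable u-stress /bin/bash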

Besides limiting the size of the available swap, you can also set how eagerly the container uses swap, just like the host's swappiness. By default the container inherits the host's swappiness; to set an explicit swappiness value for the container, use the --memory-swappiness option.
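For example, to make the container avoid swapping as much as possible (a sketch; --memory-swappiness takes a value from 0 to 100, where lower means less eager to swap):

$ docker run -it -m 300m --memory-swappiness=0 u-stress /bin/bash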

Summary

By limiting the physical memory available to a container, you prevent a misbehaving service inside it from consuming large amounts of host memory (restarting the container is usually the better strategy in that case), and so reduce the risk of exhausting the host's memory.
