

How to use cgroup in Docker




In this article, the editor shares how to use cgroup in Docker. Most people do not know much about it, so this article is shared for your reference; I hope you will learn a lot after reading it. Let's get into it!

What is cgroup?

Linux CGroup (Linux Control Group) is a feature of the Linux kernel: a mechanism for managing processes in groups under Linux. It was originally started by Google engineers Paul Menage and Rohit Seth in 2006 and was initially named "process containers". After 2007, as the concept of containers emerged, it was renamed cgroup to avoid confusion and was merged into kernel version 2.6.24.

From the user's point of view, cgroup technology organizes all the processes in the system into independent trees. Each tree contains all the processes in the system, each node of a tree is a process group, and each tree is associated with one or more subsystems. The trees are used to group processes, while the subsystems are used to operate on these groups.

The composition of cgroup

Cgroup mainly consists of the following two parts

Subsystem: a subsystem is a kernel module that, after being associated with a cgroup tree, performs specific operations on the nodes of that tree. A subsystem is often called a "resource controller", because it is mainly used to schedule or limit the resources of each process group. This is not entirely accurate, though, because sometimes we group processes only to monitor them and observe their state, as the perf_event subsystem does.

Hierarchy: a hierarchy can be understood as a cgroup tree. Each node of the tree is a process group, and each tree is associated with one or more subsystems. A tree contains all the processes in the Linux system, but each process can belong to only one node (process group) of that tree. There can be many cgroup trees in the system, and each tree is associated with different subsystems. A process can belong to multiple trees; that is, a process can belong to multiple process groups, each associated with different subsystems.

You can see which subsystems the current system supports, and which cgroup trees they are associated with, by reading the /proc/cgroups file.
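
For instance (a sketch; the exact subsystems and values differ from system to system):

$ cat /proc/cgroups
#subsys_name    hierarchy    num_cgroups    enabled

Besides the three columns described below, the file also contains an enabled column that shows whether each subsystem is enabled.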

The first column is the subsystem name.

The second column is the ID of the cgroup tree the subsystem is associated with. If multiple subsystems are associated with the same cgroup tree, this field is the same for all of them; for example, cpu and cpuacct typically share the same tree.

The third column is the number of process groups in the cgroup tree associated with the subsystem, that is, the number of nodes on the tree.

Features provided by cgroup

It provides the following functions

Resource limitation: resource usage restrictions

Prioritization: priority control

Accounting: auditing or statistics

Control: suspending and resuming processes

Generally speaking, we can do the following things with cgroup

Isolate a group of processes (for example, all the processes of MySQL) and limit the resources they consume, such as binding them to specific CPU cores

Allocate memory for this group of processes

Limit the bandwidth and storage space available to this group of processes

Restrict access to certain devices

cgroup is exposed as a file system in Linux. Run the following command to mount it.
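
On most modern distributions, systemd already mounts the cgroup hierarchies under /sys/fs/cgroup, so this is only needed when they are not mounted yet; a minimal sketch for cgroup v1 looks like this:

$ mount -t tmpfs cgroup_root /sys/fs/cgroup
$ mkdir -p /sys/fs/cgroup/cpu
$ mount -t cgroup -o cpu cpu /sys/fs/cgroup/cpu
$ mkdir -p /sys/fs/cgroup/memory
$ mount -t cgroup -o memory memory /sys/fs/cgroup/memory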

After the mount succeeds, you can see that there is a cgroup directory under /sys/fs, which contains many subsystems, such as cpu, cpuset, blkio and so on.

Then create a subdirectory named test under the /sys/fs/cgroup/cpu directory, and you will find that the new directory already contains a lot of files.
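
For example (a sketch; the exact file list depends on the kernel version and on how the hierarchy is mounted):

$ mkdir /sys/fs/cgroup/cpu/test
$ ls /sys/fs/cgroup/cpu/test
cgroup.clone_children  cgroup.procs  cpu.cfs_period_us  cpu.cfs_quota_us
cpu.shares  cpu.stat  notify_on_release  tasks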

Restrict CPU in cgroup

In cgroup, the subsystems related to CPU are cpuset, cpuacct and cpu.

cpuset is mainly used to set CPU affinity: the processes in the cgroup can only run on the specified CPUs, or are kept off the specified CPUs. cpuset can also set memory affinity. cpuacct contains statistics about the CPU usage of the current cgroup. Here we will only discuss cpu.

We then work in the child cgroup created above under /sys/fs/cgroup/cpu; among its files, two are of interest here.

cpu.cfs_period_us configures the length of a scheduling period, and cpu.cfs_quota_us configures how much CPU time the current cgroup may use within each period; together, the two files set an upper limit on CPU usage. Both files use microseconds (us) as their unit. The value of cpu.cfs_period_us ranges from 1 millisecond (ms) to 1 second (s), and the value of cpu.cfs_quota_us must be greater than 1 ms. The effective CPU cap is cfs_quota_us / cfs_period_us.

Here's an example of how to use cpu restrictions

Suppose we write an endless loop.
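
A minimal stand-in is a shell busy loop; any CPU-bound program would work just as well:

$ while : ; do : ; done &
$ echo $!    # the PID of the busy loop; we will need it below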

While it is running, top shows that its CPU usage has reached 100%.

We execute the following command to set up cfs_quota_us

$ echo 20000 > /sys/fs/cgroup/cpu/test/cpu.cfs_quota_us

With the default cpu.cfs_period_us of 100000 (100 ms), this limits the cgroup to 20% of one CPU. Then add the PID of the process to the cgroup.
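
A sketch, assuming the busy loop started above with its PID still held in $!; writing the PID to cgroup.procs moves it into the test cgroup:

$ echo $! > /sys/fs/cgroup/cpu/test/cgroup.procs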

If you run top again, you can see that the CPU utilization has dropped to about 20%.

Limit memory in cgroup

If the code has a bug such as a memory leak, it can exhaust system memory and cause other programs to fail because they cannot allocate enough memory. If the system has a swap partition configured, the system will use a lot of swap space and run very slowly.

cgroup's control over process memory mainly covers the following:

Limit the total amount of memory used by all processes in cgroup

Limit the total amount of physical memory plus swap used by all processes in the cgroup

Limit the amount of kernel memory and other kernel resources that all processes in the cgroup can use (CONFIG_MEMCG_KMEM)

Limiting kernel memory here means limiting the kernel resources currently used by the cgroup, including the kernel space occupied by its processes, the memory occupied by sockets, and so on. When memory is tight, this can prevent the current cgroup from continuing to create processes and request more kernel resources from the kernel.
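
In cgroup v1 this is exposed through the memory.kmem.* files when CONFIG_MEMCG_KMEM is enabled (recent kernels have deprecated setting this limit); a sketch, assuming the test cgroup under the memory hierarchy created below:

$ echo 50M > /sys/fs/cgroup/memory/test/memory.kmem.limit_in_bytes    # cap kernel memory at 50 MB
$ cat /sys/fs/cgroup/memory/test/memory.kmem.usage_in_bytes           # current kernel memory usage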

Here is an example to show you how cgroup does memory control.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK_SIZE (1024 * 1024)   /* allocate 1 MB per iteration */

int main(void)
{
    int size = 0;
    char *p = NULL;

    while (1) {
        if ((p = (char *)malloc(CHUNK_SIZE)) == NULL) {
            break;                        /* allocation failed */
        }
        memset(p, 0, CHUNK_SIZE);         /* touch the memory so it is actually charged */
        printf("[%d] - [%d] MB is allocated\n", getpid(), ++size);
        sleep(1);
    }
    return 0;
}

First, create a subdirectory under /sys/fs/cgroup/memory to create a child cgroup. For example, here we create a test directory.

$ mkdir /sys/fs/cgroup/memory/test

The test directory contains the following files

An overview of the role of each file is given:

cgroup.event_control: interface for eventfd
memory.usage_in_bytes: shows the amount of memory currently used
memory.limit_in_bytes: sets/shows the current memory limit
memory.failcnt: shows how many times memory usage has hit the limit
memory.max_usage_in_bytes: historical maximum memory usage
memory.soft_limit_in_bytes: sets/shows the current memory soft limit
memory.stat: shows the memory usage statistics of the current cgroup
memory.use_hierarchy: sets/shows whether the memory usage of child cgroups is counted toward the current cgroup
memory.force_empty: triggers the system to immediately reclaim as much reclaimable memory as possible from the current cgroup
memory.pressure_level: sets memory-pressure notification events, used together with cgroup.event_control
memory.swappiness: sets/shows the current swappiness
memory.move_charge_at_immigrate: sets whether the memory charged to a process moves with it when the process moves to another cgroup
memory.oom_control: sets/shows OOM-control related configuration
memory.numa_stat: shows NUMA-related memory information

Then set the limit by writing to the memory.limit_in_bytes file. Here a limit of 5 MB is set.

Add the sample process above to the cgroup.

To avoid interference from swap space, set swappiness to 0 to forbid the current cgroup from using swap.
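
Putting the three steps together, a minimal sketch (assuming the sample program is running and its PID is stored in $PID; the memory controller accepts K/M/G suffixes):

$ echo 5M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes    # limit the cgroup to 5 MB
$ echo $PID > /sys/fs/cgroup/memory/test/cgroup.procs           # move the sample process into the cgroup
$ echo 0 > /sys/fs/cgroup/memory/test/memory.swappiness         # forbid the cgroup from using swap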

When the physical memory limit is reached, the default behavior is to kill a process in the cgroup that is requesting memory. How can this behavior be controlled? By configuring memory.oom_control. This file contains a flag that controls whether the OOM killer is enabled for the current cgroup. If you write 0 to this file, the OOM killer is enabled: when the kernel cannot allocate enough memory to a process, it kills that process directly. If you write 1 to this file, the OOM killer is disabled: when the kernel cannot allocate enough memory to a process, it pauses the process until memory becomes available and the process can continue running. memory.oom_control also contains a read-only under_oom field that indicates whether the cgroup is currently in the OOM state, that is, whether any process has been paused, and a read-only oom_kill field that indicates whether any processes have been killed.
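
A short sketch of inspecting and changing this behavior for the test cgroup used above:

$ cat /sys/fs/cgroup/memory/test/memory.oom_control         # shows oom_kill_disable and under_oom (and oom_kill on newer kernels)
$ echo 1 > /sys/fs/cgroup/memory/test/memory.oom_control    # disable the OOM killer; over-limit processes are paused instead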

Limit the number of processes in cgroup

There is a subsystem in cgroup called pids, which limits the total number of tasks that can be created in a cgroup and all of its descendant cgroups. The tasks here refer to processes created by the fork and clone calls, and since clone can also create threads, tasks include threads as well.

The cgroup tree has already been mounted above, so create a child cgroup directly here and name it test, as shown below.

Let's look at the files in the test directory.
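
A sketch of the two steps (the exact file list may vary slightly with the kernel version):

$ mkdir /sys/fs/cgroup/pids/test
$ ls /sys/fs/cgroup/pids/test
cgroup.clone_children  cgroup.procs  notify_on_release  pids.current  pids.events  pids.max  tasks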

pids.current represents the total number of tasks currently in the current cgroup and all of its descendant cgroups.

pids.max is the maximum number of tasks that the current cgroup and all of its descendant cgroups are allowed to create.

Let's do an experiment and set pids.max to 1.
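
For example, assuming the test cgroup created above:

$ echo 1 > /sys/fs/cgroup/pids/test/pids.max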

Then add the current bash process to the cgroup
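
In bash, $$ expands to the PID of the shell itself, so:

$ echo $$ > /sys/fs/cgroup/pids/test/cgroup.procs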

Now, running any command in this window fails to create a process, because pids.current is already equal to pids.max in the current cgroup.

pids.current and pids.max in a cgroup cover all the processes of that cgroup and of all its descendant cgroups, so the effective limit of a descendant cgroup can never exceed that of its ancestors. What happens if a descendant's pids.max is set higher anyway? Let's try it: set pids.max of the test cgroup to 3.
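
For example:

$ echo 3 > /sys/fs/cgroup/pids/test/pids.max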

Current number of processes is 2

Open a new shell window, create a child cgroup under test (a grandchild of the pids hierarchy root), and set its pids.max to 5.

Write the bash process of the current shell into its cgroup.procs, as in the sketch below.
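
A sketch of these steps; the name subtest for the new child cgroup is just an example:

$ mkdir /sys/fs/cgroup/pids/test/subtest
$ echo 5 > /sys/fs/cgroup/pids/test/subtest/pids.max
$ echo $$ > /sys/fs/cgroup/pids/test/subtest/cgroup.procs    # $$ is the PID of the new shell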

If you go back to the original shell window and execute any command, you can see that it fails.

As you can see, the number of processes in a child cgroup is restricted not only by its own pids.max, but also by the pids.max of its ancestor cgroups.

That is all of the content of the article "How to use cgroup in Docker". Thank you for reading! I hope the content shared here helps you; if you want to learn more, you are welcome to follow the industry information channel!

