Cgroup limits on CPU, IO, memory and scheduling in Linux systems


This article explains how cgroups can be used to limit CPU, IO and memory usage and how they interact with scheduling on Linux systems. The content is straightforward and easy to follow; work through it to learn how these limits are configured and verified.

1. CPU

1. Core 0 usage

Idea: core 0 must be protected, otherwise various system commands and the watchdog feed may run into problems.

The idea is to bind one instance, such as SD, to core 0 and limit its CPU usage through the cgroup facility, for example to at most 80% of core 0.

Problem: cgroup has been trimmed out of the system and needs to be installed; we also need to analyze and test whether cgroup affects the scheduling order.
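As a rough sketch of the binding half of this idea (the $SD_PID variable is a placeholder for the instance's pid), pinning the instance to core 0 could look like the following; the 80% cap itself is applied through the cgroup cpu controller described later in this article:

taskset -cp 0 $SD_PID    # pin the SD instance to core 0 only
taskset -p $SD_PID       # confirm the new affinity mask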

2. Other cores

Idea: other instances are bound to all cores except core 0 and the dpdk cores; that is, they may migrate among those cores and are scheduled by the operating system.

Problem: according to previous experience, the operating system is not quick at migrating tasks between cores, and at high load the CPU can easily spike and cause call loss.
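A minimal sketch of such a binding, assuming an 8-core machine where core 0 is protected and core 1 is a hypothetical dpdk core:

taskset -cp 2-7 $PID     # allow the instance to run on cores 2-7 only
taskset -p $PID          # verify the affinity mask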

V4 usage:

service cgconfig start # start the cgroup configuration service

chkconfig cgconfig on # enable it at startup

V5 usage:

systemctl start cgconfig.service

systemctl enable cgconfig.service

systemctl is-enabled cgconfig.service

To install cgroup on a V5 system, install the following packages (libcgroup itself is already present on the V5 system):

libcgroup-0.41-13.el7.x86_64.rpm

libcgroup-devel-0.41-13.el7.x86_64.rpm

libcgroup-tools-0.41-13.el7.x86_64.rpm
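Assuming the rpm files are in the current directory, a hedged one-step install (rpm -Uvh installs or upgrades the listed packages) could look like:

rpm -Uvh libcgroup-0.41-13.el7.x86_64.rpm \
         libcgroup-devel-0.41-13.el7.x86_64.rpm \
         libcgroup-tools-0.41-13.el7.x86_64.rpm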

https://segmentfault.com/a/1190000008323952

cgroup.procs: a file that lists all processes under the current cgroup.

CFS (Completely Fair Scheduling) policy

cpu.cfs_period_us: configures the length of one scheduling period, in us. Valid range: 1000 to 1000000 (1 ms to 1 s).

cpu.cfs_quota_us: configures how much CPU time, in us, the current cgroup may use within one period. The minimum value is 1000 (1 ms), i.e. t > 1000.

Total CPU share of all tasks in tasks = cpu.cfs_quota_us / cpu.cfs_period_us

Only when both files hold valid values, which means the CFS limit is enabled, can a process using the CFS scheduling policy be written into tasks; otherwise the write fails with: echo: write error: Invalid argument.
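For example, a minimal sketch of capping a group at 50% of one core (the group name cfs_demo is hypothetical):

mkdir -p /sys/fs/cgroup/cpu/cfs_demo
echo 100000 > /sys/fs/cgroup/cpu/cfs_demo/cpu.cfs_period_us   # 100 ms period
echo 50000 > /sys/fs/cgroup/cpu/cfs_demo/cpu.cfs_quota_us     # 50 ms quota, 50000/100000 = 50%
echo $$ > /sys/fs/cgroup/cpu/cfs_demo/tasks                   # move the current shell into the group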

RT (real-time) scheduling policy

cpu.rt_period_us: the accounting period for real-time CPU usage, in us. The minimum value is 1 (t > 1).

cpu.rt_runtime_us: the time within one period during which tasks may use a single CPU core, in us. If the system has multiple cores, multiple cores may be used. The minimum value is 0.

Total CPU share of all tasks in tasks = cpu.rt_runtime_us * number of CPU cores / cpu.rt_period_us

Only when both files hold valid values, which means the RT limit is enabled, can a process using the RT scheduling policy be written into tasks.
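A minimal sketch, assuming a hypothetical group rt_demo, a kernel built with CONFIG_RT_GROUP_SCHED=y (see later in this article), and an 80% RT budget per core:

mkdir -p /sys/fs/cgroup/cpu/rt_demo
echo 1000000 > /sys/fs/cgroup/cpu/rt_demo/cpu.rt_period_us    # 1 s accounting period
echo 800000 > /sys/fs/cgroup/cpu/rt_demo/cpu.rt_runtime_us    # 0.8 s of RT time per period per core
echo $RT_PID > /sys/fs/cgroup/cpu/rt_demo/tasks               # add a SCHED_FIFO/SCHED_RR task (placeholder pid)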

If both the RT and the CFS limits are configured and valid, A (the set of RT-type threads) and B (the set of CFS-type threads) are limited separately, each according to the attributes of the corresponding threads in tasks.

Suppose both the RT limit and the CFS limit set the CPU share to 80%; then A and B may each use 80%, or 160% in total. If the threads in tasks all share one core, then A + B = 100%.

As for the specific values of A and B: since RT scheduling preempts the CPU with priority, A usually gets its 80% of the CPU.

Both scheduling policies take effect by configuring cgconfig.conf.
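A hedged sketch of what such a cgconfig.conf entry might look like (the group name test and the values are only illustrative):

group test {
    cpu {
        cpu.cfs_period_us = 100000;
        cpu.cfs_quota_us = 80000;
        cpu.rt_period_us = 1000000;
        cpu.rt_runtime_us = 800000;
    }
}

After editing cgconfig.conf, restart the cgconfig service (or the board, as noted below) for the change to take effect.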

cpu.shares

cpu.stat

notify_on_release

release_agent

tasks

-rw-r--r-- 1 root root 0 Aug 12 17:09 cpu.cfs_period_us

-rw-r--r-- 1 root root 0 Aug 12 17:09 cpu.cfs_quota_us

-rw-r--r-- 1 root root 0 Aug 12 17:09 cpu.rt_period_us

-rw-r--r-- 1 root root 0 Aug 12 17:09 cpu.rt_runtime_us

V4 environment:

service sbcd restart: if the /cgroup/cpu/ path does not exist, or if service cgconfig restart fails, the board restarts.

V5 environment:

When the system

Remaining questions:

1. How to make cgroup start at boot on the V5 system: it is configured to start at boot but does not start normally.

Answer: resolved. After modifying cgconfig.conf, the board must be restarted once for the change to take effect.

2. Granularity selection for the real-time scheduling parameters cpu.rt_period_us and cpu.rt_runtime_us.

If the granularity is too large, CPU usage intuitively feels too high and system commands respond slowly.

Answer: write a test program (run it under both scheduling methods) and

a. Observe the time needed to perform the same work (10 thousand, 100 thousand, 1 million loop iterations) at different granularities

b. Observe how many times the program is scheduled

4. The scheduling policy is being changed.

Background: on boards 5 (mspe) and 9 (mpe) it was found that the scheduling policy of the threads under SD is being modified.

Q1: what determines the scheduling policy of the threads in sbc?

Reason: sd inherits from the shell process, so its tasks are written to /sys/fs/cgroup/cpu/user.slice by default.

With rt=0 in the user.slice group, setting the thread scheduling attribute in SD fails (the sched_setscheduler API returns -1).

Solutions:

1. Trim or customize user.slice and system.slice; testing shows that user.slice.rt + system.slice.rt < 90% is feasible. Further verification: user.slice.rt + system.slice.rt + test.rt < 100.

2. Implement it in code: first write the task into the cgroup file, then set the scheduling attributes. Experiments show this works.

5. Configuring cgconfig.conf and /cgconfig.d/test.conf solves the problem of the scheduling policy being changed.

Q1: the systemctl restart cgconfig.service command fails.

Wrapping up the test questions:

1. Granularity selection for the real-time scheduling parameters cpu.rt_period_us and cpu.rt_runtime_us. If the granularity is too large, CPU usage intuitively feels too high and system commands respond slowly.

Solution: write a test program (CFS scheduling; system commands also use CFS scheduling).

1. Observe how many times the program is scheduled. sbc uses 80% of the CPU, linux_endless uses 20%.

Small granularity: cpu.rt_period_us=100000, cpu.rt_runtime_us=20000
Large granularity: cpu.rt_period_us=1000000, cpu.rt_runtime_us=200000

Over the same period of time:

Number of sbc calls with large granularity > number of sbc calls with small granularity
Number of linux_endless calls with large granularity < number of linux_endless calls with small granularity

Therefore, choose the small-granularity cpu parameters: operating system scheduling must be guaranteed first.

# Total cores = number of physical CPUs * cores per physical CPU
# Total logical CPUs = total cores * hyper-threads per core

# Number of physical CPUs
cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l

# Number of cores in each physical CPU
cat /proc/cpuinfo | grep "cpu cores" | uniq

# Number of logical CPUs
cat /proc/cpuinfo | grep "processor" | wc -l

# Check whether hyper-threading is enabled
cat /proc/cpuinfo | grep "siblings" | uniq
cat /proc/cpuinfo | grep "cpu cores" | uniq

If "siblings" and "cpu cores" are equal, hyper-threading is not supported or not enabled; if "siblings" is twice "cpu cores", hyper-threading is supported and enabled.

# Check whether the CPU runs in 32-bit or 64-bit mode
getconf LONG_BIT
Result: 64
Note: a result of 32 means the CPU is running in 32-bit mode, but it does not mean the CPU cannot support 64-bit.

In general, the number of logical CPUs = number of physical CPUs * cores per CPU. If they are not equal, the server's CPU supports hyper-threading.

1. Physical CPUs: the number of CPUs actually plugged into sockets on the server. Count the distinct "physical id" values: cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l

2. CPU cores: the number of processing units on one CPU. For example, the i5 760 is a dual-core, four-thread CPU, while the i5 2250 is a quad-core, four-thread CPU. Check with: cat /proc/cpuinfo | grep "cpu cores" | uniq

3. Logical CPUs: /proc/cpuinfo stores the CPU hardware information; it lists the specifications of processor 0 through n, where n+1 is the number of logical CPUs. A CPU can have multiple cores, and Intel Hyper-Threading (HT) can logically double the number of CPU cores, so: cat /proc/cpuinfo | grep "processor" | wc -l

Number of logical CPUs = number of physical CPUs * "cpu cores" * 2 (if HT is supported and enabled)

Note: the CPUs shown by top on Linux are also logical CPUs.

Check the current usage of each logical cpu: run top, then press 1
top -H -p xxxx # per-thread CPU usage of process xxxx
perf top -C x # per-function CPU usage on CPU x

Bind a process to a CPU core: taskset -cp cpu_id pid
Check which cpu core a process is on: taskset -p pid
Check the scheduling policy of processes: ps -eo class,cmd
TS SCHED_OTHER
FF SCHED_FIFO
RR SCHED_RR
B SCHED_BATCH
ISO SCHED_ISO

systemctl list-unit-files | grep enabled # on the V5 system, service commands are run with systemctl

Check the mounted cgroup subsystems: lssubsys -am
Manually mount the cpu controller: mount -t cgroup -o cpu,cpuacct cpu /cgroup/cpu

[root@localhost cgroup]# systemctl status cgconfig.service
cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-08-09 10:22:51 UTC; 19min ago
Process: 4076 ExecStart=/usr/sbin/cgconfigparser -l /etc/cgconfig.conf -L /etc/cgconfig.d -s 1664 (code=exited, status=101)
Main PID: 4076 (code=exited, status=101)
Aug 09 10:22:51 localhost.localdomain systemd[1]: Starting Control Group configuration service...
Aug 09 10:22:51 localhost.localdomain cgconfigparser[4076]: /usr/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Aug 09 10:22:51 localhost.localdomain systemd[1]: cgconfig.service: main process exited, code=exited, status=101/n/a
Aug 09 10:22:51 localhost.localdomain cgconfigparser[4076]: Error: cannot mount cpu,cpuacct to /cgroup/cpu: Device or resource busy
Aug 09 10:22:51 localhost.localdomain systemd[1]: Failed to start Control Group configuration service.
Aug 09 10:22:51 localhost.localdomain systemd[1]: Unit cgconfig.service entered failed state.
Aug 09 10:22:51 localhost.localdomain systemd[1]: cgconfig.service failed.
[root@localhost cgroup]# cgclear

ps -ef | grep sleep # find the pid of "sleep 1000"; assume it is 1234
chrt -p 1234 # check the scheduling policy of pid 1234; output:
pid 1234's current scheduling policy: SCHED_OTHER
pid 1234's current scheduling priority: 0
chrt -p -f 10 1234 # change the policy to SCHED_FIFO with priority 10
chrt -p 1234 # check the policy again
pid 1234's current scheduling policy: SCHED_FIFO
pid 1234's current scheduling priority: 10
chrt -p -r 10 4572 # change the policy to SCHED_RR with priority 10
chrt -p 4572
pid 4572's current scheduling policy: SCHED_RR
pid 4572's current scheduling priority: 10
chrt -p -o 0 1234 # change back to SCHED_OTHER
Run chrt without arguments to list the available options.

Get the system-wide real-time scheduling configuration:
sysctl -n kernel.sched_rt_period_us # the accounting period for real-time scheduling: 1 second
sysctl -n kernel.sched_rt_runtime_us # the CPU time real-time processes may actually use within 1 second: 0.95 seconds

Setting the CPU time available to real-time processes: with the defaults above, real-time processes may use 95% of the CPU time. If that seems too much or too little, it can be adjusted, for example:

sysctl -w kernel.sched_rt_runtime_us=900000 # real-time processes may use only 0.9 s of CPU time per second
kernel.sched_rt_runtime_us = 900000
sysctl -n kernel.sched_rt_runtime_us
900000

Settings in cgroup: the settings above apply to the whole system, but cgroups can also be used to control the CPU resources of a group of processes. To control sched_rt_period_us and sched_rt_runtime_us inside a cgroup, the kernel must be built with CONFIG_RT_GROUP_SCHED=y.

cat /boot/config-`uname -r` # check whether CONFIG_RT_GROUP_SCHED is enabled

Other modules

1. blkio module, related parameters and their meanings:

1.1. Weight ratio

blkio.weight: sets the default weight ratio used by the group; valid range 100-1000. This value is overridden by blkio.weight_device.

echo 500 > blkio.weight

blkio.weight_device

Sets the weight ratio for a specific device; valid range 100-1000. This value overrides the blkio.weight setting. The value format is major:minor weight, i.e. major device number, minor device number, weight. For example, set the access weight of disk sda to 500.

PS:

# ll /dev/sda

brw-rw---- 1 root disk 8, 0 Aug 15 15:42 /dev/sda

The major device number is 8 and the minor device number is 0.

echo 8:0 500 > blkio.weight_device # in testing, writing a non-zero value for this device failed

echo 3 > /proc/sys/vm/drop_caches # clearing the file system cache is important; otherwise the observed io read rate may be 0

Cgexec-g "blkio:test1" dd bs=1M count=4096 if=file1 of=/dev/null

Cgexec-g "blkio:test2" dd bs=1M count=4096 if=file2 of=/dev/null / / Note: the reading device cannot be the same, otherwise the rate difference cannot be seen.

1.2. I/O usage upper limits

blkio.throttle.read_bps_device / blkio.throttle.write_bps_device

Specify the upper limit, in bytes per second, at which a device in the cgroup may read / write data. The format is major:minor bytes_per_second.

blkio.throttle.read_iops_device / blkio.throttle.write_iops_device

Specify the maximum number of read / write requests per second that a device in the cgroup may perform. The format is major:minor operations_per_second.

Test:

1. echo '8:0 1000000' > blkio.throttle.read_bps_device # limit the group's read rate from this device to 1 MB/s

2. echo 3 > /proc/sys/vm/drop_caches # clear the file system cache, which is very important

3. dd if=/dev/sda of=/dev/null &

4. iotop -p cmd2_pid # view the disk read rate of the process

5. Add cmd2_pid to the group's tasks file and watch the rate change.

# The C test code only needs to call the read system call on the disk device.

1.3. Statistics parameters

blkio.reset_stats: write an integer to this file to reset the statistics counters in the cgroup.

blkio.time: the time the cgroup spent accessing specific devices, in milliseconds (ms).

blkio.sectors: the number of sectors the cgroup read from and wrote to specific devices.

blkio.io_serviced: the number of read and write operations the cgroup performed on specific devices. Four fields: major, minor, operation (read, write, sync, or async) and number (the number of operations).

blkio.io_service_bytes: the number of bytes the cgroup read from and wrote to specific devices. Four fields: major, minor, operation (read, write, sync, or async) and bytes (the number of bytes transferred).

blkio.io_service_time: the time between the cgroup sending a request to a specific device and the request completing. Four fields: major, minor, operation and time, where time is in nanoseconds (ns).

blkio.io_wait_time: the time the cgroup's I/O operations on specific devices spent waiting in the scheduler queues. Four fields: major, minor, operation and time, where time is in nanoseconds (ns); this statistic is meaningful for SSD drives as well.

blkio.io_merged: the number of times the cgroup's BIO requests were merged into I/O operation requests. Two fields: number and operation.

blkio.io_queued: the number of the cgroup's requests queued for I/O operations. Two fields: number and operation (read, write, sync, or async).

blkio.throttle.io_serviced: the number of read and write operations the cgroup performed on specific devices. The difference between blkio.io_serviced and blkio.throttle.io_serviced is that the former is not updated when CFQ is scheduling the request queue. Four fields: major, minor, operation (read, write, sync, or async) and number (the number of operations).

blkio.throttle.io_service_bytes: the number of bytes the cgroup read from and wrote to specific devices. The difference between blkio.io_service_bytes and blkio.throttle.io_service_bytes is that the former is not updated when CFQ is scheduling the request queue. Four fields: major, minor, operation (read, write, sync, or async) and bytes (the number of bytes transferred).
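As a small usage sketch (the group path test1 is hypothetical), the counters can be reset and then inspected like this:

cd /sys/fs/cgroup/blkio/test1
echo 1 > blkio.reset_stats  # reset the statistics counters for this group
cat blkio.io_service_bytes  # per-device byte counts, e.g. "8:0 Read 4194304"
cat blkio.io_serviced       # per-device operation counts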

2. Memory module, related parameters and their meanings:

2.1. Summary of parameters:

cgroup.event_control # interface for eventfd

memory.usage_in_bytes # shows the memory currently used

memory.limit_in_bytes # sets/displays the current memory limit

memory.failcnt # shows how many times memory usage has hit the limit

memory.max_usage_in_bytes # historical maximum memory usage

memory.soft_limit_in_bytes # sets/displays the current memory soft limit

memory.stat # shows the cgroup's current memory usage statistics

memory.use_hierarchy # sets/displays whether child cgroup memory usage is counted into the current cgroup

memory.force_empty # triggers the system to immediately reclaim as much memory as possible from the current cgroup

memory.pressure_level # sets memory-pressure notification events, used together with cgroup.event_control

memory.swappiness # sets/displays the current swappiness

memory.move_charge_at_immigrate # sets whether a process's memory charges move with it when it moves to another cgroup

memory.oom_control # sets/displays the oom control configuration

memory.numa_stat # displays NUMA-related memory statistics
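A minimal sketch of using these files, assuming a hypothetical group mem_demo and a process whose pid is stored in $PID:

mkdir -p /sys/fs/cgroup/memory/mem_demo
echo 512M > /sys/fs/cgroup/memory/mem_demo/memory.limit_in_bytes   # hard limit of 512 MB
echo $PID > /sys/fs/cgroup/memory/mem_demo/tasks                   # move the process into the group
cat /sys/fs/cgroup/memory/mem_demo/memory.usage_in_bytes           # current usage
cat /sys/fs/cgroup/memory/mem_demo/memory.failcnt                  # how many times the limit was hit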

2.2. Attribute limits

memory.force_empty: writing 0 to the memory.force_empty file (echo 0 > memory.force_empty) immediately triggers the system to reclaim as much memory as possible from the cgroup. This feature is mainly used before removing a cgroup (one that no longer contains processes): running this command reclaims as much of the cgroup's memory as possible, so that the data it still accounts for can be migrated to the parent cgroup or the root cgroup more quickly.

memory.swappiness: the default value of this file is the same as the global swappiness (/proc/sys/vm/swappiness). Modifying this file only affects the current cgroup; its function is the same as the global swappiness.

One difference from the global swappiness is that if this file is set to 0, the current cgroup will not use swap space even if the system has swap configured.

memory.use_hierarchy: when the file contains 0, no inheritance is used, i.e. there is no accounting relationship between parent and child cgroups; when it contains 1, the memory occupied by a child cgroup is also counted in all of its ancestor cgroups.

If the file contains 1, then when a cgroup runs out of memory, the system is triggered to reclaim memory from it and from all of its descendant cgroups.

Note: when the cgroup already has child cgroups, or its parent cgroup has set this file to 1, the file in the current cgroup cannot be modified.

memory.soft_limit_in_bytes: why is a soft limit needed when there is already a hard limit (memory.limit_in_bytes)? The hard limit is a rigid ceiling that must never be exceeded, while the soft limit can be exceeded; so what is it for? Look at its behaviour first: when system memory is plentiful, the soft limit does nothing; when system memory is tight, the system tries to push the cgroup's memory back below the soft limit value (the kernel tries, but does not guarantee it 100%).

From this behaviour it is clear that the soft limit mainly matters when system memory is tight. Without soft limits, all cgroups compete for memory, and a cgroup that already uses a lot of memory does not yield to one that uses little, so some cgroups can starve. With soft limits configured, when system memory is tight the system makes the cgroups that exceed their soft limit release the memory above that limit (possibly more), giving other cgroups a better chance to allocate memory. It is essentially a compromise mechanism for memory pressure: set a soft limit for less important processes so that they give way to more important ones when the system runs out of memory.

Note: when system memory is tight and a cgroup has reached its soft limit, the system triggers memory reclaim in that cgroup on every new allocation request, in order to keep its usage under the soft limit. Once this state is reached, reclaim is triggered frequently and the cgroup's performance is seriously affected.
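For instance, a hedged sketch of giving a less important group a lower soft target (the values are only illustrative):

echo 1G > memory.limit_in_bytes         # hard ceiling, never exceeded
echo 512M > memory.soft_limit_in_bytes  # target the kernel reclaims toward when memory is tight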

memory.oom_control: controls the OOM behaviour after the memory limit is exceeded.

Check the oom killer settings. The file cannot be edited with vi, only written with echo, and the memory.oom_control under the root cgroup path cannot be set.

[root@localhost test]# echo 1 > memory.oom_control

[root@localhost test]# cat memory.oom_control

oom_kill_disable 1

under_oom 0 # read-only field

Turning off the oom killer:

Set oom_kill_disable to 1 (0 means the killer is on). When oom is triggered with the killer turned off, the corresponding thread still cannot be scheduled and ends up in the D state.

cgroup.event_control: implements OOM notification; when an OOM occurs, the related events can be received.

memory.move_charge_at_immigrate: when a process moves from one cgroup to another, by default the memory it already occupies is still charged to the original cgroup and does not count against the new cgroup's quota, while newly allocated memory is charged to the new cgroup (including pages swapped out to swap space and later swapped back into physical memory). By setting memory.move_charge_at_immigrate we can make the memory occupied by the process migrate to the new cgroup along with the process.

Method:

Enable: echo 1 > memory.move_charge_at_immigrate

Disable: echo 0 > memory.move_charge_at_immigrate

Note: a. Even if this is set to 1, the memory occupied by a task cannot be migrated unless the task is the leader of its thread group. In other words, migrating thread by thread only works for the first thread of a process; migrating a whole process at a time has no such problem.

When memory.move_charge_at_immigrate is 0, even after all the processes in the current cgroup have been moved to other cgroups, the current cgroup may still appear to use a lot of memory, because the memory those processes already occupied was never re-charged. When such a cgroup is removed, who gets charged for that memory? The answer depends on memory.use_hierarchy: if it is 0, the memory is charged to the root cgroup; if it is 1, it is charged to the parent cgroup.

b. When a process is being regrouped, if the memory it uses (memory.usage_in_bytes) is greater than the memory limit of the target group (memory.limit_in_bytes), the migration fails.

[root@localhost memory]# echo 5750 > test_1/tasks

[root@localhost memory]# cat test_1/memory.usage_in_bytes

106496

[root@localhost memory]# cat test_2/memory.usage_in_bytes

0

[root@localhost memory]# cd test_2

[root@localhost test_2]# cat memory.limit_in_bytes

9223372036854771712

[root@localhost test_2]# echo 10240 > memory.limit_in_bytes

[root@localhost test_2]# echo 5750 > tasks

-bash: echo: write error: unable to allocate memory

[root@localhost test_2]# echo 1024000 > memory.limit_in_bytes

[root@localhost test_2]# echo 5750 > tasks

[root@localhost test_2]# cat memory.usage_in_bytes

106496

c. If the process is moved to another group while it is in the middle of requesting memory, the memory accounting for that allocation is not migrated.

2.3. Memory limit

memory.memsw.limit_in_bytes: limit on total memory + swap space usage.

memory.limit_in_bytes: limit on memory usage.

memory.memsw.limit_in_bytes must be greater than or equal to memory.limit_in_bytes.

To remove a limit, set the corresponding value to -1.

Limiting a process's memory footprint this way carries a risk. When a process tries to use more memory than the limit, oom is triggered and the process is killed outright, causing availability problems. Even if the control group's oom killer is turned off, the process is not killed when memory is short, but it stays in the D state (uninterruptible sleep inside a system call) for a long time, parked on the OOM wait queue, which still makes the service unavailable. Therefore memory.limit_in_bytes or memory.memsw.limit_in_bytes should only be used as insurance against an abnormal process exhausting system resources. For example, if a set of processes is expected to use at most 1 GB of memory, the limit can be set to 1.5 GB; then more serious problems are avoided when an anomaly such as a memory leak occurs.
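As a hedged sketch of that insurance configuration (values are illustrative, and the memsw line assumes memory + swap accounting is enabled), for a group expected to use at most 1 GB:

echo 1536M > memory.limit_in_bytes       # 1.5 GB memory ceiling
echo 2G > memory.memsw.limit_in_bytes    # memory + swap ceiling; must be >= memory.limit_in_bytes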

Note: if tasks already exist in the current group, the value written to memory.limit_in_bytes must be greater than the value of memory.usage_in_bytes, otherwise the write fails.

[root@localhost test_2]# cat memory.usage_in_bytes

106496

[root@localhost test_2]# echo 106494 > memory.limit_in_bytes

-bash: echo: write error: device or resource busy

[root@localhost test_2]# echo 106498 > memory.limit_in_bytes

[root@localhost test_2]# cat memory.limit_in_bytes

106496 # the stored value may not be exactly what was written

[root@localhost test_2]#

2.4. Memory resource accounting

memory.memsw.usage_in_bytes: the cgroup's current memory + swap usage.

memory.usage_in_bytes: the cgroup's current memory usage.

memory.memsw.max_usage_in_bytes: the cgroup's maximum memory + swap usage.

memory.max_usage_in_bytes: the cgroup's maximum memory usage.

Thank you for reading. That is the content of "Cgroup limits on CPU, IO, memory and scheduling in Linux systems". After studying this article you should have a deeper understanding of how cgroups limit CPU, IO, memory and scheduling on Linux; the specific usage still needs to be verified in practice.
