What are the Linux limits on CPU usage?


This article explains how Linux can restrict which CPUs a process is allowed to use. The methods introduced here are simple, fast and practical, so let's walk through them step by step.

For CPU-intensive programs, it is not only desirable to get more CPU time, but also to reduce the context switching caused by workload scheduling. On today's multi-core systems each core has its own cache, so if a process is frequently scheduled onto different cores, it pays the cost of cache invalidation and similar overhead. So is there a way to isolate CPU cores, or more precisely, to bind a running process to specified cores? Although all programs are created equal in the eyes of the operating system, some programs are more equal than others.

For those more equal programs, we want to allocate more CPU resources; after all, people are biased. Without further ado, let's look at how to use cgroup to restrict a process to specified CPU cores.

1. View CPU configuration

CPU core numbering usually starts at 0, so on a four-core machine the cores are numbered 0-3. We can learn some information about the CPU by looking at the contents of /proc/cpuinfo:

$ cat /proc/cpuinfo
...
processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 26
model name      : Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
stepping        : 4
microcode       : 0x1f
cpu MHz         : 2666.761
cache size      : 12288 KB
physical id     : 6
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 6
initial apicid  : 6
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer hypervisor lahf_lm ssbd ibrs ibpb stibp tsc_adjust arat spec_ctrl intel_stibp flush_l1d arch_capabilities
bogomips        : 5333.52
clflush size    : 64
cache_alignment : 64
address sizes   : 43 bits physical, 48 bits virtual

processor: the number of the core. This is not the number of a physical CPU core; it is more accurately called the logical core number.

physical id: indicates which physical CPU (socket) the current logical core sits on, also numbered from 0. The value 6 here means this logical core is on the 7th physical CPU.

core id: the number of the core within its physical CPU. If hyper-threading is enabled, each physical CPU core presents two hardware threads, also called logical cores (a different meaning from the logical core numbering above, despite the same name). To confirm whether the server has hyper-threading enabled, check with the following command:

$ cat /proc/cpuinfo | grep -e "core id" -e "physical id"
physical id     : 0
core id         : 0
physical id     : 2
core id         : 0
physical id     : 4
core id         : 0
physical id     : 6
core id         : 0

If the same combination of physical id and core id appears more than once, you can conclude that hyper-threading is enabled. Clearly it is not enabled on my server.
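As an extra cross-check, you can also compare the siblings and cpu cores fields of /proc/cpuinfo: when siblings is larger than cpu cores, each physical core exposes more than one hardware thread, i.e. hyper-threading is on. A small sketch using standard tools:

$ grep -E "siblings|cpu cores" /proc/cpuinfo | sort | uniq -c
# siblings == cpu cores  -> hyper-threading off
# siblings >  cpu cores  -> hyper-threading on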

2. NUMA architecture

This brings in a concept called NUMA (Non-Uniform Memory Access). If multiple CPUs are plugged into the motherboard, the machine uses a NUMA architecture: each CPU occupies its own area and generally has its own independent fan.

A NUMA node contains the CPU, memory and other hardware devices directly attached in that area, usually connected over a PCI-E bus. This also introduces the concept of CPU affinity: a CPU accesses memory on its own NUMA node faster than memory on another node.

You can view the NUMA layout of the machine with the following command:

$ numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 2047 MB
node 0 free: 1335 MB
node distances:
node   0
  0:  10

You can see that this server does not really use a multi-node NUMA layout: there is only one NUMA node, that is, a single CPU with 4 logical cores on it.
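On a machine that did have several NUMA nodes, sysfs shows which CPUs belong to each node, and numactl can start a process bound to one node's CPUs and memory. A hedged sketch using standard sysfs paths and numactl options (sha1sum /dev/zero is just the CPU-burning workload used later in this article):

# Which CPUs belong to NUMA node 0:
$ cat /sys/devices/system/node/node0/cpulist
# Run a command bound to node 0's CPUs and memory:
$ numactl --cpunodebind=0 --membind=0 sha1sum /dev/zero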

3. isolcpus

One of the most important duties of Linux is scheduling processes. A process is just an abstraction of a running program: a series of instructions that the computer follows to complete the actual work. From the hardware point of view, what really executes these instructions is the central processing unit, the CPU. By default, the process scheduler may place a process on any CPU core, because it balances the allocation of computing resources according to load.

To make the effect of the experiment more visible, you can isolate some logical cores so that the system never uses them by default; only processes explicitly assigned to them will run there. To do this, use the kernel parameter isolcpus. For example, if you want the system not to use logical cores 1, 2 and 3 by default, add the following to the kernel parameter list:

isolcpus=1,2,3
# or
isolcpus=1-3

For CentOS 7, you can modify /etc/default/grub directly:

$ cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet isolcpus=1,2,3"
GRUB_DISABLE_RECOVERY="true"

Then rebuild grub.cfg:

$ grub2-mkconfig -o /boot/grub2/grub.cfg
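Before and after the reboot you can confirm that the parameter actually took effect. A small sketch (the /sys/devices/system/cpu/isolated file is present on reasonably recent kernels):

# Check that the regenerated config carries the parameter:
$ grep isolcpus /boot/grub2/grub.cfg
# After reboot, check the running kernel's command line and the isolated CPU list:
$ cat /proc/cmdline
$ cat /sys/devices/system/cpu/isolated
# should print 1-3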

After rebooting, the system no longer schedules ordinary processes onto logical cores 1, 2 and 3; by default only core 0 is used. Find a program that drives a CPU to 100% (the program used in the previous article) and use top to check CPU usage:

After running top, press the number 1 on the list page to see the usage of every CPU.

You can see that only core 0 is used by the system. Next, let's look at how to bind a program to a specific CPU core.

4. Create cgroup

Binding a program to specified cores is actually very simple: just set up the cpuset controller. systemd can manage the cgroup controllers under its control, but only a limited set of them (CPU, memory and BlockIO), not the cpuset controller. Although systemd does not support cpuset yet, I believe it will in the future. In addition, there is a slightly clumsier way to achieve the same goal, which will be described later.
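For contrast, the controllers that systemd does manage directly can be driven from the command line with standard resource-control properties. A minimal sketch, shown only to illustrate the contrast with cpuset, applied to a hypothetical foo.service unit:

$ systemctl set-property foo.service CPUQuota=50%      # cpu controller
$ systemctl set-property foo.service MemoryLimit=512M  # memory controller
$ systemctl set-property foo.service BlockIOWeight=500 # blkio controller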

All cgroup operations are based on the cgroup virtual filesystem in the kernel, and using cgroup is easy: just mount the filesystem. By default it is mounted under the /sys/fs/cgroup directory. Take a look at this directory:

$ ll /sys/fs/cgroup
total 0
drwxr-xr-x 2 root root  0 Mar 28  2020 blkio
lrwxrwxrwx 1 root root 11 Mar 28  2020 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Mar 28  2020 cpuacct -> cpu,cpuacct
drwxr-xr-x 2 root root  0 Mar 28  2020 cpu,cpuacct
drwxr-xr-x 2 root root  0 Mar 28  2020 cpuset
drwxr-xr-x 4 root root  0 Mar 28  2020 devices
drwxr-xr-x 2 root root  0 Mar 28  2020 freezer
drwxr-xr-x 2 root root  0 Mar 28  2020 hugetlb
drwxr-xr-x 2 root root  0 Mar 28  2020 memory
lrwxrwxrwx 1 root root 16 Mar 28  2020 net_cls -> net_cls,net_prio
drwxr-xr-x 2 root root  0 Mar 28  2020 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Mar 28  2020 net_prio -> net_cls,net_prio
drwxr-xr-x 2 root root  0 Mar 28  2020 perf_event
drwxr-xr-x 2 root root  0 Mar 28  2020 pids
drwxr-xr-x 4 root root  0 Mar 28  2020 systemd
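The same information can be confirmed from the mount table. On a system where the cpuset hierarchy were not mounted, the standard cgroup v1 mount invocation would look like this (a sketch; not needed on CentOS 7, where it is mounted automatically):

$ mount | grep cpuset
# Manual mount, only if the hierarchy were missing:
$ mkdir -p /sys/fs/cgroup/cpuset
$ mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset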

You can see that the cpuset controller has been created and mounted by default. Take a look at what's in the cpuset directory:

$ ll /sys/fs/cgroup/cpuset
total 0
-rw-r--r-- 1 root root 0 Mar 28  2020 cgroup.clone_children
--w--w--w- 1 root root 0 Mar 28  2020 cgroup.event_control
-rw-r--r-- 1 root root 0 Mar 28  2020 cgroup.procs
-r--r--r-- 1 root root 0 Mar 28  2020 cgroup.sane_behavior
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.cpu_exclusive
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.cpus
-r--r--r-- 1 root root 0 Mar 28  2020 cpuset.effective_cpus
-r--r--r-- 1 root root 0 Mar 28  2020 cpuset.effective_mems
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.mem_exclusive
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.mem_hardwall
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.memory_migrate
-r--r--r-- 1 root root 0 Mar 28  2020 cpuset.memory_pressure
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.memory_pressure_enabled
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.memory_spread_page
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.memory_spread_slab
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.mems
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.sched_load_balance
-rw-r--r-- 1 root root 0 Mar 28  2020 cpuset.sched_relax_domain_level
-rw-r--r-- 1 root root 0 Mar 28  2020 notify_on_release
-rw-r--r-- 1 root root 0 Mar 28  2020 release_agent
-rw-r--r-- 1 root root 0 Mar 28  2020 tasks

This directory only contains the default configuration; no child cgroups have been created yet. Next, let's create a child cgroup under cpuset and set the corresponding binding parameters:

$ mkdir -p /sys/fs/cgroup/cpuset/test
$ echo "3" > /sys/fs/cgroup/cpuset/test/cpuset.cpus
$ echo "0" > /sys/fs/cgroup/cpuset/test/cpuset.mems

This first creates a child cgroup under cpuset called test, and then binds the 4th logical core, cpu3, to it. For the cpuset.mems parameter, each memory node corresponds one-to-one with a NUMA node; if the process needs a lot of memory, all NUMA nodes can be configured. This is where the NUMA concept comes in: for performance reasons, the configured logical cores and memory nodes should generally belong to the same NUMA node, and their mapping can be learned from the numactl --hardware command. Obviously my host does not use a multi-node NUMA layout, so it is simply set to node 0.
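Note that cpuset.cpus accepts comma-separated lists and ranges, so a child cgroup can also be given several cores at once. A small sketch (the values are only examples):

$ echo "1-2" > /sys/fs/cgroup/cpuset/test/cpuset.cpus   # cores 1 and 2
$ echo "0,3" > /sys/fs/cgroup/cpuset/test/cpuset.cpus   # cores 0 and 3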

View the test directory:

$ cd /sys/fs/cgroup/cpuset/test
$ ll
total 0
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cgroup.clone_children
--w--w---- 1 root root 0 Mar 28 17:07 cgroup.event_control
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cgroup.procs
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.cpu_exclusive
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.cpus
-r--r--r-- 1 root root 0 Mar 28 17:07 cpuset.effective_cpus
-r--r--r-- 1 root root 0 Mar 28 17:07 cpuset.effective_mems
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.mem_exclusive
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.mem_hardwall
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.memory_migrate
-r--r--r-- 1 root root 0 Mar 28 17:07 cpuset.memory_pressure
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.memory_spread_page
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.memory_spread_slab
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.mems
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.sched_load_balance
-rw-rw-r-- 1 root root 0 Mar 28 17:07 cpuset.sched_relax_domain_level
-rw-rw-r-- 1 root root 0 Mar 28 17:07 notify_on_release
-rw-rw-r-- 1 root root 0 Mar 28 17:07 tasks

$ cat cpuset.cpus
3
$ cat cpuset.mems
0

The tasks file is currently empty, that is, no processes are running in this cgroup yet. You need a way to make a specified process run in it. There are two ways:

Write the PID of the running process to the tasks file

Create a daemon using systemd and write the settings of cgroup to the service file (essentially the same as method 1).

Let's take a look at method 1. First, run a program:

$ nohup sha1sum /dev/zero &
[1] 3767

Then write its PID into the tasks file in the test directory:

$echo "3767" > / sys/fs/cgroup/cpuset/test/tasks

View CPU usage:

You can see that the core binding has taken effect: the process with PID 3767 has been scheduled onto cpu3.
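A quick way to confirm this without a screenshot is the PSR column of ps, which reports the processor a process last ran on (a standard ps output field):

$ ps -o pid,psr,comm -p 3767
# The PSR column should show 3, matching cpuset.cpus above.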

Now for method 2. Although systemd does not support using cpuset to pin a Service to specific CPUs, there is still a workaround. The contents of the Service file are as follows:

$ cat /etc/systemd/system/foo.service
[Unit]
Description=foo
After=syslog.target network.target auditd.service

[Service]
ExecStartPre=/usr/bin/mkdir -p /sys/fs/cgroup/cpuset/testset
ExecStartPre=/bin/bash -c '/usr/bin/echo "2" > /sys/fs/cgroup/cpuset/testset/cpuset.cpus'
ExecStartPre=/bin/bash -c '/usr/bin/echo "0" > /sys/fs/cgroup/cpuset/testset/cpuset.mems'
ExecStart=/bin/bash -c "/usr/bin/sha1sum /dev/zero"
ExecStartPost=/bin/bash -c '/usr/bin/echo $MAINPID > /sys/fs/cgroup/cpuset/testset/tasks'
ExecStopPost=/usr/bin/rmdir /sys/fs/cgroup/cpuset/testset
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service and view CPU usage:

The processes in this service are indeed dispatched to cpu2.
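One way to verify this from the command line (a sketch, using only standard systemctl and ps options):

$ MAIN_PID=$(systemctl show -p MainPID foo.service | cut -d= -f2)
$ cat /sys/fs/cgroup/cpuset/testset/tasks
$ ps -o pid,psr,comm -p "$MAIN_PID"
# PSR should report 2 for the sha1sum process.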

5. Back to Docker

Finally, back to Docker. Docker essentially takes technologies implemented at the bottom of the system, such as cgroup and namespace, integrates them into a tool and distributes it via images; that is what Docker is. I will not expand on this since we all know it. For Docker, is there a way to keep a container running on one or more specific CPUs all the time? It is actually very simple: just use the --cpuset-cpus parameter!

Let's demonstrate by restricting a container to CPU core number 1:

→ docker run -d --name stress --cpuset-cpus="1" progrium/stress -c 4

View the load of the host CPU:

Only Cpu1 reaches 100%; the other CPUs are not used by the container.
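The setting can also be read back from the container, and even changed while it is running. A sketch using standard Docker commands:

$ docker inspect -f '{{.HostConfig.CpusetCpus}}' stress
# prints 1 for the container started above
$ docker update --cpuset-cpus="0,1" stress
# rebinds the running container to cores 0 and 1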

If you have read the first article in this series, you should know that on newer systems that use systemd as init (such as CentOS 7), the system creates three top-level slices by default: System, User and Machine. machine.slice is the default location for all virtual machines and Linux containers, and Docker is effectively a variant of machine.slice; you can treat it as machine.slice.

If Kubernetes is running on the system, machine.slice becomes kubepods:

To make cgroups easier to manage, systemd creates a subsystem (child cgroup) for each slice, such as the docker subsystem:

Then, according to the container's settings, it is placed under the appropriate controllers. Here we care about the cpuset controller; let's see what is in its directory:

View the docker directory:

You can see that Docker creates a subdirectory for each container; the one starting with 7766… corresponds to the container we created earlier:

→ docker ps | grep stress
7766580dd0d7   progrium/stress   "/usr/bin/stress -v …"   36 minutes ago   Up 36 minutes   stress
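docker ps truncates the ID; the full ID that names the cgroup directory can be obtained like this (standard Docker commands):

$ docker ps --no-trunc -q
$ docker inspect -f '{{.Id}}' stress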

Let's verify the configuration in this directory:

$ cd /sys/fs/cgroup/cpuset/docker/7766580dd0d7d9728f3b603ed470b04d0cac1dd923f7a142fec614b12a4ba3be
$ cat cpuset.cpus
1
$ cat cpuset.mems
0
$ cat tasks
6536
6562
6563
6564
6565
$ ps -ef | grep stress
root      6536  6520  0 10:08 ?        00:00:00 /usr/bin/stress --verbose -c 4
root      6562  6536 24 10:08 ?        00:09:50 /usr/bin/stress --verbose -c 4
root      6563  6536 24 10:08 ?        00:09:50 /usr/bin/stress --verbose -c 4
root      6564  6536 24 10:08 ?        00:09:50 /usr/bin/stress --verbose -c 4
root      6565  6536 24 10:08 ?        00:09:50 /usr/bin/stress --verbose -c 4

At this point, I believe you have a deeper understanding of how Linux limits CPU usage. Why not try it out in practice?
