This article walks through the logic of Linux process scheduling: how the kernel picks the next process from the ready queue and how it performs the context switch, following the kernel source code step by step.
1 Process Scheduling in Operating System Theory
Anyone who has studied operating systems knows that process scheduling boils down to two steps:
Select a process from the ready queue according to some algorithm.
Perform a process context switch.
The second step can be further divided into:
Switching virtual memory.
Switching registers, that is, saving the registers of the previous process into its process data structure and loading the next process's saved values from its data structure back into the registers. A toy sketch of these two steps follows.
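To make these two steps concrete before diving into the kernel source, here is a toy, purely illustrative sketch (all names are hypothetical; none of this is kernel code): step 1 picks the task with the smallest "virtual time" from a ready queue, and step 2 stands in for the context switch.

    #include <stddef.h>
    #include <stdio.h>

    struct toy_task {
        const char *name;
        unsigned long vruntime;   // stand-in for priority / virtual time
    };

    // step 1: pick the task with the smallest vruntime from the ready queue
    static struct toy_task *toy_pick_next(struct toy_task *queue, size_t n)
    {
        struct toy_task *next = &queue[0];
        for (size_t i = 1; i < n; i++)
            if (queue[i].vruntime < next->vruntime)
                next = &queue[i];
        return next;
    }

    // step 2: "context switch": in a real kernel this switches the page
    // table/ASID (virtual memory) and then the general registers
    static struct toy_task *toy_context_switch(struct toy_task *prev, struct toy_task *next)
    {
        (void)prev;     // here we only pretend; there is nothing real to save
        return next;    // "next" is now the running task
    }

    int main(void)
    {
        struct toy_task rq[] = { { "A", 100 }, { "B", 40 }, { "C", 70 } };
        struct toy_task *curr = &rq[0];

        struct toy_task *next = toy_pick_next(rq, 3);
        if (next != curr)
            curr = toy_context_switch(curr, next);

        printf("now running: %s\n", curr->name);   // prints B
        return 0;
    }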
The logic related to virtual memory will be analyzed in detail in later articles; this article only summarizes it briefly.
Here we analyze, from the perspective of the kernel source code, how Linux implements the core logic of process scheduling, which largely follows operating-system theory.
2 The Main Scheduling Functions
The main entry points of Linux process scheduling are schedule() and __schedule(). Their relationship can be seen from the source code:
// kernel/sched/core.c:3522
void schedule(void)
{
    ...
    // preemption is disabled during scheduling
    preempt_disable();
    __schedule(false);                        // :3529
    // preemption is allowed again
    sched_preempt_enable_no_resched();
    ...
}

// kernel/sched/core.c:3395
void __schedule(bool preempt)
{
    ...
}
When a process actively gives up the CPU, for example via the yield system call, it calls schedule(). During the switch, preemption of the current process is disabled; in plain terms, the current process is allowed to finish this switch in peace without being kicked off the CPU halfway through.
Line 3529 shows that schedule() calls __schedule(false); passing false indicates that this is voluntary scheduling by the process, not preemption.
After the current process has finished the scheduling logic, preemption is enabled again, that is, other processes may once more take the CPU away from it.
If a process hogs the CPU, the kernel's preemption mechanisms (such as the periodic scheduling tick mentioned in the previous article) will eventually invoke the scheduler and kick the current process off the CPU.
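For reference, the simplest user-space way to trigger a voluntary switch is sched_yield(), which asks the kernel to relinquish the CPU and ends up in schedule(); a minimal sketch:

    #include <sched.h>

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            sched_yield();      // voluntarily give up the CPU
        return 0;
    }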
The framework of the __schedule() method is the main subject of this article. Since the code is long, I will only describe the core parts.
3 Overview of the Core Logic of the __schedule() Method
Let's first look at the framework of the core scheduling function __schedule() in the Linux kernel:
// kernel/sched/core.c:3395
void __schedule(bool preempt)
{
    struct task_struct *prev, *next;
    unsigned long *switch_count;
    struct rq *rq;
    int cpu;

    // get the current CPU
    cpu = smp_processor_id();
    // get the run queue on the current CPU
    rq = cpu_rq(cpu);
    // get the process currently running on this queue
    prev = rq->curr;
    ...
    // nivcsw: number of involuntary switches of the process
    switch_count = &prev->nivcsw;             // :3430
    if (!preempt ...) {
        ...
        // nvcsw: number of voluntary switches of the process
        switch_count = &prev->nvcsw;          // :3456
    }
    ...
    // 1. select a process from the ready queue according to some algorithm
    next = pick_next_task(rq, prev, &rf);

    if (prev != next) {
        rq->nr_switches++;
        rq->curr = next;
        ++*switch_count;

        // 2. perform the process context switch
        rq = context_switch(rq, prev, next, &rf);
    }
    ...
}
As you can see, the core steps of __schedule() (steps 1 and 2 above) match operating-system theory.
In addition, during a switch the kernel counts the number of context switches depending on whether scheduling was initiated voluntarily (preempt is false) or involuntarily; the counters are stored in the process's task_struct:
// include/linux/sched.h:592
struct task_struct {
    ...
    // voluntary context switches
    unsigned long nvcsw;                      // :811
    // involuntary context switches
    unsigned long nivcsw;                     // :812
    ...
};
On Linux we can use the pidstat command to see a process's voluntary and involuntary context switches. Let's write a simple C program to test this:
// test.c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    while (1) {
        // sleep for one second in each iteration,
        // i.e. voluntarily give up the CPU;
        // in theory this causes one voluntary switch per second
        sleep(1);
    }
    return 0;
}
Then compile and run it:

    gcc test.c -o test
    ./test
Then view the context switch counts via pidstat.
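For example, assuming the sysstat package is installed, pidstat -w reports task-switching activity, with cswch/s showing voluntary and nvcswch/s showing involuntary switches per second:

    pidstat -w -p $(pidof test) 1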
As you can see, the test program voluntarily switches context about once per second, which matches our expectation; the reported numbers come from the task_struct counters described above.
Next, the two core steps of process scheduling are analyzed in detail:
Selecting a process from the ready queue via pick_next_task().
Performing the context switch via context_switch().
4 pick_next_task(): Select a Process from the Ready Queue
Let's first recall where pick_next_task() sits inside the __schedule() method:
// kernel/sched/core.c:3395
void __schedule(bool preempt)
{
    ...
    // rq is the run queue on the current CPU
    next = pick_next_task(rq, prev, &rf);     // :3459
    ...
}
Follow the call chain to explore:
// kernel/sched/core.c:3316
/*
 * Pick up the highest-prio task:
 */
struct task_struct *pick_next_task(struct rq *rq, struct task_struct *prev, ...)
{
    struct task_struct *p;
    ...
    p = fair_sched_class.pick_next_task(rq, prev, ...);      // :3331
    ...
    if (!p)
        p = idle_sched_class.pick_next_task(rq, prev, ...);  // :3337

    return p;
}
As the comment on pick_next_task() says, the purpose of this method is to find the process with the highest priority. Since most processes in a system belong to the fair scheduling class, we focus on the fair-scheduling logic.
As the framework above shows, line 3331 first tries to pick a process from the fair-scheduling run queue, and line 3337 falls back to the per-CPU IDLE process if nothing was found:
// kernel/sched/idle.c:442
const struct sched_class idle_sched_class = {
    ...
    .pick_next_task = pick_next_task_idle,    // :451
    ...
};

// kernel/sched/idle.c:385
struct task_struct *pick_next_task_idle(struct rq *rq, ...)
{
    ...
    // every CPU run queue has an IDLE process
    return rq->idle;
}
Next, we focus on the fair scheduling class's selection algorithm, fair_sched_class.pick_next_task():
// kernel/sched/fair.c:10506
const struct sched_class fair_sched_class = {
    ...
    .pick_next_task = pick_next_task_fair,    // :10515
    ...
};

// kernel/sched/fair.c:6898
static struct task_struct *pick_next_task_fair(struct rq *rq, ...)
{
    // cfs_rq is the fair-scheduling run queue on the current CPU
    struct cfs_rq *cfs_rq = &rq->cfs;
    struct sched_entity *se;
    struct task_struct *p;
    int new_tasks;

again:
    // 1. if there is no runnable process on the current CPU, run the load-balancing logic
    if (!cfs_rq->nr_running)
        goto idle;

    // 2. there are runnable processes on the current CPU: pick a high-priority process p
    do {
        struct sched_entity *curr = cfs_rq->curr;
        ...
        se = pick_next_entity(cfs_rq, curr);
        cfs_rq = group_cfs_rq(se);
    } while (cfs_rq);

    p = task_of(se);
    ...
idle:
    // migrate processes over via load balancing
    new_tasks = idle_balance(rq, rf);         // :7017
    ...
    if (new_tasks > 0)
        goto again;

    return NULL;
}
The logic of pick_next_task_fair() is fairly involved, but the core idea consists of two steps:
If there is no runnable process on the current CPU, run the load-balancing logic to migrate processes over from other CPUs.
If there are runnable processes on the current CPU, pick a high-priority process from the queue; the so-called high-priority process is simply the one with the least virtual time.
Next, let's break these two steps down one at a time.
4.1 Load Balancing Logic
To balance the load across CPUs, the kernel migrates processes from busy CPUs to an idle CPU when that CPU becomes idle. Of course, if a process has set a CPU affinity, that is, it may only run on certain CPUs, then it cannot be migrated.
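For example, a process can pin itself to one logical CPU with sched_setaffinity() (Linux-specific; shown here as a minimal sketch), after which the load balancer will not migrate it elsewhere:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                      // allow only logical CPU 0

        // pid 0 means "the calling process"
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }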
The core logic of load balancing is the idle_balance method:
// kernel/sched/fair.c:9851
static int idle_balance(struct rq *this_rq, ...)
{
    int this_cpu = this_rq->cpu;
    struct sched_domain *sd;
    int pulled_task = 0;
    ...
    for_each_domain(this_cpu, sd) {           // :9897
        ...
        pulled_task = load_balance(this_cpu, ...);
        ...
        if (pulled_task ...)                  // :9912
            break;
    }
    ...
    return pulled_task;
}
The logic of idle_balance() is also fairly complex, but in general it iterates over all scheduling domains of the current CPU until it manages to migrate a process over.
There is one core concept here: sched_domain, the scheduling domain. To see what a scheduling domain is, consider a typical CPU topology:
The kernel divides the processors into two NUMA nodes according to their distance from main memory, with two processors per node. NUMA stands for Non-Uniform Memory Access: processors in a NUMA node access different memory nodes at different speeds, and the L1/L2/L3 caches are not shared across NUMA nodes.
Each NUMA node contains two processors; different processors in the same NUMA node share the L3 cache.
Each processor contains two CPU cores; different cores in the same processor share the L2 and L3 caches.
Each core contains two hyperthreads; different hyperthreads of the same core share the L1, L2, and L3 caches.
What applications see through system APIs are these hyperthreads, also called logical CPUs; from here on we will simply say logical CPU, as the example below shows.
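For instance, a quick way to see how many logical CPUs are visible to user space (assuming Linux/glibc, where _SC_NPROCESSORS_ONLN is available):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long online = sysconf(_SC_NPROCESSORS_ONLN);   // logical CPUs currently online
        printf("online logical CPUs: %ld\n", online);
        return 0;
    }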
When a process accesses data at some address, it looks in the L1 cache first; on a miss it looks in L2, then L3, and finally goes to main memory. In terms of access speed, L1 > L2 > L3 > main memory, so the goal when migrating a process is to let it keep hitting the caches as much as possible.
The kernel builds scheduling domains from the levels of this topology: the higher the level, the larger the scope of the scheduling domain and the greater the probability of cache invalidation. One goal of process migration is therefore to find a migratable process in the lowest-level scheduling domain possible.
Line 9897 of the idle_balance() code above, for_each_domain(this_cpu, sd), does exactly that: this_cpu is the logical CPU (the lowest level, a hyperthread) and sd is the scheduling domain. This line expands the scheduling domain level by level.
// kernel/sched/sched.h:1268
#define for_each_domain(cpu, __sd) \
    for (__sd = cpu_rq(cpu)->sd; __sd; __sd = __sd->parent)
Line 9912 of idle_balance() breaks out of the loop as soon as a process has been successfully migrated to the current logical CPU within some scheduling domain. You can see the kernel goes to great lengths to preserve application performance.
After load balancing, there is likely to be a runnable process in the queue of the currently idle logical CPU, so the next step is to pick the most suitable process from that queue.
4.2 Selecting a High-Priority Process
Next, we look at how a high-priority process is selected. In the fair scheduling class, a virtual time is computed from each process's actual priority and running time; the smaller the virtual time, the more likely the process is to be chosen. That is why it is called fair scheduling.
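To make "virtual time" concrete, here is a simplified, hypothetical sketch of how it could be accumulated. The real kernel uses calc_delta_fair() with per-nice-level weight tables, so the helper name and the second weight value here are illustrative only:

    #include <stdio.h>

    #define NICE_0_WEIGHT 1024   // weight of a nice-0 task

    // vruntime advances by the actual runtime scaled inversely with the weight:
    // the heavier (higher priority) a task, the slower its vruntime grows,
    // so it gets picked more often
    static unsigned long long update_vruntime(unsigned long long vruntime,
                                              unsigned long long delta_exec_ns,
                                              unsigned long weight)
    {
        return vruntime + delta_exec_ns * NICE_0_WEIGHT / weight;
    }

    int main(void)
    {
        // both tasks run 10 ms of real time; the heavier task accrues less vruntime
        printf("weight 1024: +%llu ns of vruntime\n", update_vruntime(0, 10000000ULL, 1024));
        printf("weight 2048: +%llu ns of vruntime\n", update_vruntime(0, 10000000ULL, 2048));
        return 0;
    }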
The following is the core logic for selecting a high-priority process:
// kernel/sched/fair.c:6898
static struct task_struct *pick_next_task_fair(struct rq *rq, ...)
{
    // cfs_rq is the fair-scheduling run queue on the current CPU
    struct cfs_rq *cfs_rq = &rq->cfs;
    struct sched_entity *se;
    struct task_struct *p;

    // 2. there are runnable processes on the current CPU: pick a high-priority process p
    do {
        struct sched_entity *curr = cfs_rq->curr;
        ...
        se = pick_next_entity(cfs_rq, curr);
        cfs_rq = group_cfs_rq(se);
    } while (cfs_rq);

    // get the process wrapped by the scheduling entity
    p = task_of(se);                          // :6956
    ...
    return p;
}
The kernel introduces the concept of a scheduling entity, represented by the sched_entity data structure; scheduling is actually done in terms of scheduling entities:
// include/linux/sched.h:447
struct sched_entity {
    ...
    // the parent of this scheduling entity
    struct sched_entity *parent;
    // the run queue this scheduling entity is on
    struct cfs_rq *cfs_rq;                    // :468
    // the run queue owned by this scheduling entity, i.e. the queue of its
    // child entities; a process is a leaf entity and owns no queue
    struct cfs_rq *my_q;
    ...
};
Each process corresponds to a scheduling entity, and several scheduling entities can be grouped into a higher-level scheduling entity, giving a recursive structure. The do-while loop above starts from the top-level fair run queue of the current logical CPU (cfs_rq->curr) and, level by level, picks the entity with the least virtual time, until the chosen entity is a process.
The reason for this design is that the kernel wants scheduling to be fair at every level.
Take a Docker container as an example: several processes run inside the container. These processes belong to one scheduling entity, which sits at the same level as the scheduling entities of the host's processes. So even if someone forks like crazy inside the container, the kernel sums the virtual time of those processes and makes a fair choice against the other host processes; the container's processes cannot hog the CPU.
The entity with the least virtual time is selected by se = pick_next_entity(cfs_rq, curr); the corresponding logic is:
// kernel/sched/fair.c:4102
struct sched_entity *pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
    // the scheduling entity with the least virtual time in the fair run queue
    struct sched_entity *left = __pick_first_entity(cfs_rq);
    struct sched_entity *se;

    // if no entity was found in the tree, or it is not "smaller" than the
    // currently running entity, keep the current entity
    if (!left || (curr && entity_before(curr, left)))
        left = curr;

    se = left;
    return se;
}

// kernel/sched/fair.c:489
int entity_before(struct sched_entity *a, struct sched_entity *b)
{
    // compare the virtual times
    return (s64)(a->vruntime - b->vruntime) < 0;
}

From the code above we can see that pick_next_entity() picks the leftmost scheduling entity in the current fair run queue cfs_rq; the leftmost entity has the smallest virtual time, i.e. it is the best choice.

And from __pick_first_entity() below we learn that the scheduling entities of a fair run queue cfs_rq are organized as a red-black tree, whose leftmost node is the minimum:

// kernel/sched/fair.c:565
struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq)
{
    struct rb_node *left = rb_first_cached(&cfs_rq->tasks_timeline);

    if (!left)
        return NULL;

    return rb_entry(left, struct sched_entity, run_node);
}

// include/linux/rbtree.h:91
// the cached leftmost node of the red-black tree
#define rb_first_cached(root) (root)->rb_leftmost
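To illustrate the "leftmost entity = smallest virtual time" idea, here is a toy sketch under a simplifying assumption: it keeps entities in a sorted linked list instead of the kernel's red-black tree, so picking the head plays the role of picking the leftmost tree node. All names are hypothetical:

    #include <stdio.h>

    struct toy_entity {
        long long vruntime;
        struct toy_entity *next;
    };

    // same comparison idea as the kernel's entity_before()
    static int toy_before(const struct toy_entity *a, const struct toy_entity *b)
    {
        return a->vruntime - b->vruntime < 0;
    }

    // keep the queue ordered on insert, so the head is always the minimum
    static void toy_enqueue(struct toy_entity **head, struct toy_entity *se)
    {
        while (*head && toy_before(*head, se))
            head = &(*head)->next;
        se->next = *head;
        *head = se;
    }

    int main(void)
    {
        struct toy_entity a = {100}, b = {40}, c = {70};
        struct toy_entity *rq = NULL;

        toy_enqueue(&rq, &a);
        toy_enqueue(&rq, &b);
        toy_enqueue(&rq, &c);

        // the "pick first entity" step: the head has the least vruntime
        printf("picked vruntime: %lld\n", rq->vruntime);   // prints 40
        return 0;
    }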
Based on this analysis, let's return to the Docker example: a host runs two ordinary processes A and B, plus one Docker container containing processes c1, c2, and c3.
There are then two levels of scheduling entities in the system: the top level contains A, B, and the group c1+c2+c3; below that are c1, c2, and c3. Let's go through the selection logic case by case:
1) Suppose the virtual times are: A: 100s, B: 200s, c1: 50s, c2: 100s, c3: 80s.
Selection logic: first compare the virtual times of A, B, and c1+c2+c3 and find that A is the smallest. Since A is already a process, A is selected; if A's virtual time is less than that of the currently running process, A runs next, otherwise the current process keeps running.
2) Suppose the virtual times are: A: 100s, B: 200s, c1: 50s, c2: 30s, c3: 10s.
Selection logic: first compare the virtual times of A, B, and c1+c2+c3 and find that c1+c2+c3 is the smallest. Since the selected scheduling entity is not a process but a group of processes, continue selecting at the next level: compare the virtual times of c1, c2, and c3 and find that c3 is the smallest. If c3's virtual time is less than that of the currently running process, c3 runs next, otherwise the current process keeps running.
That concludes the logic of selecting a high-priority process for scheduling; let's summarize.
4.3 Summary of pick_next_task()
When selecting a process to schedule, the kernel first checks whether there is a runnable process on the current CPU. If not, it runs the migration logic to pull processes over from other CPUs; if so, it picks the process with the least virtual time.
When the kernel migrates a process between logical CPUs, it tries, in order to preserve the migrated process's performance, that is, to avoid invalidating its L1/L2/L3 caches, to involve logical CPUs that share as much cache as possible, in other words logical CPUs that are as "close" to each other as possible.
The kernel abstracts processes into scheduling entities so that it can schedule a group of processes as a unit and keep things fair at every scheduling level.
The so-called high-priority process is simply the one with the least virtual time, which is computed dynamically from the process's actual priority and its running time.
5 context_switch(): Perform the Context Switch
Having picked a suitable process, it is time to perform the actual switch; let's return to the __schedule() method.
// kernel/sched/core.c:3395
void __schedule(bool preempt)
{
    struct task_struct *prev, *next;
    ...
    // 1. select a process from the ready queue according to some algorithm
    next = pick_next_task(rq, prev, ...);     // :3459
    ...
    if (prev != next) {
        rq->curr = next;
        // 2. perform the process context switch
        rq = context_switch(rq, prev, next, ...);   // :3485
    }
    ...
}
The core of the context switch is the context_switch() method:
// kernel/sched/core.c:2804
struct rq *context_switch(..., struct task_struct *prev, struct task_struct *next, ...)
{
    struct mm_struct *mm, *oldmm;
    ...
    mm = next->mm;
    oldmm = prev->active_mm;
    ...
    // 1. switch virtual memory
    switch_mm_irqs_off(oldmm, mm, next);
    ...
    // 2. switch register state
    switch_to(prev, next, prev);
    ...
}
In the code above I omitted some details and kept only the core logic we care about: context_switch() consists of two steps, switching virtual memory and switching register state. Let's expand on each.
5.1 Switching Virtual Memory
First, a brief recap of a few points about virtual memory (a toy model follows this list):
A process cannot access physical memory directly; it goes through the virtual-to-physical memory mapping.
Each process has its own independent virtual address space. For example, process A may map virtual address 0x1234 to physical address 0x4567 while process B maps virtual address 0x1234 to 0x3456; that is, different processes can use the same virtual address. If two processes map a virtual address to the same physical memory, they can communicate through shared memory.
A process maps virtual memory to physical memory through a multi-level page table. If we simply think of this mechanism as a map data structure, then different processes maintain different maps.
Translating through this "map" means walking the multi-level page table, which requires several memory accesses and is therefore slow. The kernel therefore caches frequently used translations in the TLB, relying on the locality principle.
Because different processes can use the same virtual address pointing at different physical addresses, TLB entries must be disambiguated per process. This is done with the ASID (Address Space ID), which is unique per process, much like a tenant ID in a multi-tenant system.
A process's virtual address space is described by the mm_struct data structure; the mm field of task_struct points to it, and the "map" information mentioned above lives inside mm_struct.
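As a toy model of the TLB and ASID points above (illustration only; this is neither kernel code nor real hardware behavior): think of the TLB as a small cache keyed by (ASID, virtual page), with the per-process page table as the authoritative "map" consulted on a miss.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct tlb_entry {
        uint16_t asid;      // which process's address space this entry belongs to
        uint64_t vpage;     // virtual page number
        uint64_t ppage;     // physical page number
        bool     valid;
    };

    #define TLB_SIZE 64
    static struct tlb_entry tlb[TLB_SIZE];

    // look up a virtual page for a given process (ASID); return true on a TLB hit
    static bool tlb_lookup(uint16_t asid, uint64_t vpage, uint64_t *ppage)
    {
        for (int i = 0; i < TLB_SIZE; i++) {
            if (tlb[i].valid && tlb[i].asid == asid && tlb[i].vpage == vpage) {
                *ppage = tlb[i].ppage;
                return true;
            }
        }
        return false;   // miss: fall back to walking the process's page table
    }

    int main(void)
    {
        // pretend the process with ASID 7 maps virtual page 0x1234 to physical page 0x4567
        tlb[0] = (struct tlb_entry){ .asid = 7, .vpage = 0x1234, .ppage = 0x4567, .valid = true };

        uint64_t ppage;
        // same virtual page, different ASID (a different process): no hit
        printf("asid 9 hit: %d\n", tlb_lookup(9, 0x1234, &ppage));   // 0
        printf("asid 7 hit: %d\n", tlb_lookup(7, 0x1234, &ppage));   // 1
        return 0;
    }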
Virtual memory will be analyzed further in later articles; the points above are enough for now. Let's look at the core logic of switching virtual memory:
// include/linux/mmu_context.h:14
#define switch_mm_irqs_off switch_mm

// arch/arm64/include/asm/mmu_context.h:241
void switch_mm(struct mm_struct *prev, struct mm_struct *next)
{
    // if the two processes do not share the same address space
    if (prev != next)
        __switch_mm(next);
    ...
}

// arch/arm64/include/asm/mmu_context.h:224
void __switch_mm(struct mm_struct *next)
{
    unsigned int cpu = smp_processor_id();
    ...
    check_and_switch_context(next, cpu);
}
check_and_switch_context() then performs the actual virtual memory switch:
// arch/arm64/mm/context.c:194
void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
{
    ...
    u64 asid;
    // get the ASID of the next process
    asid = atomic64_read(&mm->context.id);            // :218
    ...
    // bind the next process's ASID to the current CPU
    atomic64_set(&per_cpu(active_asids, cpu), asid);  // :236
    // switch the page table, i.e. the "map" mentioned above;
    // the ASID and the "map" are written into the corresponding registers
    cpu_switch_mm(mm->pgd, mm);                       // :248
}
check_and_switch_context() roughly does two things:
Bind the next process's ASID to the current CPU, so that the physical addresses the TLB translates to belong to the next process.
Fetch the next process's "map", i.e. its page table (the mm->pgd field), and switch to it, so that on a TLB miss the current CPU knows which "map" to use to translate virtual addresses.
cpu_switch_mm() involves a fair amount of assembly, which I will not post here; essentially it writes the ASID and the page table (the "map") into the corresponding registers.
5.2 Switching the General Registers
After virtual memory has been switched, the general registers are switched by switch_to(prev, next, ...). This call is also the watershed of the switch: the moment it completes, the current CPU is executing the next process's code.
Take arm64 as an example:
// arch/arm64/kernel/process.c:422
struct task_struct *__switch_to(struct task_struct *prev, struct task_struct *next)
{
    ...
    // the actual switch
    cpu_switch_to(prev, next);                // :444
    ...
}
cpu_switch_to() is a classic piece of assembly; it looks long, but it is not hard to follow.
// arch/arm64/kernel/entry.S:1040
// x0 -> prev
// x1 -> next
ENTRY(cpu_switch_to)
    // x10 holds the offset of the task_struct->thread.cpu_context field
    mov x10, #THREAD_CPU_CONTEXT        // :1041

    // ===== save prev's context =====
    // x8 points to prev->thread.cpu_context
    add x8, x0, x10
    // save prev's kernel stack pointer into x9
    mov x9, sp
    // save x19 ~ x28 into the cpu_context fields
    // stp means "store pair"
    stp x19, x20, [x8], #16
    stp x21, x22, [x8], #16
    stp x23, x24, [x8], #16
    stp x25, x26, [x8], #16
    stp x27, x28, [x8], #16
    // store x29 into the fp field and x9 (sp) into the sp field
    stp x29, x9, [x8], #16
    // save the lr register into cpu_context's pc field
    str lr, [x8]

    // ===== load next's context =====
    // x8 points to next->thread.cpu_context
    add x8, x1, x10
    // load the cpu_context fields into x19 ~ x28
    // ldp means "load pair"
    ldp x19, x20, [x8], #16
    ldp x21, x22, [x8], #16
    ldp x23, x24, [x8], #16
    ldp x25, x26, [x8], #16
    ldp x27, x28, [x8], #16
    ldp x29, x9, [x8], #16
    // restore the lr register from cpu_context's pc field
    ldr lr, [x8]
    // switch to next's kernel stack
    mov sp, x9
    // save the next pointer into the sp_el0 register
    msr sp_el0, x1
    ret
ENDPROC(cpu_switch_to)
This assembly corresponds to the operating-system theory above: the general registers are saved into fields of the previous process's data structure, and then loaded back from the corresponding fields of the next process's data structure.
Line 1041 obtains the offset of the cpu_context field inside the thread field (of type thread_struct) of the task_struct structure:
// arch/arm64/kernel/asm-offsets.c:53
DEFINE(THREAD_CPU_CONTEXT, offsetof(struct task_struct, thread.cpu_context));
Let's analyze the corresponding data structure:
// include/linux/sched.h:592
struct task_struct {
    ...
    struct thread_struct thread;              // :1212
    ...
};

// arch/arm64/include/asm/processor.h:129
struct thread_struct {
    struct cpu_context cpu_context;
    ...
};
The cpu_context structure is designed to hold the values of the process-related general registers:
// arch/arm64/include/asm/processor.h:113
struct cpu_context {
    unsigned long x19;
    unsigned long x20;
    unsigned long x21;
    unsigned long x22;
    unsigned long x23;
    unsigned long x24;
    unsigned long x25;
    unsigned long x26;
    unsigned long x27;
    unsigned long x28;
    // corresponds to the x29 register
    unsigned long fp;
    unsigned long sp;
    // corresponds to the lr register
    unsigned long pc;
};
These fields correspond one to one with the registers in the assembly snippet above; readers should not need much assembly background to follow it.
In the last line of the assembly, msr sp_el0, x1 saves the pointer to next (held in the x1 register) into the sp_el0 register, so that later uses of the current macro return the next pointer:
// arch/arm64/include/asm/current.h:15
static struct task_struct *get_current(void)
{
    unsigned long sp_el0;

    asm ("mrs %0, sp_el0" : "=r" (sp_el0));

    return (struct task_struct *)sp_el0;
}

// the current macro, used all over the kernel
#define current get_current()
That is the core logic of the process context switch; finally, let's summarize.
5.3 Summary of context_switch()
The core of a process context switch is switching the virtual memory and a handful of general registers.
Switching a process's virtual memory means switching the ASID used by the TLB and the page table, i.e. the "map" each process needs for virtual address translation.
The process data structure indirectly contains a cpu_context field that stores the general register values. Register switching boils down to saving the previous process's registers into its cpu_context and loading the next process's cpu_context fields back into the registers.
This concludes our study of the logic of Linux process scheduling. Pairing the theory with the kernel source should make it easier to digest; go and explore the code yourself!