
Essence and Design Principles of Operating Systems: Multiprocessor and Real-Time Scheduling


Overview

For multiprocessor scheduling, this article summarizes the problems that multiple processors introduce and the main design issues; for real-time scheduling, it summarizes two approaches: deadline scheduling and rate monotonic scheduling.

1 Multiprocessor scheduling

Multiprocessor systems can be divided into the following categories:

Loosely coupled or distributed multiprocessors, or clusters: a collection of relatively autonomous systems, each processor having its own main memory and I/O channels.

Functionally specialized processors: an I/O processor is one example. Here there is a general-purpose master processor; the specialized processors are controlled by the master and provide services to it.

Tightly coupled multiprocessors: a set of processors that share a common main memory and are under the integrated control of one operating system. This is the case analyzed in detail here.

1.1 Problems caused by multiprocessors

There are three issues that need to be considered in scheduling:

Assigning processes to processors.

Using multiprogramming on a single processor.

The actual dispatching of a process.

1.1.1 Assigning processes to processors

If the multiprocessor architecture is uniform, that is, no processor has a special advantage in accessing memory or I/O devices, the simplest approach is to treat the processors as a pooled resource and assign processes to them on demand. Should that assignment be static or dynamic?

If a process is assigned to one processor for its entire life cycle (static assignment), a dedicated short-term queue is maintained for each processor. The advantage is low overhead (each process is assigned only once; group scheduling can also be used with a dedicated-processor policy), and the disadvantage is that one processor can sit idle while another has a backlog of work. This can be avoided with a common queue: all processes enter one global queue and are dispatched to whichever processor is available. In a tightly coupled shared-memory architecture, every processor can reach the context information of any process, so the cost of dispatching a process does not depend on the processor it is dispatched to. A further refinement is dynamic load balancing (dynamic assignment), in which threads can migrate between the queues of different processors; Linux uses this strategy. A rough sketch of both schemes follows.
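As a rough sketch of the two assignment schemes, in Python with hypothetical class names (not from any particular operating system): static assignment keeps one ready queue per processor, while dynamic assignment serves every processor from a single shared queue.

```python
from collections import deque

class StaticAssignment:
    """Each process is bound to one processor's queue for its lifetime."""
    def __init__(self, n_cpus):
        self.queues = [deque() for _ in range(n_cpus)]
        self.next_cpu = 0

    def admit(self, process):
        # Round-robin placement at admission; the process never migrates,
        # so one queue can back up while another processor sits idle.
        self.queues[self.next_cpu].append(process)
        self.next_cpu = (self.next_cpu + 1) % len(self.queues)

class GlobalQueue:
    """All processors draw from one shared ready queue."""
    def __init__(self):
        self.ready = deque()

    def admit(self, process):
        self.ready.append(process)

    def dispatch(self):
        # Any idle processor calls this; in a shared-memory architecture
        # the dispatch cost is the same on every processor.
        return self.ready.popleft() if self.ready else None
```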

For assigning processes to processors, two main approaches are used: master/slave and peer.

Master/slave: the key kernel functions of the operating system always run on a particular processor, while the other processors may only execute user processes. The master schedules the jobs; if a slave processor needs a service such as an I/O call, it must send a request to the master and wait for the service to be performed. Because the master has control of all memory and I/O resources, conflict resolution is simplified, and little needs to be added to a uniprocessor multiprogramming operating system. The drawbacks are that a failure of the master brings down the whole system, and the master can become a performance bottleneck.

Peer: the operating system can execute on any processor, and each processor does its own scheduling from the pool of available processes. This is more robust and performs better than master/slave, but it also makes the operating system more complex: it must ensure that a process is never chosen by two processors at once and that processes are never lost from the queue, so techniques are needed to resolve and synchronize competing resource requests.

There are, of course, approaches between these two extremes: a subset of processors, rather than just one, can be dedicated to kernel processing, or the differing needs of kernel processes and other processes can be managed on the basis of priority and execution history.

1.1.2 Using multiprogramming on a single processor

If each process is statically assigned to one processor, a new question arises: should that processor be multiprogrammed? If a process, once bound, blocks frequently while waiting for I/O or for concurrency/synchronization, the processor will be wasted.

1.1.3 Process dispatching

This concerns choosing which process actually runs. On a uniprocessor, sophisticated scheduling algorithms based on priority or execution history perform better than a simple FCFS policy. On a multiprocessor, that sophistication may be unnecessary or even counterproductive, and a simpler approach can be more effective. For thread scheduling there are new considerations that can matter more than priority or execution history.

1.2 Process scheduling

In practice, most traditional multiprocessor systems use multiple priority-based queues, all feeding a common pool of processors; no queue is dedicated to a particular processor, nor a processor to a particular queue.

Consider a comparison between a dual-processor system and a single-processor system whose one processor runs twice as fast as each of the pair. Compare FCFS scheduling, round robin, and shortest remaining time in terms of process service time, that is, the time a process spends using a processor over its life cycle, with a round-robin time slice much larger than the context-switch overhead and shorter than the mean service time. Letting the variability of service times increase from zero, the conclusion is: on two processors the differences between scheduling algorithms are smaller than on one processor, and the conclusion only strengthens as the number of processors grows. Therefore, multiprocessors can use simple FCFS, or FCFS within a static priority scheme. (Under multiprocessor FCFS scheduling, a long process is rarely able to delay the others, since other processes can use the other processors, so short processes wait less than on a single processor; a toy illustration follows the citation below.)

(See Sauer, C., and Chandy, K. Computer Systems Performance Modeling. Englewood Cliffs, NJ: Prentice Hall, 1981.)
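As a toy illustration of this point (a deliberately simplified model, not the cited study's: equal-speed processors, nonpreemptive FCFS), one long job badly inflates the normalized turnaround of the short jobs behind it on a single processor, while on two processors the short jobs flow around it:

```python
def fcfs_mean_normalized_turnaround(jobs, n_cpus):
    """Mean of turnaround/service for nonpreemptive FCFS on n_cpus
    identical processors. jobs is a list of (arrival, service) pairs."""
    free_at = [0.0] * n_cpus                 # when each CPU next frees up
    total = 0.0
    for arrival, service in sorted(jobs):    # FCFS: serve in arrival order
        cpu = min(range(n_cpus), key=free_at.__getitem__)
        start = max(arrival, free_at[cpu])
        free_at[cpu] = start + service
        total += (free_at[cpu] - arrival) / service
    return total / len(jobs)

jobs = [(0, 8), (1, 2), (2, 2), (3, 2)]      # one long job, three short
print(fcfs_mean_normalized_turnaround(jobs, 1))  # 4.0: short jobs suffer
print(fcfs_mean_normalized_turnaround(jobs, 2))  # 1.375: they overtake it
```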

1.3 Thread scheduling

Four approaches are prominent:

1.3.1 Load distribution

Threads are not assigned to a fixed processor. Instead, the system maintains a global queue of ready threads, and a processor, when idle, selects a thread from the queue.

The advantages are:

The load is distributed evenly across the processors; no processor is idle while work is available.

No centralized scheduler is required; the operating system's scheduling routine runs on whichever processor is idle to select the next ready thread.

The global queue can be organized and accessed using any of the schemes used on single processors, including schemes based on priority and on execution history or anticipated demands. Three load-distribution schemes (sketched in code after the disadvantages below):

FCFS: when a job arrives, each of its threads is placed at the end of the global shared queue; an idle processor takes the next ready thread and runs it until it completes or blocks.

Smallest number of threads first: the shared ready queue is organized as a priority queue, with highest priority going to the threads of the job with the smallest number of unscheduled threads; jobs of equal priority are ordered by arrival. A scheduled thread runs until it blocks or finishes.

Preemptive smallest number of threads first: a newly arriving job preempts the threads of the current job if it contains fewer threads. Simulation results show that FCFS outperforms the other two policies.

The disadvantages are:

The central queue occupies a region of memory that must be accessed with mutual exclusion; it can become a bottleneck when many processors look for work at the same time.

Preempted threads are unlikely to resume on the same processor, which loses efficiency if each processor has a local cache.

If all threads are treated as a common pool, it is unlikely that all the threads of one program will gain access to processors at the same time; if a program's threads need to cooperate closely, the process switches involved can seriously hurt performance.
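Here is a minimal sketch, with hypothetical structure, of the smallest-number-of-threads-first discipline from the scheme list above; ignoring the thread count degenerates it to plain FCFS:

```python
import heapq

class LoadSharingQueue:
    """Global ready queue: the job with the fewest unscheduled threads
    has the highest priority; ties go to the earlier arrival."""
    def __init__(self):
        self.heap = []      # entries: (threads_left, arrival_seq, job)
        self.seq = 0

    def add_job(self, job, n_threads):
        heapq.heappush(self.heap, (n_threads, self.seq, job))
        self.seq += 1

    def dispatch(self):
        # Called by an idle processor: run one thread of the job with
        # the smallest number of remaining unscheduled threads.
        if not self.heap:
            return None
        threads, seq, job = heapq.heappop(self.heap)
        if threads > 1:
            heapq.heappush(self.heap, (threads - 1, seq, job))
        return job

q = LoadSharingQueue()
q.add_job("render", 4)
q.add_job("compile", 2)
print([q.dispatch() for _ in range(6)])
# ['compile', 'compile', 'render', 'render', 'render', 'render']
```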

1.3.2 Group scheduling

A set of related threads is scheduled to run at the same time on a set of processors, on a one-to-one basis. A significant way to improve program performance is to minimize process-switching overhead by letting closely related threads run simultaneously. The scheduling time given to different groups can be weighted by the number of threads in each application, reducing wasted processor time (a toy computation follows the advantages below).

Advantages:

If closely related processes execute in parallel, synchronization blocking is reduced, process switching is reduced, and performance improves.

Scheduling overhead may be reduced, because a single decision affects many processors and processes at once.
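To make the thread-count weighting concrete, consider an assumed example (not from the article): on four processors, one application with four threads and another with one thread waste far less processor time under time slices weighted by thread count than under an even split.

```python
# Toy computation: two gangs on 4 processors, with 4 and 1 threads.
n_cpus = 4
gangs = [4, 1]                      # threads per application

# Uniform time slicing: every gang gets an equal share of time.
waste_uniform = sum((1 / len(gangs)) * (n_cpus - t) / n_cpus for t in gangs)

# Weighting each gang's share by its thread count reduces idle time.
total = sum(gangs)
waste_weighted = sum((t / total) * (n_cpus - t) / n_cpus for t in gangs)

print(f"uniform:  {waste_uniform:.1%} of processor time idle")   # 37.5%
print(f"weighted: {waste_weighted:.1%} of processor time idle")  # 15.0%
```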

1.3.3 Dedicated processor allocation

In contrast to load distribution, this approach dedicates processors to an application for its duration: each thread is assigned its own processor, so the number of processors used equals the number of threads, and when the application finishes, the processors return to the processor pool.

This looks like a waste of processor time: if a thread of the application blocks waiting for I/O or for synchronization with another thread, its processor simply stays idle; processors are not multiprogrammed. Two observations explain why this strategy is nonetheless used:

In a highly parallel system with dozens or hundreds of processors, each representing a small fraction of the total system cost, processor utilization is no longer an important measure of effectiveness or performance.

Avoiding process switching entirely during the lifetime of a program can speed that program up considerably. How many processors to allocate to a program at a given time is analogous to a memory-allocation problem on a uniprocessor operating system: how many page frames to allocate to a process at a given time.

By analogy with the working set of virtual memory, the term activity working set has been proposed: the minimum number of activities (threads) that must be scheduled simultaneously on processors for the application to make acceptable progress. As with memory management, failure to schedule all elements of an activity working set can lead to processor thrashing: scheduling some threads whose services are needed induces the descheduling of other threads whose services will soon be needed. Analogous to memory fragmentation, processor fragmentation occurs when the processors left over are too few or unsuitably organized to meet the needs of waiting applications. Group scheduling and dedicated processor allocation deliberately avoid these problems.

1.3.4 Dynamic scheduling

The number of threads in a process can be changed dynamically, letting the operating system adjust the load to improve utilization.

The operating system assigns processors to jobs; each job maps a subset of its runnable tasks to threads and runs them on its currently allocated processors. Decisions about which subset to run, and which threads to suspend when processors are taken away, are left to the individual application.

Scheduling policy (sketched in code after the list):

When a request arrives:

If an idle processor is available, assign it and execute the request.

Otherwise, if the request is a newly arriving one (not a request still waiting because its allocation could not be met), take one processor from any job currently holding more than one and assign it to the new job.

If no part of the allocation can be satisfied, the request remains outstanding until either a processor becomes available for it or the application rescinds it.

When a processor is released: scan the queue of waiting, unsatisfied requests and assign the processor on an FCFS basis.
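A minimal sketch of this allocation policy, with hypothetical data structures (a real system would track which processors a job holds, not merely how many):

```python
from collections import deque

class DynamicAllocator:
    """Sketch of the dynamic-scheduling allocation policy above.
    held maps each job to the number of processors it currently holds."""
    def __init__(self, n_cpus):
        self.idle = n_cpus
        self.held = {}
        self.waiting = deque()        # unsatisfied requests, FCFS order

    def request(self, job):
        if self.idle > 0:             # 1) an idle processor is available
            self.idle -= 1
            self.held[job] = self.held.get(job, 0) + 1
        elif job not in self.waiting: # 2) a new arrival may take one
            donor = next((j for j, n in self.held.items() if n > 1), None)
            if donor is not None:     #    processor from a job holding >1
                self.held[donor] -= 1
                self.held[job] = self.held.get(job, 0) + 1
            else:                     # 3) otherwise the request waits
                self.waiting.append(job)

    def release(self, job):
        self.held[job] -= 1           # the job gives a processor back
        if self.waiting:              # reassign it to the oldest waiter
            w = self.waiting.popleft()
            self.held[w] = self.held.get(w, 0) + 1
        else:
            self.idle += 1
```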

2 Real-time scheduling

A real-time task or process is one whose execution is tied to some process, function, or set of events in the world outside the computer system, and which must meet one or more deadlines in order to interact with the external environment correctly and effectively.

A real-time operating system is one capable of managing real-time processes. In such a system, the principles behind traditional scheduling algorithms do not directly apply: the key factor is meeting deadlines, so algorithms that rely heavily on preemption and on reacting to relative deadlines are the ones suited to this context.

2.1 Deadline scheduling

Real-time operating systems typically aim to start real-time tasks as quickly as possible, and so emphasize fast interrupt handling and task dispatching. However, in the face of dynamic resource requests and conflicts, processing overloads, and hardware or software faults, real-time applications are generally concerned not with absolute speed but with completing or starting tasks at the most valuable times.

Effective and appropriate methods for real-time task scheduling rely on additional information about each task, including the items below (gathered into a task-descriptor sketch after the list):

Ready time: the time at which the task becomes ready for execution. For a periodic or repetitive task this is a sequence of times known in advance; for an aperiodic task it may be known in advance, or the operating system may only learn of it when the task actually becomes ready.

Starting deadline: the time by which the task must begin.

Completion deadline: the time by which the task must be completed. A typical real-time application has either a starting deadline or a completion deadline, but not both.

Processing time: the time required to execute the task to completion. In some cases the operating system measures an exponential average instead of being given this time.

Resource requirements: the set of resources, other than the processor, that the task requires while executing.

Priority: hard real-time tasks may have an absolute priority, where a miss causes system failure; if the system is to keep running no matter what, both hard and soft real-time tasks can be assigned relative priorities to guide the scheduler.

Subtask structure: a task may be decomposed into subtasks, some mandatory and some optional.
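The per-task information above can be gathered into a simple descriptor. The field names in this sketch are illustrative, not taken from any particular real-time operating system:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RealTimeTask:
    """Per-task information a deadline scheduler may use."""
    name: str
    ready_time: float                    # when the task becomes ready
    starting_deadline: Optional[float]   # latest allowed start, if any
    completion_deadline: Optional[float] # latest allowed finish, if any
    processing_time: float               # known or exponentially averaged
    resources: set = field(default_factory=set)  # beyond the processor
    priority: int = 0                    # hard tasks may be absolute
    subtasks: list = field(default_factory=list) # mandatory/optional parts

t = RealTimeTask("sensor_poll", ready_time=0.0, starting_deadline=None,
                 completion_deadline=50.0, processing_time=10.0)
```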

The related strategies are:

Earliest deadline: among ready tasks, select the one with the earliest deadline.

Earliest deadline with unforced idle times: always schedule the eligible task with the earliest deadline and let it run to completion, even if that means leaving the processor idle to wait for a task that is not yet ready. (A sketch of the plain earliest-deadline policy follows.)
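A minimal sketch of the plain earliest-deadline policy on one processor, nonpreemptive for simplicity; the task tuples are hypothetical:

```python
import heapq

def edf_schedule(tasks):
    """Nonpreemptive earliest-deadline-first on one processor.
    tasks: list of (ready_time, deadline, processing_time, name).
    Returns (start, finish, name) triples; misses are not checked."""
    pending = sorted(tasks)                 # by ready time
    ready, schedule = [], []
    now, i = 0.0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= now:
            r, d, c, name = pending[i]      # admit newly ready tasks
            heapq.heappush(ready, (d, c, name))
            i += 1
        if not ready:                       # idle until the next arrival
            now = pending[i][0]
            continue
        d, c, name = heapq.heappop(ready)   # earliest deadline first
        schedule.append((now, now + c, name))
        now += c
    return schedule

print(edf_schedule([(0, 55, 20, "A"), (10, 45, 20, "B")]))
# [(0.0, 20.0, 'A'), (20.0, 40.0, 'B')]: both meet their deadlines
```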

2.2 Rate monotonic scheduling

Rate monotonic scheduling (RMS) assigns priorities to tasks on the basis of their periods: the shorter the period (that is, the higher the rate), the higher the priority. Period and rate are reciprocals of each other. A task's processor utilization is its execution time divided by its period, U = C/T.

The total utilization across all tasks can never exceed 1, and schedulability can be checked by computing the total processor utilization. RMS guarantees that all n periodic tasks meet their deadlines if the following inequality holds: C1/T1 + C2/T2 + ... + Cn/Tn <= n(2^(1/n) - 1).
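The bound is easy to check in a few lines; the three-task set below is illustrative (total utilization 0.70 against a bound of roughly 0.780 for n = 3):

```python
def rms_schedulable(tasks):
    """Sufficient (conservative) RMS test: tasks = [(C, T), ...] with
    execution time C and period T. OK if sum(C/T) <= n(2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# Utilizations 0.20 + 0.25 + 0.25 = 0.70, below the n=3 bound of ~0.780,
# so RMS is guaranteed to meet every deadline for this task set.
print(rms_schedulable([(20, 100), (40, 160), (5, 20)]))
```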

The reasons for choosing RMS are:

The utilization bound above is conservative; in practice, utilization as high as 90% is often achievable.

Most hard real-time systems also have soft components, such as non-critical displays and built-in self-tests, that can execute at lower priority, absorbing the processor time not used by the RMS scheduling of hard real-time tasks.

Stability is easier to achieve with RMS. When not every deadline can be met because of overload or transient errors, the deadlines of a selected set of essential tasks should still be guaranteed, provided that set is schedulable. With static priority assignment, one need only ensure that the essential tasks have relatively high priorities; with RMS, this is done by giving essential tasks short periods or by adjusting the RMS priorities to account for them. With earliest-deadline scheduling, a periodic task's priority changes from one period to the next, which makes it harder to guarantee that the essential tasks meet their deadlines.
