SLTechnology News&Howtos, Shulou (Shulou.com), updated 2025-03-29
This article introduces several important concepts of process synchronization and the rules a synchronization mechanism should follow, then surveys the main ways such a mechanism can be implemented: software methods, hardware instructions, semaphores, and monitors.
1. Several important concepts of process synchronization
The main task of a process synchronization mechanism is to coordinate the execution order of related processes, so that concurrently executing processes share system resources according to well-defined rules and cooperate properly, making program execution reproducible.
There are two types of constraint relations between processes:
Indirect constraint (mutual exclusion): the relationship formed when concurrently executing processes share critical resources, which must be accessed mutually exclusively.
Direct constraint (synchronization): the relationship formed when multiple processes cooperate to accomplish the same task.
Critical resource: a resource that only one process may access at a time; processes must share such a resource in a mutually exclusive way. Printers and tape drives are typical critical resources. The printer illustrates why only one process may use a critical resource at a time: suppose processes A and B access the printer simultaneously and both submit print jobs. Because the processes run concurrently, the printed output may interleave text from both, so the result is what neither A nor B wanted, and the resource is simply wasted.
Critical section: the piece of code in a process that accesses a critical resource. Clearly, if processes enter their critical sections mutually exclusively, mutually exclusive access to the critical resource is achieved. Before entering its critical section, each process must check the "door" of the critical resource it wants to access, that is, check whether another process is currently accessing it. If the resource is free, the "door" is open: the process may enter the critical section and close the "door" behind it. Otherwise some process is already inside the critical section, and the current process cannot enter.
Instruction: a statement in machine language, a meaningful group of binary bits. Because it is a single machine-language instruction, it can be regarded as atomic: an interrupt (if one is pending) is serviced only after the instruction has finished executing.
Primitive: a procedure composed of several instructions that performs a specific function. A defining characteristic of a primitive is that it is an "atomic operation": it must not be interrupted during execution. Primitives run in system (kernel) mode and are resident in memory.
The rules that the synchronization mechanism should follow:
Enter when free: when the "door" of the critical section is open, a process requesting entry should be allowed into the critical section immediately.
Wait when busy: when the "door" of the critical section is closed, other processes trying to enter must wait, ensuring mutually exclusive access to the critical resource.
Bounded waiting: a process requesting to enter its critical section should be guaranteed entry within a finite time, so that it never falls into a "dead waiting" state.
Yield while waiting: when a process cannot enter its critical section, it should release the processor immediately, so that it does not fall into a "busy waiting" state.
Let us explain the two states of "dead waiting" and "busy waiting" here. At first glance there seems to be no difference between them: in both states the process has failed to enter the critical section. So what distinguishes the two?
Dead waiting: a "dead waiting" process is typically in the blocked state, waiting for another process to wake it (via the signal primitive), but the wakeup primitive never executes. The blocked process stays in the "dead waiting" state indefinitely (asleep, with nobody to wake it), and while dead waiting it does not occupy the processor.
Busy waiting: this state is easier to understand. A busy-waiting process keeps occupying the processor, repeatedly testing whether it may enter the critical section; during this period the process is running the whole time (busy, but achieving nothing). Note that on a single-processor system busy waiting can be disastrous: the busy-waiting process never releases the processor, so in a single-CPU OS without preemption no other process can run, and the busy-waiting state cannot be broken unless the system restarts.
In fact, both "busy waiting" and "dead waiting" are harmful to the OS and should be avoided.
Process synchronization mechanisms include software mechanisms, hardware mechanisms, semaphores, monitors, and so on, all of which can guarantee the reproducibility of concurrent program execution.
2. Software synchronization mechanism
When it comes to implementing synchronization in software, what first comes to mind is Java multithreading: Java synchronizes threads through locks, synchronized, semaphores and other mechanisms. But threads share all the resources of their parent process, like colleagues sitting in one office who can simply talk face to face, which makes sharing easy to arrange. Between processes (like offices scattered around the world), sharing system resources is much harder: processes cannot easily communicate directly through software alone, and sharing system resources with mutual exclusion is troublesome. A dedicated area of memory is needed to hold a flag recording whether each process is in its critical section.
Let's take a look at the following code first:
// inside1 and inside2 are flags kept in memory for processes P1 and P2,
// indicating whether each process is inside its critical section
inside1 := false;
inside2 := false;

// process P1
process P1
begin
    while (inside2);      // loop test
    inside1 := true;
    critical section;
    inside1 := false;
end;

// process P2
process P2
begin
    while (inside1);      // loop test
    inside2 := true;
    critical section;
    inside2 := false;
end;
The logic of the code is clear: before P1 or P2 enters the critical section, it first checks whether the other process is inside; if so, it keeps waiting in a loop; otherwise it enters and then "closes the door" (sets its lock flag). At first glance there seems to be no problem. But suppose P1 is interrupted after its loop test and before setting its flag (inside1 := true). P1's lock has not been set, so when P2 starts executing it can enter the critical section. If P2 is then interrupted while inside the critical section and P1 resumes, P1 also enters, because its entry test has already passed. Now two processes are in the critical section at the same time, and execution goes wrong.
Admittedly, this requires several coincidences (each a small-probability event): P1 is interrupted just before locking, P2 is interrupted inside the critical section, and each process gets the CPU while the other is interrupted. Combined, the probability is even smaller, but the problem remains.
At this point you may think: set the lock before testing. Let us adjust the order of the code:
// ...
// process P1
process P1
begin
    inside1 := true;
    while (inside2);      // loop test
    critical section;
    inside1 := false;
end;

// process P2
process P2
begin
    inside2 := true;
    while (inside1);      // loop test
    critical section;
    inside2 := false;
end;
Will P1 and P2 now execute concurrently without problems? Consider this case: P1 runs first and sets its lock successfully. Suppose that right after this P1 is interrupted and P2 starts executing. P2 can also set its lock, but its test for entering the critical section never succeeds, so it loops forever. When P1 resumes, the embarrassing thing happens: P1 cannot enter the critical section either, because P2 has also set its lock.
These are two software implementations of synchronization: the first is the double-flag check-first method, the second the double-flag check-after method, and neither truly solves the process synchronization problem. The check-first method may let both processes enter the critical section at the same time, while the check-after method may prevent both from ever entering, producing a deadlock.
Mutually exclusive entry to critical sections can be achieved purely in software, for example with Peterson's algorithm, but doing so is difficult and has serious limitations, so it is rarely used today. Let us look at several other approaches.
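For the curious, Peterson's algorithm can be sketched in Java. This is a minimal illustration, not production code: the class and method names are made up for this example, and in Java the shared variables must be volatile or atomic for the algorithm to work at all, since the Java memory model otherwise allows reorderings that break it.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// A sketch of Peterson's algorithm for exactly two threads (ids 0 and 1).
class Peterson {
    private final AtomicIntegerArray flag = new AtomicIntegerArray(2); // 1 = wants to enter
    private volatile int turn;

    void lock(int me) {
        int other = 1 - me;
        flag.set(me, 1);   // announce intent ("hang the lock")
        turn = other;      // politely give the other thread priority
        // spin while the other thread wants in AND it is the other thread's turn
        while (flag.get(other) == 1 && turn == other) { }
    }

    void unlock(int me) {
        flag.set(me, 0);
    }

    // Two threads increment a shared counter n times each under the lock;
    // if mutual exclusion holds, the result is exactly 2 * n.
    static int demo(int n) throws InterruptedException {
        Peterson p = new Peterson();
        int[] counter = {0};
        Thread t0 = new Thread(() -> {
            for (int i = 0; i < n; i++) { p.lock(0); counter[0]++; p.unlock(0); }
        });
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < n; i++) { p.lock(1); counter[0]++; p.unlock(1); }
        });
        t0.start(); t1.start();
        t0.join(); t1.join();
        return counter[0];
    }
}
```

Note that the waiting thread spins on the CPU, which is exactly the "busy waiting" the rules above warn against; this is one of the limitations of the pure software approach.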
3. Hardware synchronization mechanism
In both software examples, the trouble came from an interrupt occurring between locking and testing, which allowed mutual exclusion to fail. So can we simply forbid interrupts during that window? That requires help from the hardware; let us take a look.
3.1 Disabling interrupts
This method is quite heavy-handed: since the process can be interrupted between testing and locking, simply disable interrupts before the test begins (the OS kernel stops responding to interrupt signals) and re-enable them after testing and locking. This guarantees the two operations run back to back and ensures mutually exclusive access to the critical resource.
However, disabling interrupts inevitably has drawbacks: 1. abusing the power to disable interrupts can have serious consequences; 2. keeping interrupts disabled too long hurts system concurrency and directly reduces resource utilization; 3. disabling interrupts does not work on multi-CPU systems (which are beyond the scope of this article).
3.2 The Test-and-Set (TS) instruction
Disabling interrupts solved the problem of interrupts arriving between the lock and the test. But what if we merge those two steps into a single instruction? That too guarantees that no interrupt is serviced between testing and locking.
We can use a hardware instruction, Test-and-Set (TS), to achieve mutually exclusive access to critical resources. The TS instruction can be described as follows:
// the TS instruction
boolean TS(boolean *lock) {
    if (*lock == false) {
        *lock = true;
        return true;
    } else {
        return false;
    }
}

// the lock value stored in memory
boolean lock;
lock = false;              // the critical section may be entered

// processes P1, P2, ..., Pn
process Pi {
    // ...
    while (!TS(&lock));    // loop requesting the lock
    critical section;
    lock = false;          // unlock, return the critical resource
}
We can regard the TS instruction as executing the TS function above, except that its execution is indivisible, that is, it is a primitive. When lock is false the critical resource is free; when lock is true the resource is in use.
When managing critical sections with the TS instruction, a Boolean variable lock is set up for each critical resource. A process wanting to enter the critical section first tests the resource's "lock" with the TS instruction. If TS returns true, the critical section was free: the process enters and the lock is set, keeping other processes out. If TS returns false, the process must keep requesting in a loop until TS returns true.
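Modern languages expose the same idea through an atomic compare-and-set. As a sketch (the TsLock class is hypothetical, not a standard API), Java's AtomicBoolean.compareAndSet plays the role of TS, with the polarity inverted relative to the TS() function above: here false means the critical section is free, and a successful compareAndSet(false, true) means the lock was grabbed in one atomic step.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A spinlock built on compare-and-set, mirroring the TS-instruction loop.
class TsLock {
    private final AtomicBoolean lock = new AtomicBoolean(false);

    void acquire() {
        // loop requesting the lock -- this is busy waiting, as in the text
        while (!lock.compareAndSet(false, true)) { }
    }

    void release() {
        lock.set(false);   // unlock, return the critical resource
    }

    boolean isLocked() {
        return lock.get();
    }
}
```

As with the hardware TS instruction, a waiting thread spins on the processor, so this only suits very short critical sections.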
3.3 The swap instruction
This instruction, also known as the exchange instruction, swaps the contents of two words. Its behavior is described as follows:
// the swap instruction
void swap(boolean *a, boolean *b) {
    boolean tmp;
    tmp = *a;
    *a = *b;
    *b = tmp;
}

// the lock value stored in memory
boolean lock;
lock = false;              // the critical section may be entered

// processes P1, P2, ..., Pn
process Pi {
    // ...
    boolean key = true;
    do {
        swap(&lock, &key);
    } while (key != false);   // loop requesting the lock
    critical section;
    lock = false;             // unlock, return the critical resource
}
The swap instruction is similar to the TS instruction: it too needs a Boolean variable lock for each critical resource. The difference is that each process uses a local variable key whose value is exchanged with lock; by examining key the process can tell whether the critical resource was free.
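Java's AtomicBoolean.getAndSet gives the same atomic exchange: it swaps a new value into the lock word and returns the old one, which is exactly the swap-instruction loop with a local key of true. The SwapLock class below is an illustrative sketch, not a standard API.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A spinlock built on atomic exchange, mirroring the swap-instruction loop.
class SwapLock {
    private final AtomicBoolean lock = new AtomicBoolean(false);

    void acquire() {
        boolean key = true;
        do {
            key = lock.getAndSet(key);   // swap(&lock, &key) in one atomic step
        } while (key != false);          // repeat until the old lock value was false
    }

    void release() {
        lock.set(false);   // unlock, return the critical resource
    }

    boolean isLocked() {
        return lock.get();
    }
}
```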
Note that because a primitive merges several instructions into one indivisible operation, interrupts are not serviced while it executes, making it atomic; during its execution interrupts are effectively masked, much like the first hardware approach above, disabling interrupts. For this reason primitives should be short and concise, so that system efficiency is preserved.
These hardware instructions implement process mutual exclusion effectively, but when the resource is busy, every other requesting process must keep testing and thus sits in a "busy waiting" state. This violates the "yield while waiting" rule, wastes processor time, and makes such instructions hard to use for more complex synchronization problems.
4. Semaphore mechanism
The semaphore mechanism is an effective process synchronization tool proposed by Edsger Wybe Dijkstra in 1965. It has developed considerably over its long history of use: from the integer semaphore, to the record semaphore, and on to the "semaphore set" mechanism.
Note that apart from initialization, a semaphore may be accessed only through two standard atomic operations, wait(S) and signal(S), also known as the P and V operations; both are primitives.
4.1 Integer semaphore
The integer semaphore is Dijkstra's original definition and is very simple: a single integer S represents the number of available resource units. The wait and signal operations on an integer semaphore can be described as follows:
// integer semaphore
int S;

// P operation
wait(S) {
    while (S <= 0);   // no resource left: loop test (busy wait)
    S = S - 1;
}

// V operation
signal(S) {
    S = S + 1;
}

4.2 Record semaphore

The integer semaphore busy-waits whenever no resource is available, violating the "yield while waiting" rule, so the record semaphore adds a waiting queue. It consists of an integer value and a pointer list to the queue of processes blocked on the semaphore:

// record semaphore
typedef struct {
    int value;
    struct process_control_block *list;   // queue of waiting processes
} semaphore;

// P operation
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        block(S->list);    // no resource: block and join the waiting queue
    }
}

// V operation
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        wakeup(S->list);   // wake one process from the waiting queue
    }
}
In the record semaphore, the initial value of S->value represents the number of units of a certain resource in the system. Each wait operation means the process requests one unit, reducing the number available for allocation, hence S->value--. When S->value < 0, the resource has run out, so the process blocks itself and inserts itself into the waiting queue S->list. Now consider the signal operation: it first releases one unit (S->value++), then checks whether any process is waiting on the semaphore; if so, it calls the wakeup primitive to wake one process from the queue linked by list.
From the above description, the value field of a record semaphore has these meanings: 1. value > 0: the number of units of the resource still available in the system; 2. value = 0: exactly balanced, all units are allocated and no process is waiting, that is, the list queue is empty; 3. value < 0: the resource is exhausted and |value| processes are blocked in the list queue.

4.3 AND semaphore and semaphore set

When a process needs several resources at once, the AND semaphore allocates all of them to the process in a single atomic action, or none at all. Its Swait operation can be sketched as:

// AND semaphore: request all resources at once
Swait(S1, S2, ..., Sn) {
    if (S1 >= 1 && S2 >= 1 && ... && Sn >= 1) {
        for (i = 1; i <= n; i++)
            Si--;              // allocate every requested resource
    } else {
        // insert the process into the waiting queue of the first
        // semaphore Si whose condition is not met
    }
}

The semaphore set generalizes this: each request on Si carries a demand ti, the test Si >= 1 becomes Si >= ti, and the allocation changes from Si-- to Si = Si - ti. Correspondingly, in the Ssignal operation several units may be returned at once, the release changing from Si++ to Si = Si + ti.
Note that because AND semaphores and semaphore sets allocate all the resources a process needs in one request, they are simple, convenient and safe, but the drawbacks are also obvious: 1. resources can be badly wasted, seriously worsening utilization; 2. processes are prone to starvation (if one of the resources is frequently occupied, a process may be delayed indefinitely because it can never obtain all its resources at once).
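Java ships a ready-made counterpart of the record semaphore in java.util.concurrent.Semaphore: acquire() is the P/wait operation (it blocks when no unit is left) and release() is the V/signal operation, while the multi-permit forms acquire(n)/release(n) resemble a semaphore-set request that takes or returns several units at once. The PrinterPool class below is an illustrative sketch built around the article's printer example.

```java
import java.util.concurrent.Semaphore;

// Two printer units guarded by a counting semaphore.
class PrinterPool {
    private final Semaphore printers = new Semaphore(2); // initial value = 2 units

    int available() {
        return printers.availablePermits();
    }

    void print(String job) throws InterruptedException {
        printers.acquire();          // P/wait: take one printer, block if none is free
        try {
            System.out.println("printing " + job);
        } finally {
            printers.release();      // V/signal: return the printer
        }
    }
}
```

Unlike the hardware spinlocks above, a thread blocked in acquire() gives up the processor, which is exactly the "yield while waiting" behavior the rules require.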
5. The monitor mechanism
Although the semaphore mechanism is a convenient and effective process synchronization tool, every process accessing a critical resource must perform its own wait(S) and signal(S) operations. These synchronization operations end up scattered throughout the processes, which hinders centralized reasoning and abstraction, makes programming very difficult, and easily produces all kinds of programming errors. Out of this situation came a new process synchronization tool: the monitor. It is worth mentioning that the monitor, too, originates from Dijkstra's proposals.
Hansen defined the monitor as follows: a monitor defines a shared data structure and the set of operations that concurrent processes can perform on it, operations which synchronize the processes and change the data inside the monitor.
From this definition, a monitor consists of four parts: 1. the monitor's name; 2. the declaration of the shared data structure; 3. a set of procedures that operate on the data structure; 4. initialization statements. The syntax of a monitor looks like this:
monitor monitor_name {                  // the monitor's name
    share variable declarations;        // shared variable declarations
    cond cond_declarations;             // condition variable declarations
public:
    void P1(...) { ... }                // procedures operating on the data structure
    void P2(...) { ... }
    // ...
    {
        initialization code;            // initialization code
    }
}
Doesn't the code above look familiar? The monitor in fact embodies the idea of object orientation: it abstracts the shared resource and the operations on it into variables and methods, encapsulates them in one object together with the synchronization mechanism, and hides the implementation details. Remarkably, object-oriented programming did not yet exist when the monitor was proposed.
The data structure encapsulated inside a monitor can be accessed only by procedures inside the monitor (similar to private fields in Java). To reach that data from outside, you must go through the monitor's internal procedures (like the public methods of a Java class). Every process that wants to access the critical resource can do so only indirectly through the monitor, and the monitor admits only one process at a time, thereby achieving process mutual exclusion.
To support more complex process synchronization, the monitor also introduces condition variables. Typically a separate condition variable is set up for each reason a process may be blocked or suspended, and each condition variable holds a linked-list pointer to all the processes blocked or suspended on it. The two P- and V-style operations on a condition variable x are written x.wait and x.signal, with the following meanings:
x.wait: if the process currently using the monitor must block or suspend because of condition x, it calls x.wait to insert itself into x's waiting queue and releases the monitor until x changes.
x.signal: when the process currently using the monitor finds that condition x has changed (for example, a resource has been used and returned), it calls x.signal to resume one process blocked or suspended on x. If several are waiting, one of them is chosen; if none is waiting, the caller simply continues, and no wakeup happens.
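Java's ReentrantLock and Condition give a monitor in exactly this style: the lock provides the "only one process inside the monitor" property, and the condition variables play the role of x.wait / x.signal. One caveat: java.util.concurrent uses "signal-and-continue" (Mesa) semantics rather than Hoare's scheme, so waits must be written in a loop that rechecks the condition. The OneSlotBuffer class below is an illustrative sketch, not a standard API.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// A monitor-style single-slot buffer with two condition variables.
class OneSlotBuffer {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private Integer slot = null;   // the shared data structure inside the monitor

    void put(int v) throws InterruptedException {
        lock.lock();                               // enter the monitor
        try {
            while (slot != null) notFull.await();  // x.wait: release the monitor and sleep
            slot = v;
            notEmpty.signal();                     // x.signal: wake one waiting consumer
        } finally {
            lock.unlock();                         // leave the monitor
        }
    }

    int take() throws InterruptedException {
        lock.lock();
        try {
            while (slot == null) notEmpty.await();
            int v = slot;
            slot = null;
            notFull.signal();                      // wake one waiting producer
            return v;
        } finally {
            lock.unlock();
        }
    }
}
```

The while loops around await() are what Mesa semantics require: a signalled thread re-enters the monitor later, so the condition may have changed again and must be rechecked.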
The operations above raise a problem. Suppose process P1 is blocked on condition x, and process P2 executes x.signal to wake it: now P1 and P2 would both be inside the monitor at the same time, which is not allowed. So which process proceeds and which waits? The problem is handled in one of two ways:
P2 waits until P1 leaves the monitor, or until P1 waits on another condition.
P1 waits until P2 leaves the monitor, or until P2 waits on another condition.
There has been plenty of debate over which option to take. Hoare adopted the first. Hansen adopted a compromise between the two: he stipulated that the signal operation a process performs inside the monitor must be the last operation of its procedure body, so P2 exits the monitor immediately after signalling and P1 is resumed at once.
However, Hansen's compromise fixes the timing of the monitor's release, which makes programming harder, so monitors today generally follow Hoare's approach and are called Hoare monitors. Compared with Hansen's, one great advantage of the Hoare monitor is that it can be implemented on top of the P and V operation primitives: wait and signal can be ordinary program procedures rather than new primitives (they can be provided by a language mechanism). This is impressive: a Hoare monitor can build its wait and signal operations from the basic P and V primitives in an operating system library or a high-level programming language, without extending the operating system kernel.
The structure of a monitor is sketched in the diagram below. Each condition variable has its own waiting queue; besides the queue of processes waiting to enter the monitor, there is also an urgent queue (higher priority, joined by a process after it calls the signal operation). The concrete behavior can be followed in the wait and signal operations of the Hoare monitor's condition variables.
Here is a description of the wait and signal operations on the condition variables of a Hoare monitor:
// semaphores used by the Hoare monitor (all are record semaphores)
typedef struct {
    semaphore mutex;      // mutual-exclusion semaphore for entering the monitor
    semaphore next;       // semaphore on which a signalling process suspends itself
    int next_count;       // number of processes suspended on next
} interf;

// condition variable
typedef struct {
    semaphore x_sem;      // semaphore associated with the condition
    int x_count;          // number of processes waiting on x_sem
} cond;

// the wait operation of the Hoare monitor
wait(cond *declar, interf *IM) {
    declar->x_count++;                // one more process waiting on the condition
    if (IM->next_count > 0) {         // is a process suspended in the urgent queue?
        V(IM->next);                  // wake a process suspended by its own signal call
    } else {
        V(IM->mutex);                 // otherwise wake a process waiting to enter the monitor
    }
    P(declar->x_sem);                 // after releasing the monitor, suspend itself
    declar->x_count--;                // resumed: one fewer process waits on the condition
}

// the signal operation
signal(cond *declar, interf *IM) {
    if (declar->x_count > 0) {        // is any process waiting on the condition?
        IM->next_count++;             // the caller will suspend itself, so count it
        V(declar->x_sem);             // wake one process waiting on the condition
        P(IM->next);                  // suspend itself, entering the urgent queue
        IM->next_count--;             // resumed: leave the urgent queue
    }
}
Looking at the code above, wait and signal are not the same as the P and V primitives; they are implemented on top of P and V. In the wait operation, the process first wakes another process (from the urgent queue, or from the queue waiting to enter the monitor), then immediately suspends itself by inserting itself into the waiting queue of the condition variable. When it is later woken, it decrements the count of waiters. In the signal operation, the process first checks whether anyone is waiting on the condition variable: if not, nothing needs to be done; if the condition's waiting queue is non-empty, it wakes one process from that queue and suspends itself into the urgent queue.
The wait and signal operations of the Hoare monitor are fairly abstract; studying them together with the diagram helps understanding.
This concludes the study of the important concepts of process synchronization and the rules a synchronization mechanism should follow. Theory paired with practice is the best way to learn, so go and try it!