2025-01-17 Update From: SLTechnology News&Howtos
This article introduces the memory semantics of volatile in the Java Memory Model (JMM): its visibility and atomicity guarantees, the happens-before relationship established by a volatile write-read, and how those semantics are implemented with memory barriers.
1. Characteristics of volatile
A good way to understand volatile is to think of a single read/write of a volatile variable as a read/write of an ordinary variable synchronized with the same lock.
Code example:
package com.lizba.p1;

/**
 * volatile example
 *
 * @Author: Liziba
 * @Date: 2021-6-9 21:34
 */
public class VolatileFeatureExample {

    /** declare a 64-bit long variable using volatile */
    volatile long v1 = 0L;

    /** single volatile write */
    public void set(long l) {
        v1 = l;
    }

    /** compound (multiple) volatile read & write */
    public void getAndIncrement() {
        v1++;
    }

    /** single volatile read */
    public long get() {
        return v1;
    }
}
Suppose there are multiple threads calling the three methods of the above program, which is semantically equivalent to the following program.
package com.lizba.p1;

/**
 * synchronized equivalent example
 *
 * @Author: Liziba
 * @Date: 2021-6-9 21:46
 */
public class SynFeatureExample {

    /** define a 64-bit ordinary long variable */
    long v1 = 0L;

    /** write the v1 variable under a synchronization lock */
    public synchronized void set(long l) {
        v1 = l;
    }

    /** compound read-write: increment v1 via the synchronized read and write methods */
    public void getAndIncrement() {
        long temp = get();
        // add one to v1
        temp += 1L;
        set(temp);
    }

    /** read v1 under a synchronization lock */
    public synchronized long get() {
        return v1;
    }
}
As the two programs above show, a single read/write of a volatile variable has the same execution effect as a read/write of an ordinary variable synchronized with the same lock.
The above code summary:
The happens-before rule for locks guarantees memory visibility between the thread that releases a lock and the thread that acquires it. Correspondingly, a read of a volatile variable always sees the last write (by any thread) to that volatile variable.
The semantics of the lock guarantee that code in the critical section executes atomically. This means that a read/write of even a 64-bit long or double variable is atomic, as long as the variable is volatile. Compound operations such as volatile++, however, are not atomic.
Summarize the volatile features:
Visibility. When reading a volatile variable, you can always see (any thread) the last write to the volatile variable.
Atomicity. Reading / writing to any volatile variable is atomic, but composite operations such as volatile++ are not atomic.
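The non-atomicity of volatile++ can be demonstrated with a small sketch (class and counter names here are illustrative, not from the article). It races several threads incrementing a volatile counter, which silently loses updates, against an AtomicLong, which does not:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Demonstrates that volatile++ is not atomic: it is three separate
 * steps (read, add, write), so concurrent increments can be lost.
 */
public class VolatilePlusPlusDemo {
    static volatile long volatileCounter = 0;
    static final AtomicLong atomicCounter = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        int threads = 4, perThread = 100_000;
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    volatileCounter++;               // NOT atomic: read, add, write
                    atomicCounter.incrementAndGet(); // atomic read-modify-write
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("volatile: " + volatileCounter
                + " (expected " + (threads * perThread) + ")");
        System.out.println("atomic:   " + atomicCounter.get());
    }
}
```

On most runs the volatile counter ends up below 400,000 while the AtomicLong always reaches exactly 400,000; volatile alone cannot make a compound operation atomic.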
2. Happens-before relationship established by volatile write-read
For programmers, the most important aspect of volatile is its guarantee of memory visibility across threads.
Starting with JDK 1.5 (JSR-133), a write-read of a volatile variable can be used for communication between threads. From the point of view of memory semantics, a volatile write-read has the same memory effect as a lock release-acquire:
A volatile write has the same memory semantics as a lock release.
A volatile read has the same memory semantics as a lock acquisition.
Code example:
package com.lizba.p1;

/**
 * volatile write-read example
 *
 * @Author: Liziba
 * @Date: 2021-6-9 22:23
 */
public class VolatileExample {

    int a = 0;
    volatile boolean flag = false;

    public void writer() {
        a = 1;          // 1
        flag = true;    // 2
    }

    public void reader() {
        if (flag) {                 // 3
            int i = a;              // 4
            System.out.println(i);
        }
    }
}
Suppose thread A executes the writer() method and thread B then executes the reader() method. According to the happens-before rules, this process establishes the following relationships:
According to the program order rules, 1 happens-before 2, 3 happens-before 4.
According to volatile rules, 2 happens-before 3.
According to the transitivity rule of happens-before, 1 happens-before 4.
The figure shows the above happens-before relationship:
Summary: here thread A writes a volatile variable and thread B reads the same volatile variable. All shared variables visible to thread A before the volatile write become visible to thread B immediately after thread B reads that volatile variable.
3. Volatile write-read memory semantics
Memory semantics written by volatile
When a volatile variable is written, JMM flushes the values of the shared variables in the writing thread's local memory to main memory.
Taking the VolatileExample above, suppose thread A executes the writer() method first, and thread B then executes the reader() method. Initially, flag and a in the local memory of both threads are in their initial state.
Schematic diagram of the shared variables' state after thread A performs the volatile write:
After thread A writes the flag variable, the values of the two shared variables updated by thread A in its local memory are flushed to main memory; the values in thread A's local memory and in main memory are now the same.
Memory semantics of volatile read
When a volatile variable is read, JMM invalidates the reading thread's local memory; the thread then reads the shared variables from main memory.
Schematic diagram of the shared variables' state after thread B performs the volatile read:
After thread B reads the flag variable, the values held in local memory B have been invalidated, so thread B must reread the shared variables from main memory. This read makes local memory B consistent with the shared variables in main memory.
To summarize the memory semantics of volatile write and volatile read:
Thread A writing a volatile variable is, in essence, thread A sending a message to any thread that will subsequently read that volatile variable.
Thread B reading a volatile variable is, in essence, thread B receiving the message (the modifications to shared variables made before the volatile write) sent by some thread.
Thread A writing a volatile variable that thread B then reads is, in essence, thread A sending a message to thread B through main memory.
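The message-passing description above can be sketched as a runnable program (the names payload, ready, and received are illustrative, not from the article). Thread A publishes data and then sets a volatile flag; once thread B observes the flag, it is guaranteed to see the data:

```java
/**
 * A runnable sketch of volatile "message passing": a volatile write
 * by the writer happens-before a volatile read of the same variable
 * by the reader, so the ordinary write to payload is visible too.
 */
public class VolatileHandoffDemo {
    static int payload = 0;                // ordinary shared variable
    static volatile boolean ready = false; // volatile flag
    static int received = -1;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            payload = 42;   // 1: ordinary write
            ready = true;   // 2: volatile write ("send the message")
        });
        Thread reader = new Thread(() -> {
            while (!ready) { /* 3: spin on the volatile read ("receive") */ }
            received = payload; // 4: guaranteed to see 42 (1 happens-before 4)
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
        System.out.println("received = " + received);
    }
}
```

Because 1 happens-before 2, 2 happens-before 3, and 3 happens-before 4, the reader can never observe ready == true while still seeing the stale payload == 0.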
4. Implementation of volatile memory semantics
Reordering is divided into compiler reordering and processor reordering (covered in an earlier post). To implement the memory semantics of volatile, JMM restricts both types of reordering.
Volatile reordering rules table (can the two operations be reordered?):

First operation       | Second: ordinary read/write | Second: volatile read | Second: volatile write
ordinary read/write   |                             |                       | NO
volatile read         | NO                          | NO                    | NO
volatile write        |                             | NO                    | NO

(An empty cell means the reordering is permitted.)

Reading the table: the last cell of the first row means that when the first operation is an ordinary read/write and the second operation is a volatile write, the compiler cannot reorder the two operations.
Summarizing the table:
When the second operation is a volatile write, no reordering is allowed. This ensures that operations before a volatile write are not reordered by the compiler to after it.
When the first operation is a volatile read, no reordering is allowed. This ensures that operations after a volatile read are not reordered by the compiler to before it.
When the first operation is a volatile write and the second operation is a volatile read, no reordering is allowed.
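A classic pattern that depends on these rules is double-checked locking (a standard illustration, not taken from the article). Without volatile, the write to the instance field could be reordered with the writes inside the constructor, letting another thread observe a partially constructed object:

```java
/**
 * Double-checked locking. The volatile write to `instance` cannot be
 * reordered with the constructor's writes (ordinary writes before a
 * volatile write stay before it), so readers that see a non-null
 * instance also see its fully initialized state.
 */
public class SafeSingleton {
    private static volatile SafeSingleton instance;
    private final int state;

    private SafeSingleton() {
        this.state = 42; // ordinary write inside the constructor
    }

    public int getState() {
        return state;
    }

    public static SafeSingleton getInstance() {
        if (instance == null) {                      // first check, no lock
            synchronized (SafeSingleton.class) {
                if (instance == null) {              // second check, under lock
                    instance = new SafeSingleton();  // volatile write publishes safely
                }
            }
        }
        return instance;
    }
}
```

The field name and the state value here are hypothetical; the point is that volatile on the instance field is what makes this lazy-initialization pattern safe.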
To implement the memory semantics of volatile, the compiler inserts memory barriers into the instruction sequence when generating bytecode, preventing particular types of processor reordering.
JMM adopts a conservative memory-barrier insertion strategy:
Insert a StoreStore barrier before each volatile write.
Insert a StoreLoad barrier after each volatile write.
Insert a LoadLoad barrier after each volatile read.
Insert a LoadStore barrier after each volatile read.
This conservative strategy guarantees correct volatile memory semantics in any program on any processor platform.
Instruction sequence generated for a volatile write after memory barriers are inserted under the conservative strategy:
Explanation:
The StoreStore barrier guarantees that all ordinary writes before the volatile write are visible to any processor before the volatile write itself, because it forces those ordinary writes to be flushed to main memory before the volatile write.
Instruction sequence generated for a volatile read after memory barriers are inserted under the conservative strategy:
Explanation:
The LoadLoad barrier prevents the processor from reordering the volatile read above it with ordinary reads below it. The LoadStore barrier prevents the processor from reordering the volatile read above it with ordinary writes below it.
The memory-barrier insertion strategies above for volatile writes and reads are very conservative. In actual execution, the compiler may omit unnecessary barriers, as long as the write-read memory semantics of volatile are preserved.
Code example:
package com.lizba.p1;

/**
 * volatile barrier example
 *
 * @Author: Liziba
 * @Date: 2021-6-9 23:48
 */
public class VolatileBarrierExample {

    int a;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;   // first volatile read
        int j = v2;   // second volatile read
        a = i + j;    // ordinary write
        v1 = i + 1;   // first volatile write
        v2 = j * 2;   // second volatile write
    }

    // ... other methods
}
For readAndWrite () of VolatileBarrierExample, the compiler can make the following optimizations when generating bytecode:
Note: the final StoreLoad barrier cannot be omitted, because the method returns after the second volatile write. The compiler cannot determine whether a volatile read or write will follow, so for safety it inserts a StoreLoad barrier here.
The optimization above applies to any processor platform; but because different processors have memory models of varying "tightness", barrier insertion can be optimized further for a specific processor's memory model.
Optimization of X86 processor platform
The X86 processor only reorders write-read operations; it does not reorder read-read, read-write, or write-write, so the memory barriers for those three types can be omitted. On X86, JMM only needs to insert a StoreLoad barrier after each volatile write to correctly implement volatile write-read memory semantics. This also means that on X86 the cost of a volatile write is much greater than that of a volatile read.
5. Comparison between volatile and locks
Functionally: locks are more powerful than volatile, since a lock also makes an entire critical section atomic.
In scalability and performance: volatile has the advantage.
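This trade-off can be illustrated with a sketch (all names are hypothetical, not from the article): volatile suffices for a simple visibility flag with single reads and writes, while a compound invariant spanning two writes needs a lock for atomicity:

```java
import java.util.concurrent.locks.ReentrantLock;

/**
 * volatile vs. lock: visibility-only flag vs. compound invariant.
 */
public class VolatileVsLockDemo {
    // volatile suffices here: a single write and single reads,
    // only visibility is needed
    volatile boolean shutdownRequested = false;

    // a lock is required here: the two writes must appear atomic
    // together to preserve the invariant balanceA + balanceB == 100
    private final ReentrantLock lock = new ReentrantLock();
    private long balanceA = 100;
    private long balanceB = 0;

    void transfer(long amount) {
        lock.lock();
        try {
            balanceA -= amount;
            balanceB += amount; // invariant: total unchanged
        } finally {
            lock.unlock();
        }
    }

    long total() {
        lock.lock();
        try {
            return balanceA + balanceB;
        } finally {
            lock.unlock();
        }
    }
}
```

Marking the two balances volatile instead would not help: another thread could observe the moment between the debit and the credit and see an inconsistent total, which is exactly the functional gap between volatile and locks.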
This concludes the study of the memory semantics of volatile in the Java memory model. Pairing the theory with hands-on practice is the best way to internalize it, so try the examples yourself.