How to understand java volatile

2025-02-25 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains how to understand volatile in Java. The editor finds it quite practical and shares it here in the hope that you will get something out of it.

Characteristics of volatile

When we declare a shared variable as volatile, reads and writes of that variable behave specially. A good way to understand volatile is to think of each individual read/write of a volatile variable as being synchronized with the same monitor lock. Let's illustrate with a concrete example. Take a look at the following sample code:

```java
class VolatileFeaturesExample {
    volatile long vl = 0L;      // use volatile to declare a 64-bit long variable

    public void set(long l) {
        vl = l;                 // write of a single volatile variable
    }

    public void getAndIncrement() {
        vl++;                   // compound (multiple) volatile variable read/write
    }

    public long get() {
        return vl;              // read of a single volatile variable
    }
}
```

Suppose multiple threads call the three methods of the program above; it is semantically equivalent to the following program:

```java
class VolatileFeaturesExample {
    long vl = 0L;               // 64-bit long ordinary variable

    public synchronized void set(long l) {  // writes of the single ordinary variable
        vl = l;                             // are synchronized with the same monitor
    }

    public void getAndIncrement() {         // ordinary method call
        long temp = get();                  // call the synchronized read method
        temp += 1L;                         // ordinary write operation
        set(temp);                          // call the synchronized write method
    }

    public synchronized long get() {        // reads of the single ordinary variable
        return vl;                          // are synchronized with the same monitor
    }
}
```

As the programs above show, a single read/write of a volatile variable has the same effect as a read/write of an ordinary variable synchronized with the same monitor lock.

The happens-before rule for monitor locks guarantees memory visibility between the thread that releases the monitor and the thread that acquires it. This means that a read of a volatile variable always sees the last write (by any thread) to that volatile variable.

The semantics of monitor locks determine that the execution of critical-section code is atomic. This means that even reads and writes of a 64-bit long or double variable are atomic, as long as the variable is declared volatile. Compound operations such as volatile++, however, are not atomic as a whole.

In short, the volatile variable itself has the following characteristics:

Visibility: a read of a volatile variable always sees the last write (by any thread) to that volatile variable.

Atomicity: reading / writing to any single volatile variable is atomic, but composite operations like volatile++ are not atomic.
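The two characteristics above can be observed directly. Below is a minimal, runnable sketch (class and field names are ours) that contrasts a volatile counter with java.util.concurrent.atomic.AtomicLong: the atomic counter always ends at threads × iterations, while the volatile++ counter may end lower, because the compound read-modify-write can lose updates.

```java
import java.util.concurrent.atomic.AtomicLong;

public class VolatileIncrementDemo {
    static volatile long volatileCounter = 0L;                // visible, but ++ is not atomic
    static final AtomicLong atomicCounter = new AtomicLong(); // CAS-based atomic increment

    public static void main(String[] args) throws InterruptedException {
        final int threads = 4, iterations = 100_000;
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < iterations; i++) {
                    volatileCounter++;               // read-modify-write: updates may be lost
                    atomicCounter.incrementAndGet(); // atomic: no updates lost
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("atomic   = " + atomicCounter.get()); // always 400000
        System.out.println("volatile = " + volatileCounter);     // may be less than 400000
    }
}
```

Running this a few times typically shows the volatile counter falling short, which is exactly the volatile++ non-atomicity described above.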

Happens-before relationship established by volatile write-read

The above describes the characteristics of volatile variables themselves. For programmers, volatile's effect on memory visibility between threads is more important than its own characteristics and deserves more attention.

Starting with JSR-133, a write of a volatile variable and a subsequent read of the same variable can be used for communication between threads.

From the perspective of memory semantics, volatile has the same effect as monitor lock: volatile write and monitor release have the same memory semantics; volatile read and monitor acquisition have the same memory semantics.
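Since Java 9, the java.lang.invoke.VarHandle API exposes the release/acquire halves of these semantics directly. The sketch below is illustrative (class and field names are ours; a plain volatile field would give at least the same guarantees): it publishes data with a release-mode write and observes it with acquire-mode reads, mirroring the monitor release/acquire pairing described above.

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class ReleaseAcquireDemo {
    static int data = 0;   // ordinary variable published via the flag
    static int flag = 0;   // accessed only through the VarHandle below
    static final VarHandle FLAG;

    static {
        try {
            FLAG = MethodHandles.lookup()
                    .findStaticVarHandle(ReleaseAcquireDemo.class, "flag", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 1;
            FLAG.setRelease(1);                       // like a volatile write / monitor release
        });
        Thread reader = new Thread(() -> {
            while ((int) FLAG.getAcquire() == 0) { }  // like a volatile read / monitor acquire
            System.out.println("data = " + data);     // data = 1 is guaranteed to be visible
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```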

Look at the following sample code that uses the volatile variable:

```java
class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    public void writer() {
        a = 1;              // 1
        flag = true;        // 2
    }

    public void reader() {
        if (flag) {         // 3
            int i = a;      // 4
            // ...
        }
    }
}
```

Suppose thread A executes the writer() method and thread B then executes the reader() method. According to the happens-before rules, the happens-before relationships established by this process fall into two categories:

By the program-order rule, 1 happens-before 2, and 3 happens-before 4.

By the volatile rule, 2 happens-before 3.

By the transitivity rule of happens-before, 1 happens-before 4.

The graphical representation of the above happens before relationship is as follows:

In the figure above, each arrow links two nodes that form a happens-before relationship. Black arrows represent the program-order rule, orange arrows the volatile rule, and blue arrows the happens-before guarantees obtained by combining these rules.

Here, thread A writes a volatile variable and thread B reads the same volatile variable. All shared variables visible to thread A before it writes the volatile variable become visible to thread B immediately after thread B reads that variable.
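The guarantee just described can be exercised with a small, runnable handoff sketch (class name VolatileHandoff is ours): thread A performs operations 1 and 2, while thread B spins on the volatile flag (a stand-in for operation 3) and then reads a (operation 4).

```java
public class VolatileHandoff {
    static int a = 0;
    static volatile boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            a = 1;       // 1: ordinary write, happens-before the volatile write below
            flag = true; // 2: volatile write publishes a to the reader
        });
        Thread reader = new Thread(() -> {
            while (!flag) { /* 3: spin until the volatile write becomes visible */ }
            // 1 happens-before 4 by transitivity, so a == 1 is guaranteed here
            System.out.println("a = " + a); // 4
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

Without the volatile modifier on flag, the reader could spin forever or observe a == 0; with it, the happens-before chain 1 → 2 → 3 → 4 makes the result deterministic.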

Volatile write-read memory semantics

The memory semantics of a volatile write are as follows:

When a volatile variable is written, JMM flushes the shared variables in the thread's local memory to main memory.

Taking the VolatileExample program above as an example, suppose thread A first executes the writer() method and thread B then executes the reader() method. Initially, flag and a in the local memory of both threads are in their initial state. The following figure is a schematic diagram of the state of the shared variables after thread A performs the volatile write:

The memory semantics of a volatile read are as follows: when a volatile variable is read, JMM invalidates the thread's local memory, forcing it to re-read the shared variables from main memory. As shown in the figure above, after thread B reads the flag variable, local memory B is set to invalid, so thread B must read the shared variables from main memory; this read makes the values in local memory B the same as those in main memory.

If we combine the volatile write and volatile read steps, after thread B reads a volatile variable, the values of all shared variables visible to thread A before writing to the volatile variable will immediately become visible to thread B.

Here is a summary of the memory semantics of volatile write and volatile read:

Thread A writes a volatile variable, which essentially sends a message to a thread that will read the volatile variable next.

Thread B reads a volatile variable, essentially receiving a message sent by a thread that modified the shared variable before writing the volatile variable.

Thread A writes a volatile variable, and then thread B reads the volatile variable, which is essentially thread A sending a message to thread B through main memory.

Implementation of volatile memory semantics

Next, let's look at how JMM implements the memory semantics of volatile write / read.

We mentioned earlier that reordering is divided into compiler reordering and processor reordering. To implement volatile's memory semantics, JMM restricts each of these two types of reordering. The following is the table of volatile reordering rules that JMM specifies for compilers:

In the table below, each row is the first operation in program order and each column is the second; NO means the two operations cannot be reordered.

| First operation \ Second operation | Ordinary read/write | volatile read | volatile write |
| --- | --- | --- | --- |
| Ordinary read/write | | | NO |
| volatile read | NO | NO | NO |
| volatile write | | NO | NO |

For example, the last cell of the ordinary read/write row means that, in program order, when the first operation is a read or write of an ordinary variable and the second operation is a volatile write, the compiler cannot reorder the two operations.

We can see from the above table:

When the second operation is a volatile write, no reordering is allowed regardless of what the first operation is. This rule ensures that operations before a volatile write are not reordered by the compiler to after it.

When the first operation is a volatile read, no reordering is allowed regardless of what the second operation is. This rule ensures that operations after a volatile read are not reordered by the compiler to before it.

When the first operation is volatile write and the second operation is volatile read, it cannot be reordered.

To implement volatile's memory semantics, the compiler inserts memory barriers into the instruction sequence when generating bytecode, preventing particular kinds of processor reordering. Since it is nearly impossible for the compiler to find an optimal arrangement that minimizes the total number of barriers, JMM adopts a conservative strategy. The following is the JMM barrier-insertion strategy under this conservative policy:

Insert a StoreStore barrier in front of each volatile write operation.

Insert a StoreLoad barrier after each volatile write operation.

Insert a LoadLoad barrier after each volatile read operation.

Insert a LoadStore barrier after each volatile read operation.

The above barrier-insertion strategy is very conservative, but it guarantees correct volatile memory semantics on any processor platform and in any program.
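The conservative strategy can be pictured as comments at the conceptual barrier points. The class below is an illustrative sketch of ours; the barriers are not written in Java source code, they are what JMM conceptually inserts around the volatile accesses.

```java
class BarrierPlacement {
    volatile int v;
    int plain;

    void write(int x) {
        plain = x;        // ordinary write
        // StoreStore barrier: the ordinary write above cannot pass the volatile write
        v = x;            // volatile write
        // StoreLoad barrier: the volatile write cannot pass later volatile reads/writes
    }

    int read() {
        int r = v;        // volatile read
        // LoadLoad barrier: later ordinary reads cannot pass the volatile read
        // LoadStore barrier: later ordinary writes cannot pass the volatile read
        return r + plain; // ordinary read stays below the barriers
    }
}
```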

The following is a schematic diagram of the instruction sequences generated under the conservative strategy. For a volatile write, a StoreStore barrier is inserted before it and a StoreLoad barrier after it. For a volatile read, the LoadLoad barrier prevents the processor from reordering the volatile read above with ordinary reads below, and the LoadStore barrier prevents the processor from reordering the volatile read above with ordinary writes below.

The above memory barrier insertion strategies for volatile writes and volatile reads are very conservative. In actual execution, the compiler can omit unnecessary barriers according to specific circumstances, as long as the write-read memory semantics of volatile is not changed. Let's illustrate it with a specific example code:

```java
class VolatileBarrierExample {
    int a;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;         // first volatile read
        int j = v2;         // second volatile read
        a = i + j;          // ordinary write
        v1 = i + 1;         // first volatile write
        v2 = j * 2;         // second volatile write
    }

    // ... other methods
}
```

For the readAndWrite () method, the compiler can do the following optimizations when generating bytecode:

As mentioned earlier, x86 processors only reorder write-read operations; they do not reorder read-read, read-write, or write-write operations, so the memory barriers corresponding to these three types are omitted on x86. On x86, JMM only needs to insert a StoreLoad barrier after each volatile write to correctly implement volatile write-read memory semantics. This means that on x86 processors, volatile writes are much more expensive than volatile reads (because the StoreLoad barrier is costly to execute).

Why JSR-133 enhanced the memory semantics of volatile

In the old Java memory model before JSR-133, although reordering between volatile variables was not allowed, reordering between volatile variables and ordinary variables was allowed. In the old memory model, the VolatileExample program might be reordered to execute in the following order:

In the old memory model, when there was no data dependence between 1 and 2, they could be reordered (and similarly 3 and 4). The result is that when thread B executes 4, it might not see the modification thread A made to the shared variable in 1.

Therefore, in the old memory model, volatile write-read did not have the memory semantics of monitor release-acquire. To provide a mechanism for inter-thread communication more lightweight than monitor locks, the JSR-133 expert group decided to enhance volatile's memory semantics: strictly restrict compiler and processor reordering between volatile variables and ordinary variables, so that volatile write-read has the same memory semantics as monitor release-acquire. Under the compiler reordering rules and processor memory-barrier insertion strategies, any reordering between a volatile variable and an ordinary variable that could break volatile's memory semantics is prohibited.

Because volatile guarantees atomicity only for reads/writes of a single volatile variable, while the mutual exclusion of monitor locks guarantees atomicity for an entire critical section, monitor locks are more powerful than volatile in functionality; volatile has the advantage in scalability and execution performance. If you want to use volatile instead of a monitor lock in your program, proceed with caution.
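A classic case where volatile alone is sufficient is a one-writer status flag: the flag is only ever assigned (no compound read-modify-write), and volatile's visibility guarantee makes the stop request visible to the worker thread. The sketch below (class name StopFlagWorker is ours) shows this pattern.

```java
public class StopFlagWorker implements Runnable {
    private volatile boolean stopped = false; // single writer, many readers: safe volatile use

    public void stop() {
        stopped = true;                       // a single volatile write, no compound operation
    }

    @Override
    public void run() {
        long ticks = 0;
        while (!stopped) {                    // volatile read: the stop request becomes visible
            ticks++;
        }
        System.out.println("stopped after " + ticks + " ticks");
    }
}
```

With an ordinary boolean, the JIT compiler could legally hoist the flag read out of the loop and the worker might never stop; the volatile read forbids that optimization.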

The above is how to understand volatile in Java. The editor believes these are knowledge points you may encounter or use in daily work. I hope you can learn more from this article; for more details, please follow the industry information channel.
