
How to understand the memory semantics of volatile in Java multithreading

2025-02-23 Update From: SLTechnology News&Howtos

Shulou(Shulou.com) 06/03 Report--

In this article, we analyze the memory semantics of the volatile keyword in Java multithreaded programming from a professional point of view. I hope you get something out of it after reading.

The volatile keyword is the lightest-weight synchronization mechanism provided by the Java virtual machine. Because volatile is closely tied to the Java memory model, we first review the Java memory model before introducing the keyword itself (it has also been covered in previous blog posts).

1. The Java Memory Model (JMM)

JMM is a specification that defines the access rules for shared variables. Its purpose is to solve the thread-safety problems caused by inconsistency between each thread's local (working) memory and shared main memory, and by instruction reordering performed by compilers and processors, thereby guaranteeing the atomicity, visibility, and ordering needed for multithreaded programming.

JMM stipulates that all variables are stored in main memory. Each thread has its own working memory, which holds copies of the main-memory variables that the thread uses. All of a thread's operations on a variable must be carried out in its working memory, and variable values are passed between threads only through main memory.

JMM defines the interaction protocol between main memory and working memory in 8 operations:

1) lock: acts on main memory; marks a variable as exclusively owned by one thread.
2) unlock: acts on main memory; releases a variable that is in the locked state.
3) read: acts on main memory; transfers the value of a variable from main memory to the thread's working memory.
4) load: acts on working memory; puts the value obtained by read into the working-memory copy of the variable.
5) use: acts on working memory; passes the value of a working-memory variable to the execution engine.
6) assign: acts on working memory; assigns a value received from the execution engine to a working-memory variable.
7) store: acts on working memory; transfers the value of a working-memory variable to main memory.
8) write: acts on main memory; puts the value obtained by store into the main-memory variable.

These 8 operations, together with the rules restricting them, determine which memory accesses are thread-safe under concurrency, but reasoning with them directly is tedious. JDK 1.5 therefore introduced the happens-before rules as a simpler way to judge thread safety.

The happens-before rules can be regarded as the core of JMM. Happens-before determines the order of two operations, which may lie in the same thread or in two different threads.

Happens-before states that if operation A happens-before operation B, then the result of A is visible to B (this does not mean the processor must execute in happens-before order; it may optimize freely as long as the execution result is unchanged). The happens-before rules were introduced in a previous blog post, so they are not repeated here (http://www.cnblogs.com/gdy1993/p/9117331.html).

The JMM memory rules are only a specification; their final implementation is achieved through the cooperation of the Java virtual machine, the compiler, and the processor, and memory barriers are the link among the three.

The Java language encapsulates these underlying implementation details and provides keywords and classes such as synchronized, Lock, and volatile to guarantee thread safety.

2. The volatile keyword

(1) volatile's guarantee of visibility

Before introducing the volatile keyword, let's take a look at this code:

// Thread 1
boolean stop = false;
while (!stop) {
    doSomething();
}

// Thread 2
stop = true;

There are two threads: thread 1 keeps executing doSomething() while stop == false; at some point thread 2 sets stop to true, intending to stop thread 1. Many people stop threads this way, but it is not safe. Because stop is an ordinary variable, thread 2's modification cannot be immediately perceived by thread 1: thread 2's write to stop may exist only in thread 2's own working memory, while the copy of stop in thread 1's working memory remains unchanged, so thread 1 may never terminate. The probability of this is small, but once it happens the consequences are serious.

This problem can be avoided by using volatile variable modification, which is the first important meaning of volatile:

Volatile-decorated variables ensure the visibility of different threads on the operation of this variable, that is, one thread modifies the value of the variable, and the new value is immediately visible to other threads.

The principle of volatile's visibility assurance:

For a volatile-modified variable, when a thread modifies it, the new value is forced to be flushed to main memory, and the cached copies of the variable in other threads' working memory are invalidated, so when other threads next access the variable they must reload it from main memory.
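The stop-flag pattern above can be sketched as a runnable demo. This is only an illustrative sketch (the class and method names StopDemo/runWorker are made up for this example, not from the article): the worker spins until the volatile flag's new value becomes visible to it.

```java
public class StopDemo {
    // volatile guarantees the main thread's write becomes visible to the worker
    private static volatile boolean stop = false;

    static boolean runWorker() {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait; each iteration re-reads stop from main memory
            }
        });
        worker.start();
        try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        stop = true;                 // flushed to main memory, seen by the worker
        try { worker.join(2000); } catch (InterruptedException ignored) { }
        return !worker.isAlive();    // true: the worker observed the write and exited
    }

    public static void main(String[] args) {
        System.out.println("worker stopped: " + runWorker());
    }
}
```

Without volatile, the JIT compiler is permitted to hoist the read of stop out of the loop, and the worker may spin forever.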

(2) Does volatile guarantee atomicity?

First, take a look at this piece of code (from the book Understanding the Java Virtual Machine):

public class VolatileTest {
    public static volatile int race = 0;

    public static void increase() {
        race++;
    }

    public static final int THREAD_COUNT = 20;

    public static void main(String[] args) {
        Thread[] threads = new Thread[THREAD_COUNT];
        for (int i = 0; i < THREAD_COUNT; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int j = 0; j < 10000; j++) {
                        increase();
                    }
                }
            });
            threads[i].start();
        }
        // wait until only the main thread remains
        while (Thread.activeCount() > 1) {
            Thread.yield();
        }
        System.out.println(race); // race < 200000
    }
}

race is a shared variable modified by volatile. We create 20 threads, each incrementing the shared variable 10000 times. If volatile guaranteed atomicity, the final value of race would be exactly 200000. In practice, the printed value of race is almost always less than 200000, which proves that volatile does not make compound operations on a shared variable atomic. The reason is as follows:

Thread 1 reads the value of race, and then its CPU time slice ends. Thread 2 then reads the shared variable, performs the increment, and flushes the result to main memory. Thread 1, however, has already read race, so it still holds the old value; when it resumes, it increments that stale value and flushes it to main memory, overwriting thread 2's update, and one increment is lost. This is why volatile only guarantees that a read sees a relatively fresh value, not that a read-modify-write sequence is atomic.
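The standard fix for this lost-update problem is an atomic read-modify-write, e.g. java.util.concurrent.atomic.AtomicInteger (the class name AtomicRace and the run() helper here are illustrative, not from the article):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicRace {
    private static final AtomicInteger race = new AtomicInteger(0);
    static final int THREADS = 20;
    static final int INCREMENTS = 10000;

    static int run() {
        Thread[] threads = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < INCREMENTS; j++) {
                    race.incrementAndGet(); // atomic read-modify-write (CAS loop)
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            try { t.join(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return race.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 200000
    }
}
```

Unlike the volatile version, every increment is applied atomically, so the result is always exactly 200000.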

(3) volatile's guarantee of ordering

First, let's look at a piece of code like this:

// Thread 1
boolean initialized = false;
context = loadContext();
initialized = true;

// Thread 2
while (!initialized) {
    sleep();
}
doSomething(context);

Thread 2 waits until initialized is true and then uses context; thread 1 loads the context and sets initialized to true once loading completes. However, initialized is only an ordinary variable. For ordinary variables, the JMM only guarantees that, within a single thread, every place that depends on an assignment sees the correct value; it does not guarantee that the order of assignments observed by other threads matches program order. So thread 1's write of initialized = true may be reordered before loadContext() finishes; thread 2 then sees initialized == true while context is still unloaded and executes doSomething(context), with very strange results.
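Declaring the flag volatile is exactly the fix: the volatile write may not be reordered with the writes before it, so the flag safely publishes the context. A minimal sketch (the names ContextHolder, loadThread, and readThread are illustrative, and a String stands in for the real context object):

```java
public class ContextHolder {
    private static volatile boolean initialized = false;
    private static String context; // stand-in for the real context object

    // thread 1's role
    static void loadThread() {
        context = "loaded";   // may NOT be reordered past the volatile write below
        initialized = true;   // volatile write publishes context
    }

    // thread 2's role
    static String readThread() {
        while (!initialized) {
            try { Thread.sleep(1); } catch (InterruptedException ignored) { }
        }
        return context;       // guaranteed to see "loaded"
    }
}
```

Once readThread() observes initialized == true, the happens-before guarantee ensures it also sees the fully loaded context.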

The second semantics of volatile is to disable reordering:

A write to a volatile variable is not reordered with any read or write that precedes it.

A read of a volatile variable is not reordered with any read or write that follows it.

(4) the underlying implementation principle of volatile

At the bottom layer, the Java virtual machine implements volatile semantics through memory barriers.

For a write to a volatile variable:

① The JVM inserts a release barrier (LoadStore + StoreStore) before the write, which prohibits reordering the volatile write with any read or write that precedes it.

② The JVM inserts a store barrier (StoreLoad) after the write, which forces the write to the volatile variable to be synchronized to main memory.

For a read of a volatile variable:

③ The JVM inserts a load barrier before the read, so that each read of the volatile variable is reloaded from main memory (refreshing the processor cache).

④ The JVM inserts an acquire barrier (LoadLoad + LoadStore) after the read, which prohibits reordering the volatile read with any read or write that follows it.

① and ③ guarantee visibility; ② and ④ guarantee ordering.
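Since Java 9, the acquire/release/full barriers described above are also exposed directly as static fence methods on java.lang.invoke.VarHandle. The sketch below (class FenceDemo is illustrative) inserts them by hand around a non-volatile flag, mirroring what the JVM does implicitly for a volatile one:

```java
import java.lang.invoke.VarHandle;

public class FenceDemo {
    static int data;
    static boolean ready; // deliberately NOT volatile; fences inserted by hand

    static void write(int v) {
        data = v;
        VarHandle.releaseFence(); // LoadStore + StoreStore: prior accesses stay above
        ready = true;
        VarHandle.fullFence();    // includes StoreLoad: publish before later loads
    }

    static int read() {
        boolean r = ready;
        VarHandle.acquireFence(); // LoadLoad + LoadStore: later accesses stay below
        return r ? data : -1;     // if ready was seen, data is fully visible
    }

    public static void main(String[] args) {
        write(42);
        System.out.println(read());
    }
}
```

In real code you would simply declare the field volatile; the explicit fences are shown only to make the barrier placement of ①-④ concrete.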

(5) the relationship between volatile keyword and happens-before

The volatile rule among the happens-before rules states: a write to a volatile field happens-before every subsequent read of that same field.

Suppose the writer thread executes write() and the reader thread then executes read(). By the program-order rule, the ordinary writes inside write() happen-before the volatile write at its end; by the volatile rule, that volatile write happens-before the reader's volatile read; and by transitivity, the writer's ordinary writes happen-before everything the reader does after the volatile read. In other words, every update the writer makes before writing the volatile variable is visible to the reader once it has read that variable.
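The write()/read() pair described above is the classic JMM example; a minimal version (the field names a and flag are conventional in JMM literature, not from this article) looks like this:

```java
class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    // executed by the writer thread
    void write() {
        a = 1;          // 1: ordinary write, happens-before 2 by program order
        flag = true;    // 2: volatile write, happens-before the volatile read 3
    }

    // executed by the reader thread
    int read() {
        if (flag) {     // 3: volatile read
            return a;   // 4: by transitivity, 1 happens-before 4, so a == 1 here
        }
        return -1;      // the flag has not been seen yet
    }
}
```

If read() observes flag == true, the happens-before chain 1 → 2 → 3 → 4 guarantees it returns 1, never the stale 0.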

That is how to understand the memory semantics of volatile in Java multithreading. If you have similar doubts, the analysis above may help you work through them.
