In this issue, the editor walks you through a deeper understanding of the Java memory model (JMM). The article is rich in content and approaches the topic from a professional point of view; I hope you get something out of it.
Java memory model
The Java memory model (JMM) is an abstract concept rather than something that physically exists. It describes a set of rules, or a specification, that defines how variables in a program (instance fields, static fields, and the elements of array objects) are accessed. The JMM shields the memory-access differences of various hardware and operating systems, so that Java programs achieve consistent memory-access behavior on every platform.
Note the difference between the JMM and the JVM's memory region partitioning:
Difference: the JVM memory regions (heap, stack, method area, and so on) describe how the runtime lays out memory, whereas the JMM describes a set of rules around atomicity, ordering, and visibility.
Similarity: both distinguish shared areas from thread-private areas.
Main memory and working memory
A processor's registers read and write several orders of magnitude faster than main memory. To bridge this speed gap, caches are placed between them.
Adding caches introduces a new problem: cache coherence. When multiple caches share the same main-memory region, their data may become inconsistent, and a coherence protocol is needed to resolve this.
In the JMM, all variables are stored in main memory. Each thread has its own working memory, which conceptually corresponds to caches and registers and holds copies of the variables that the thread uses.
A thread can only directly manipulate the variables in its working memory; variable values are transferred between threads through main memory, as the sketch below illustrates.
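To make the stale-copy problem concrete, here is a minimal sketch (the class name StaleFlagExample and the one-second delay are illustrative, not from the original article) in which a reader thread may keep using the copy of a non-volatile flag in its own working memory and never observe the writer's update; whether it actually hangs depends on the JIT and the platform:

import java.util.concurrent.TimeUnit;

public class StaleFlagExample {
    // Not volatile: the reader thread may keep re-using the copy in its working memory.
    private static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait; without volatile the updated value may never be re-read from main memory
            }
            System.out.println("reader observed running = false");
        });
        reader.start();

        TimeUnit.SECONDS.sleep(1);
        running = false; // written to the main thread's working memory, then flushed to main memory at some point
        System.out.println("writer set running = false");
    }
}

On a typical JIT-compiling JVM the reader loop often never terminates, which is exactly the visibility problem that volatile (discussed later) is meant to address.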
Data storage type and operation mode
Local variables inside a method are stored directly in the stack frame, which belongs to working memory.
Local variables of reference type: the reference itself is stored in working memory, while the object it points to is stored in main memory.
Member variables, static variables, and class information are all stored in main memory.
Main memory is shared by having each thread copy the data it needs into its working memory and flush the result back to main memory after the operation completes. A small example follows.
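As an illustration of the list above (the class and field names are made up for this sketch, not taken from the article), the comments mark where each kind of data conceptually lives:

public class StorageExample {
    private static int staticCounter;     // static variable: shared, stored in main memory
    private int memberValue;              // member variable: shared, lives with the object in main memory (the heap)

    public void compute() {
        int localPrimitive = 42;          // primitive local variable: stack frame, i.e. thread-private working memory
        int[] localRef = new int[8];      // the reference is in the stack frame; the array object itself is in main memory
        memberValue = localPrimitive;     // the thread operates on its working-memory copy and the result
                                          // is flushed back to main memory afterwards
    }
}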
Inter-memory interaction
The Java memory model defines eight operations to complete the interaction between main memory and working memory.
Read: transfers the value of a variable from main memory to working memory
Load: executed after read; puts the value obtained by read into the working-memory copy of the variable
Use: passes the value of a variable in working memory to the execution engine
Assign: assigns a value received from the execution engine to a variable in working memory
Store: transfers the value of a variable in working memory to main memory
Write: executed after store; puts the value obtained by store into the variable in main memory
Lock: acts on a variable in main memory, marking it as exclusively owned by one thread
Unlock: acts on a variable in main memory, releasing a locked variable so that other threads can lock it
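As a rough, simplified illustration of how ordinary code maps onto these operations (the JVM may combine or interleave the steps as long as it stays within the rules, and the class below is invented for this sketch):

public class InteractionExample {
    private int shared;   // the master copy lives in main memory

    public void readThenWrite() {
        int local = shared;    // read  : transfer the value of shared from main memory
                               // load  : place that value into the working-memory copy
                               // use   : hand the copy's value to the execution engine
        shared = local + 1;    // assign: the execution engine gives the new value to working memory
                               // store : transfer the working-memory value toward main memory
                               // write : put the stored value into shared in main memory
    }
    // lock and unlock only come into play with synchronization: they mark a main-memory
    // variable as exclusively owned by one thread and later release it again.
}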
Conditions for reordering instructions
Reordering must not change the result of the program in a single-threaded environment.
Reordering is not allowed if there is a data dependency between the instructions.
Instructions may be reordered only when the ordering cannot be deduced from the happens-before principle. A short sketch follows.
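A small sketch of what these conditions mean in practice (the field names are illustrative):

public class ReorderExample {
    int a, b, x, y;

    void dependent() {
        a = 1;        // (1)
        b = a + 1;    // (2) depends on (1), so (1) and (2) must not be reordered
    }

    void independent() {
        x = 1;        // (3)
        y = 2;        // (4) no data dependency and no happens-before relation with (3),
                      //     so the compiler or processor may execute (4) before (3);
                      //     a single-threaded caller cannot tell the difference,
                      //     but another thread observing x and y might.
    }
}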
Three characteristics of the memory model
1. Atomicity
The Java memory model guarantees that the read, load, use, assign, store, write, lock, and unlock operations are atomic; for example, an assign to a variable of type int is atomic. However, the JMM allows the virtual machine to split reads and writes of 64-bit data (long and double) not declared volatile into two 32-bit operations, so for such variables the load, store, read, and write operations may be non-atomic.
A common misconception is that atomic types such as int have no thread-safety problems in a multithreaded environment. In the thread-unsafe example referred to earlier, cnt is an int, and after 1000 threads each increment it the result can be 997 instead of 1000.
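The thread-unsafe example is not reproduced in this article; a minimal sketch of that kind of counter (reconstructed here for context, so the details may differ from the original) looks like this:

public class ThreadUnsafeExample {
    private int cnt = 0;

    public void add() {
        cnt++;          // read-modify-write: not atomic, so concurrent increments can be lost
    }

    public int get() {
        return cnt;
    }
}

Driving this class with the same 1000-thread harness shown later typically prints a value below 1000, such as the 997 mentioned above.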
To simplify the discussion, the memory interactions are reduced to three operations: load, assign, and store.
Consider two threads operating on cnt at the same time. Load, assign, and store are not atomic as a whole, so T2 can still read the old value while T1 has modified cnt but has not yet written the new value back to main memory. Although the two threads together performed two increments, the value of cnt in main memory ends up as 1 instead of 2. The atomicity of int reads and writes therefore only means that the individual operations such as load, assign, and store are atomic.
AtomicInteger can guarantee the atomicity of multiple thread modifications.
After rewriting previously thread-unsafe code using AtomicInteger, you get the following thread-safe implementation:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicExample {
    private AtomicInteger cnt = new AtomicInteger();

    public void add() {
        cnt.incrementAndGet();
    }

    public int get() {
        return cnt.get();
    }
}

// driver code
public static void main(String[] args) throws InterruptedException {
    final int threadSize = 1000;
    AtomicExample example = new AtomicExample(); // only this statement is modified
    final CountDownLatch countDownLatch = new CountDownLatch(threadSize);
    ExecutorService executorService = Executors.newCachedThreadPool();
    for (int i = 0; i < threadSize; i++) {
        executorService.execute(() -> {
            example.add();
            countDownLatch.countDown();
        });
    }
    countDownLatch.await();
    executorService.shutdown();
    System.out.println(example.get());
}

The program prints 1000.
In addition to using atomic classes, synchronized mutexes can also be used to ensure atomicity of operations. The corresponding inter-memory operations are lock and unlock, and the corresponding bytecode instructions in the virtual machine implementation are monitorenter and monitorexit.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AtomicSynchronizedExample {
    private int cnt = 0;

    public synchronized void add() {
        cnt++;
    }

    public synchronized int get() {
        return cnt;
    }
}

// driver code
public static void main(String[] args) throws InterruptedException {
    final int threadSize = 1000;
    AtomicSynchronizedExample example = new AtomicSynchronizedExample();
    final CountDownLatch countDownLatch = new CountDownLatch(threadSize);
    ExecutorService executorService = Executors.newCachedThreadPool();
    for (int i = 0; i < threadSize; i++) {
        executorService.execute(() -> {
            example.add();
            countDownLatch.countDown();
        });
    }
    countDownLatch.await();
    executorService.shutdown();
    System.out.println(example.get());
}

The program prints 1000.

2. Visibility
Visibility means that when one thread modifies the value of a shared variable, other threads become aware of the change immediately. The Java memory model achieves visibility by synchronizing the new value back to main memory after the variable is modified and by refreshing the value from main memory before the variable is read. Internally, the JMM usually relies on memory barriers: by prohibiting certain reorderings it provides the memory-visibility guarantees that implement the various happens-before rules. Considerable additional complexity is needed to make different compilers and processor architectures behave as consistently as possible.
There are three main ways to achieve visibility:
volatile: writing a volatile variable forces the new value of that variable (together with the thread's view of other variables at that moment) to be flushed from working memory to main memory, and reading it forces a refresh from main memory.
synchronized: before performing an unlock operation on a variable, the thread must synchronize the variable's value back to main memory.
final: once a field modified by the final keyword has been initialized in the constructor and there is no this escape (no other thread accesses the half-initialized object through the this reference), other threads can see the value of the final field.
Decorating the cnt variable in the earlier thread-unsafe example with volatile does not solve the thread-safety problem, because volatile does not guarantee the atomicity of the operation; a short contrast follows.
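A small sketch of that contrast (the class and field names are illustrative): the stop flag only needs visibility, so volatile is enough, while the volatile counter still loses updates because cnt++ is a compound read-modify-write:

public class VolatileExample {
    private volatile boolean stopped = false;  // visibility alone is sufficient here
    private volatile int cnt = 0;              // visibility alone is NOT sufficient here

    public void stop() {
        stopped = true;     // other threads will promptly see this write
    }

    public void work() {
        while (!stopped) {
            cnt++;          // still unsafe: two threads can read the same old value,
                            // both increment it, and write back the same result
        }
    }
}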
3. Ordering
Ordering means that, observed from within a thread, all operations appear ordered; observed from one thread looking at another, operations appear out of order, because instruction reordering has occurred. The Java memory model allows the compiler and processor to reorder instructions. Reordering does not affect the result of single-threaded execution, but it can affect the correctness of concurrent, multithreaded execution.
The volatile keyword forbids instruction reordering by inserting memory barriers: instructions that follow a barrier cannot be reordered to a position before it.
Ordering can also be ensured with synchronized, which guarantees that only one thread executes the synchronized code at a time, effectively making threads execute that code sequentially. A classic example follows.
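A classic illustration of why the ordering guarantee matters is the double-checked locking singleton. Constructing an object roughly involves allocating memory, initializing it, and publishing the reference; without volatile these steps may be reordered, so another thread could observe a non-null but half-constructed instance. This is a well-known standard pattern rather than code from the original article:

public class Singleton {
    // volatile forbids reordering the initialization of the object with the publication of its reference
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}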
The happens-before principle (Happens-Before)
The JSR-133 memory model uses the happens-before principle to guarantee visibility between operations across threads; it is also the precise definition of the vague notion of visibility found in earlier language specifications. As mentioned above, volatile and synchronized can be used to guarantee ordering; in addition, the JMM defines the happens-before rules below, which establish that the effect of one operation is visible to another without any explicit synchronization.
Because of instruction reordering, a happens-before relationship between two operations does not mean that the first operation must physically execute before the second. It only requires that the result of the first operation be visible to the second, and that the first operation be ordered before the second.
1. Single Thread Rule (Program Order Rule)
Within a single thread, an operation that comes earlier in program order happens before operations that come later.
2. Monitor Lock Rule
An unlock operation happens before a subsequent lock operation on the same lock.
3. Volatile Variable Rule
A write to a volatile variable happens before subsequent reads of that variable.
4. Thread Start Rule
A call to the start() method of a Thread object happens before every action in the started thread.
5. Thread Join Rule
Every action in a thread happens before another thread successfully returns from a join() on that thread.
6. Thread Interruption Rule
A call to a thread's interrupt() method happens before the interrupted thread's code detects the interruption, for example via the interrupted() method.
7. Finalizer Rule
The completion of an object's initialization (the end of its constructor) happens before the start of its finalize() method.
8. Transitivity
If operation A happens before operation B, and operation B happens before operation C, then operation A happens before operation C.
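A small sketch combining the thread start rule, the thread join rule, and transitivity (the class is invented for this illustration; note that data is deliberately not volatile):

public class HappensBeforeExample {
    private static int data = 0;

    public static void main(String[] args) throws InterruptedException {
        data = 1;                          // (A) program order: happens before the start() call below
        Thread worker = new Thread(() -> {
            data = data + 1;               // (B) thread start rule: (A) happens before (B), so data is seen as 1
        });
        worker.start();
        worker.join();                     // thread join rule: (B) happens before everything after join()
        System.out.println(data);          // by transitivity this is guaranteed to print 2
    }
}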
That concludes this look at how to gain a deeper understanding of the Java memory model (JMM). If you happen to have similar doubts, you may refer to the analysis above. If you want to learn more, you are welcome to follow the industry information channel.