2025-01-18 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/01 Report--
This article explains the Java memory model in a concise, easy-to-follow way. I hope the detailed introduction below gives you something useful.
Classification of concurrent programming models
In concurrent programming, we need to deal with two key issues: how to communicate between threads and how to synchronize between threads (threads here are active entities that execute concurrently). Communication refers to the mechanism by which threads exchange information. In imperative programming, there are two communication mechanisms between threads: shared memory and message passing.
In the shared-memory concurrency model, threads share the program's common state and communicate implicitly by writing to and reading from that state in memory. In the message-passing concurrency model, there is no common state between threads, and threads must communicate by explicitly sending messages.
Synchronization is the mechanism a program uses to control the relative order of operations between different threads. In the shared-memory concurrency model, synchronization is explicit: the programmer must explicitly specify that a method or piece of code is to be executed mutually exclusively between threads. In the message-passing concurrency model, synchronization is implicit, because a message must be sent before it can be received.
Java's concurrency uses the shared-memory model: communication between Java threads is always implicit, and the whole communication process is completely transparent to the programmer. Java programmers who write multithreaded programs without understanding how this implicit communication works are likely to run into all sorts of strange memory visibility problems.
Abstraction of Java memory Model
In Java, all instance fields, static fields, and array elements are stored in heap memory, and the heap is shared among threads (this article uses the term "shared variables" to refer to instance fields, static fields, and array elements). Local variables, method parameters (called formal method parameters by the Java Language Specification), and exception handler parameters are not shared between threads; they are not affected by memory visibility problems and are not affected by the memory model.
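To make the distinction concrete, here is a minimal sketch (the class and member names are illustrative, not from the original article): the heap-resident members are shared variables, while the parameter and local variable exist only on the executing thread's stack.

```java
public class SharedVsLocal {
    static int staticField = 0;     // static field: on the heap, shared between threads
    int instanceField = 0;          // instance field: on the heap, shared between threads
    int[] array = new int[4];       // array elements: on the heap, shared between threads

    // formalParam and localVar live on the calling thread's stack:
    // they are private to that thread and never suffer visibility problems.
    int compute(int formalParam) {
        int localVar = formalParam + 1;
        return localVar;
    }

    public static void main(String[] args) {
        SharedVsLocal demo = new SharedVsLocal();
        System.out.println(demo.compute(41)); // 41 + 1 = 42
    }
}
```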
Communication between Java threads is controlled by the Java memory model (JMM for short in this article); JMM determines when a write to a shared variable by one thread becomes visible to another thread. From an abstract point of view, JMM defines the relationship between threads and main memory: shared variables are stored in main memory (main memory), each thread has a private local memory (local memory), and each thread's local memory holds the copies of the shared variables that the thread reads and writes. Local memory is an abstraction of JMM and does not physically exist; it covers caches, write buffers, registers, and other hardware and compiler optimizations. The abstract diagram of the Java memory model is as follows:
From the figure above, for thread A and thread B to communicate, the following two steps must occur:
First, thread A flushes the updated shared variables in local memory A to main memory.
Thread B then goes to main memory to read the shared variables that have been updated by thread A.
The following is a diagram to illustrate these two steps:
As shown in the figure above, local memories A and B hold copies of the shared variable x in main memory. Assume that initially, the value of x in all three memories is 0. When thread A executes, it temporarily stores its updated x value (say, 1) in its own local memory A. When thread A and thread B need to communicate, thread A first flushes the modified x value from its local memory to main memory, so the x value in main memory becomes 1. Then thread B goes to main memory to read thread A's updated x value, and the x value in thread B's local memory becomes 1 as well.
Taken as a whole, these two steps are essentially thread A sending a message to thread B, and the communication process must pass through the main memory. JMM provides memory visibility guarantees for java programmers by controlling the interaction between the main memory and the local memory of each thread.
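The two-step communication above can be sketched as a runnable Java program (names and structure are my own; the `volatile` flag is used here because JMM guarantees that a volatile write flushes preceding writes to main memory, and a volatile read refreshes local memory from main memory):

```java
public class MainMemoryDemo {
    static int x = 0;                      // shared variable, lives in main memory
    static volatile boolean ready = false; // volatile write/read forces the flush/refresh steps

    static int runDemo() {
        Thread a = new Thread(() -> {
            x = 1;        // thread A updates its copy of x
            ready = true; // volatile write: x is flushed to main memory before this
        });
        final int[] seen = new int[1];
        Thread b = new Thread(() -> {
            while (!ready) { } // volatile read: refreshes thread B's view from main memory
            seen[0] = x;       // guaranteed to observe x == 1
        });
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println("B sees x = " + runDemo()); // always 1
    }
}
```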
Reordering
Compilers and processors often reorder instructions in order to improve performance when executing programs. There are three types of reordering:
Compiler-optimized reordering. The compiler can rearrange the execution order of statements without changing the semantics of single-threaded programs.
Instruction-level parallel reordering. Modern processors use instruction-level parallel technology (Instruction-Level Parallelism, ILP) to execute multiple instructions overlapped. If there is no data dependency, the processor can change the execution order of the statements corresponding to the machine instructions.
Memory-system reordering. Because the processor uses caches and read/write buffers, load and store operations can appear to be performed out of order.
From the Java source code to the instruction sequence that is actually executed, a program goes through all three kinds of reordering above. Type 1 is a compiler reordering; types 2 and 3 are processor reorderings. Any of these reorderings can cause memory visibility problems in multithreaded programs. For the compiler, JMM's compiler reordering rules prohibit specific types of compiler reordering (not all compiler reordering is prohibited). For processor reordering, JMM's rules require the Java compiler, when generating the instruction sequence, to insert specific types of memory barrier instructions (Intel calls them memory fences), which prohibit specific types of processor reordering (again, not all processor reordering is prohibited).
JMM is a language-level memory model that ensures consistent memory visibility for programmers by prohibiting specific types of compiler reordering and processor reordering on different compilers and different processor platforms.
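A small illustration of the data-dependency constraint mentioned above (the class and method names are mine): the compiler and processor are free to reorder two statements with no data dependency, but a statement that depends on both must still come last, so single-threaded semantics are preserved ("as-if-serial"):

```java
public class DependencyDemo {
    static int compute() {
        int a = 1;      // S1
        int b = 2;      // S2: no data dependency on S1, so S1 and S2 may be reordered
        int c = a + b;  // S3: depends on both S1 and S2, so it must execute after them
        return c;
    }

    public static void main(String[] args) {
        // Whatever reordering happens between S1 and S2, the single-threaded
        // result is unchanged: compute() always returns 3.
        System.out.println(compute());
    }
}
```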
Processor reordering and memory barrier instructions
Modern processors use write buffers to temporarily hold data being written to memory. The write buffer keeps the instruction pipeline running continuously and avoids the delay incurred when the processor stalls waiting for a write to memory. It also reduces pressure on the memory bus by flushing the buffer in batches and by merging multiple writes to the same memory address. Despite these benefits, the write buffer on each processor is visible only to the processor it belongs to. This has an important consequence for the order of memory operations: the order in which a processor reads/writes memory is not necessarily the order in which those memory operations actually take effect! To illustrate, consider the following example:
Processor A          Processor B
a = 1;  // A1        b = 2;  // B1
x = b;  // A2        y = a;  // B2

Initial state: a = b = 0
The processors may end up with the result: x = y = 0
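The two-processor example can be written as a runnable Java sketch (class and field names are mine). A single run may produce any of the legal outcomes; x = y = 0 is possible on real hardware precisely because of write-buffer (store-load) reordering or interleaving:

```java
public class StoreLoadDemo {
    static int a = 0, b = 0;
    static int x = -1, y = -1;

    static void runOnce() {
        a = 0; b = 0;
        Thread pa = new Thread(() -> { a = 1; x = b; }); // A1 then A2 (program order)
        Thread pb = new Thread(() -> { b = 2; y = a; }); // B1 then B2 (program order)
        pa.start(); pb.start();
        try {
            pa.join(); pb.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        runOnce();
        // Possible results include x = 0, y = 0: each processor buffers its store,
        // reads the other variable from memory, then flushes the buffer.
        System.out.println("x = " + x + ", y = " + y);
    }
}
```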
Suppose processor A and processor B perform their memory accesses in parallel, each in program order; they may still end up with the result x = y = 0. The reason is shown in the following figure:
Here, processor A and processor B can first write their shared variables into their own write buffers (A1, B1), then read the other shared variable from memory (A2, B2), and finally flush the dirty data in their own write buffers to memory (A3, B3). When executed in this order, the program gets the result x = y = 0.
Judging from the order in which the memory operations actually take effect, the write A1 is not really performed until processor A executes A3 and flushes its own write buffer. Although processor A performs its memory operations in the order A1 -> A2, the order in which they actually take effect in memory is A2 -> A1. Processor A's memory operation order has been reordered (the same holds for processor B, so it is not repeated here).
The key point is that because a write buffer is visible only to its own processor, the order in which a processor performs memory operations can be inconsistent with the order in which those operations actually take effect in memory. And because modern processors all use write buffers, modern processors all allow reordering of write-read operations.
The following table lists the reordering types allowed by common processors ("N" means the processor does not allow the two operations to be reordered; "Y" means the reordering is allowed):

Processor    Load-Load    Load-Store    Store-Store    Store-Load    Data dependency
sparc-TSO    N            N             N              Y             N
x86          N            N             N              Y             N
ia64         Y            Y             Y              Y             N
PowerPC      Y            Y             Y              Y             N
We can see from the table that common processors all allow Store-Load reordering, and none of them allows reordering of operations that have a data dependency. sparc-TSO and x86 have relatively strong processor memory models that allow only write-read reordering (because both use write buffers).
Note 1: sparc-TSO refers to the behavior of the SPARC processor when running with the TSO (Total Store Order) memory model.
Note 2: x86 in the above table includes x64 and AMD64.
Note 3: because the memory model of the ARM processor is very similar to that of the PowerPC processor, this article does not discuss ARM separately.
Note 4: data dependencies will be described later.
To guarantee memory visibility, the Java compiler inserts memory barrier instructions at appropriate positions in the generated instruction sequence to prohibit particular types of processor reordering. JMM classifies memory barrier instructions into the following four categories:

LoadLoad barriers (Load1; LoadLoad; Load2): ensure that the data of Load1 is loaded before Load2 and all subsequent load instructions.
StoreStore barriers (Store1; StoreStore; Store2): ensure that the data of Store1 is flushed to memory (visible to other processors) before Store2 and all subsequent store instructions.
LoadStore barriers (Load1; LoadStore; Store2): ensure that the data of Load1 is loaded before Store2 and all subsequent store instructions are flushed to memory.
StoreLoad barriers (Store1; StoreLoad; Load2): ensure that the data of Store1 becomes visible to all processors before Load2 and all subsequent load instructions are performed.
The StoreLoad barrier is an "all-purpose" barrier: it has the effects of the other three barriers combined. Most modern multiprocessors support this barrier (the other barrier types are not necessarily supported by every processor). Executing it can be expensive, because the processor usually has to flush all the data in its write buffer to memory (a full buffer flush).
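As a conceptual sketch of where these barriers come into play, consider how a JVM may fence a volatile field (this is a simplified view in the spirit of the JSR-133 "cookbook" conservative strategy; the exact barrier placement is up to the JIT compiler and target processor, and the names below are illustrative):

```java
public class VolatileBarriers {
    int a;
    volatile int v;

    void writer() {
        a = 1;  // ordinary store
        // StoreStore barrier: the store to a is visible before the volatile store
        v = 2;  // volatile store
        // StoreLoad barrier: the volatile store is visible before any later load
    }

    int reader() {
        int r1 = v; // volatile load
        // LoadLoad + LoadStore barriers: later loads/stores cannot move above r1
        int r2 = a; // if r1 == 2, JMM guarantees r2 == 1
        return r2;
    }
}
```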
Happens-before
Starting with JDK 5, Java uses the new JSR-133 memory model (unless otherwise noted, this article refers to the JSR-133 memory model). JSR-133 introduced the concept of happens-before, which is used to describe memory visibility between operations. If the result of one operation needs to be visible to another operation, there must be a happens-before relationship between the two operations. The two operations can be within one thread or in different threads. The happens-before rules most relevant to programmers are as follows:
Program order rule: every operation in a thread happens-before any subsequent operation in that thread.
Monitor lock rule: unlocking a monitor lock happens-before every subsequent lock of that same monitor lock.
Volatile variable rule: a write to a volatile field happens-before any subsequent read of that volatile field.
Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
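The rules combine in practice. In this sketch (a common textbook-style pattern; the names are mine), the program order rule, the volatile variable rule, and transitivity together guarantee that a reader that observes `flag == true` also sees `a == 1`:

```java
public class HappensBeforeExample {
    int a = 0;
    volatile boolean flag = false;

    void writer() {
        a = 1;        // 1. program order rule: this happens-before the volatile write
        flag = true;  // 2. volatile write
    }

    int reader() {
        if (flag) {   // 3. volatile rule: the write to flag happens-before this read
            return a; // 4. transitivity: a = 1 happens-before this read, so a == 1
        }
        return -1;    // flag not yet observed as true
    }
}
```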
Note that a happens-before relationship between two operations does not mean the first operation must physically execute before the second! Happens-before only requires that the first operation (more precisely, its result) be visible to the second, and that the first be ordered before the second. The definition of happens-before is subtle; why it is defined this way will be explained later.
The relationship between happens-before and JMM is shown in the following figure:
As shown in the figure above, one happens-before rule usually corresponds to multiple compiler reordering rules and processor reordering rules. For Java programmers, the happens-before rules are easy to understand, and they spare programmers from having to learn the complex reordering rules and their implementations in order to understand the memory visibility guarantees JMM provides.