2025-03-28 Update, from SLTechnology News & Howtos (shulou.com)
This article introduces how to understand the Java memory model in depth. Many people run into these issues in real projects, so the sections below walk through the concepts step by step. I hope you read carefully and take something useful away!
Classification of concurrent programming models
In concurrent programming, we need to deal with two key issues: how threads communicate with each other and how threads synchronize with each other (a thread here means an active entity executing concurrently). Communication refers to the mechanism by which threads exchange information. In imperative programming, there are two communication mechanisms between threads: shared memory and message passing.
In the shared-memory concurrency model, threads share the program's common state and communicate implicitly by writing and reading that state in memory. In the message-passing concurrency model, there is no common state between threads, and threads must communicate explicitly by sending messages.
Synchronization is the mechanism a program uses to control the relative order in which operations occur between different threads. In the shared-memory concurrency model, synchronization is explicit: the programmer must explicitly specify that a method or piece of code is to be executed exclusively between threads. In the message-passing concurrency model, synchronization is implicit, because a message must be sent before it can be received.
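To make the two models concrete, here is a minimal Java sketch (class and method names are illustrative, not from the original article): the shared-memory style uses a field guarded by explicit `synchronized` blocks, while the message-passing style uses a `BlockingQueue`, where synchronization comes for free because `take()` cannot return before `put()` happens.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CommunicationStyles {

    // Shared memory: threads communicate implicitly through a common field;
    // synchronization must be explicit (here, the synchronized keyword).
    static class SharedCounter {
        private int count = 0;
        synchronized void increment() { count++; }
        synchronized int get() { return count; }
    }

    static int sharedMemoryDemo() throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Runnable task = () -> { for (int i = 0; i < 1000; i++) counter.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter.get(); // 2000: explicit locking made the updates exclusive
    }

    // Message passing: threads communicate explicitly by sending a message;
    // synchronization is implicit (take() cannot complete before put()).
    static int messagePassingDemo() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(1);
        new Thread(() -> {
            try { queue.put(42); } catch (InterruptedException ignored) { }
        }).start();
        return queue.take(); // blocks until the message has been sent
    }
}
```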
Abstraction of Java memory model
In Java, all instance fields, static fields, and array elements are stored in heap memory, which is shared between threads (this article uses the term "shared variables" to refer to all three). Local variables, method parameters (what the Java specification calls formal method parameters), and exception-handler parameters are not shared between threads; they have no memory visibility issues and are not affected by the memory model.
Communication between Java threads is governed by the Java Memory Model (JMM for short in this article), which determines when one thread's writes to shared variables become visible to another thread. From an abstract point of view, the JMM defines a relationship between threads and main memory: shared variables are stored in main memory, and each thread has a private local memory that holds that thread's copies of the shared variables it reads and writes. Local memory is an abstraction of the JMM and does not physically exist; it covers caches, write buffers, registers, and other hardware and compiler optimizations. An abstract diagram of the Java memory model follows:
From the above diagram, if thread A and thread B want to communicate, they must go through the following two steps:
First, thread A flushes updated shared variables from local memory A to main memory.
Thread B then goes to main memory to read shared variables that thread A has updated before.
These two steps are illustrated schematically below:
As shown above, local memories A and B have copies of the shared variable x in main memory. Assume that initially, the x values in all three memories are 0. Thread A temporarily stores the updated value of x (assumed to be 1) in its own local memory A while executing. When thread A and thread B need to communicate, thread A first flushes the modified x value from its local memory to main memory, where the x value becomes 1. Thread B then goes to main memory to read thread A's updated x value, and thread B's local memory x value also becomes 1.
Taken as a whole, these two steps are essentially Thread A sending messages to Thread B, and this communication must pass through main memory. JMM provides memory visibility guarantees for Java programmers by controlling the interaction between main memory and each thread's local memory.
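The two-step exchange above can be sketched in Java. This is an illustrative example, not from the original article: the `volatile` modifier makes the writer's update get flushed to main memory and forces the reader to fetch it from main memory (and `join()` itself also guarantees visibility here, so the result is deterministic).

```java
public class MainMemoryCommunication {
    // volatile mirrors the two steps above: thread A's write to x is flushed
    // to main memory, and the reader's load goes to main memory.
    static volatile int x = 0;

    static int communicate() throws InterruptedException {
        Thread a = new Thread(() -> x = 1); // thread A updates x (step 1: flush)
        a.start();
        a.join();  // wait for A; its write to x is now in main memory
        return x;  // the caller plays thread B (step 2: read from main memory)
    }
}
```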
Reordering
Compilers and processors often reorder instructions to improve performance when executing programs. There are three types of reordering:
1. Compiler-optimized reordering. The compiler can rearrange the execution order of statements without changing the semantics of a single-threaded program.
2. Instruction-level parallel reordering. Modern processors use instruction-level parallelism (ILP) to overlap the execution of multiple instructions. If there are no data dependencies, the processor can change the order in which the machine instructions corresponding to the statements execute.
3. Memory-system reordering. Because the processor uses caches and read/write buffers, load and store operations can appear to execute out of order.
From Java source code to the instruction sequence that finally executes, these three kinds of reordering occur in turn:
Reordering 1 above is compiler reordering; 2 and 3 are processor reordering. Both kinds can cause memory visibility problems in multithreaded programs. For compilers, the JMM's compiler reordering rules prohibit certain types of compiler reordering (not all compiler reordering is prohibited). For processors, the JMM's processor reordering rules require the Java compiler to insert certain types of memory barriers (Intel calls them memory fences) when generating instruction sequences, which prohibit certain types of processor reordering (again, not all of it).
JMM is a language-level memory model that ensures consistent memory visibility guarantees for programmers across compilers and processor platforms by prohibiting certain types of compiler reordering and processor reordering.
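A small Java sketch shows why reordering is harmless in one thread but dangerous across threads (the class is illustrative; `writer`/`reader` are hypothetical names):

```java
public class ReorderExample {
    static int a = 0;
    static boolean flag = false;

    // Statements (1) and (2) have no data dependency, so the compiler or
    // processor may reorder them without changing single-threaded semantics.
    static void writer() {
        a = 1;       // (1)
        flag = true; // (2)
    }

    // If another thread ran reader() concurrently and (1)/(2) were reordered,
    // it could observe flag == true while a is still 0: a visibility problem.
    static int reader() {
        if (flag) {
            return a * a;
        }
        return -1;
    }
}
```

Executed in a single thread, `writer()` followed by `reader()` always yields 1, which is exactly what "without changing the semantics of a single-threaded program" means.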
Processor Reordering and Memory Barrier Instructions
Modern processors use write buffers to temporarily store data written to memory. The write buffer keeps the instruction pipeline running and avoids delays caused by the processor stalling waiting to write data to memory. At the same time, memory bus usage can be reduced by flushing write buffers in a batch manner and combining multiple writes to the same memory address in the write buffer. Despite all the benefits of write buffers, the write buffer on each processor is visible only to the processor on which it resides. This feature has an important effect on the order in which memory operations are executed: the order in which the processor reads/writes memory may not coincide with the order in which the memory reads/writes actually occur! To illustrate, consider the following example:
Processor A        Processor B
a = 1;  // A1      b = 2;  // B1
x = b;  // A2      y = a;  // B2

Initial state: a = b = 0
The processors may allow the result: x = y = 0
Assuming that processor A and processor B perform memory accesses in parallel in program order, it may end up with x = y = 0. The specific reasons are shown in the following figure:
Here, processor A and processor B can simultaneously write a shared variable to their own write buffer (A1, B1), then read another shared variable from memory (A2, B2), and finally flush dirty data stored in their own write buffer to memory (A3, B3). When executed in this order, the program can obtain x = y = 0.
In the order in which the memory operations actually occur, write A1 is not actually performed until processor A executes A3 to flush its write buffer. Although processor A performs memory operations in the order A1->A2, memory operations actually occur in the order A2->A1. At this point, processor A's memory operations are reordered (processor B is the same as processor A, so I won't go into detail here).
The key here is that since the write buffer is visible only to its own processor, it causes the processor to perform memory operations in a sequence that may not correspond to the actual execution sequence of memory operations. Because modern processors use write buffers, modern processors allow reordering of write-read operations.
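The litmus test above can be expressed in Java. This is an illustrative sketch (not from the original article); a single run cannot prove that x = y = 0 occurs, but because the fields are deliberately not volatile and not synchronized, that outcome is permitted by both the hardware and the JMM:

```java
public class StoreBufferLitmus {
    static int a, b;  // shared variables, intentionally NOT volatile
    static int x, y;  // the value each "processor" reads

    // One trial: A1;A2 on one thread, B1;B2 on another, as in the table above.
    static int[] runOnce() throws InterruptedException {
        a = 0; b = 0; x = -1; y = -1;
        Thread pa = new Thread(() -> { a = 1; x = b; }); // A1; A2
        Thread pb = new Thread(() -> { b = 2; y = a; }); // B1; B2
        pa.start(); pb.start();
        pa.join(); pb.join();
        // Because each write may sit in a write buffer while the read goes to
        // memory, x == 0 && y == 0 is a legal outcome of this trial.
        return new int[] { x, y };
    }
}
```

Any single trial yields x in {0, 2} and y in {0, 1}; only repeated trials on a weakly ordered machine are likely to actually exhibit x = y = 0.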
The following is a list of the types of reordering allowed by common processors:
Processor     Load-Load  Load-Store  Store-Store  Store-Load  Data dependency
sparc-TSO     N          N           N            Y           N
x86           N          N           N            Y           N
ia64          Y          Y           Y            Y           N
PowerPC       Y          Y           Y            Y           N
An "N" in the cell above indicates that the processor does not allow two operations to be reordered, and a "Y" indicates that reordering is allowed.
From the above table we can see that common processors allow Store-Load reordering; common processors do not allow reordering of operations with data dependencies. Sparc-TSO and x86 have relatively strong processor memory models that allow only write-read reordering (because they both use write buffers).
Note 1: Sparc-TSO refers to the characteristics of sparc processors when running on the TSO(Total Store Order) memory model.
Note 2: x86 in the above table includes x64 and AMD64.
Note 3: Because the memory model of the ARM processor is very similar to the memory model of the PowerPC processor, this article will ignore it.
Note 4: Data dependency will be explained later.
To ensure memory visibility, the java compiler inserts memory barrier instructions at appropriate locations in the generated instruction sequence to prohibit certain types of processor reordering. JMM classifies memory barrier instructions into four categories:
LoadLoad Barriers (Load1; LoadLoad; Load2): ensures that Load1's data is loaded before Load2 and all subsequent load instructions.
StoreStore Barriers (Store1; StoreStore; Store2): ensures that Store1's data is visible to other processors (flushed to memory) before Store2 and all subsequent store instructions.
LoadStore Barriers (Load1; LoadStore; Store2): ensures that Load1's data is loaded before Store2 and all subsequent store instructions are flushed to memory.
StoreLoad Barriers (Store1; StoreLoad; Load2): ensures that Store1's data becomes visible to other processors (flushed to memory) before Load2 and all subsequent load instructions. A StoreLoad barrier makes all memory access instructions (stores and loads) before the barrier complete before any memory access instruction after the barrier executes.
StoreLoad Barriers is an "all-purpose" barrier that has the effects of the other three. Most modern multiprocessors support this barrier (the other types are not necessarily supported by every processor). Implementing this barrier can be expensive, because the processor typically has to flush all the data in its write buffer to memory (a full buffer flush).
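Since Java 9, these barrier categories are exposed at the language level as static fence methods on `java.lang.invoke.VarHandle`. The sketch below (an illustration, not the JMM's own implementation) maps them onto a publish pattern; field and method names are hypothetical:

```java
import java.lang.invoke.VarHandle;

public class FenceDemo {
    static int data = 0;
    static boolean ready = false;

    static void publish(int v) {
        data = v;
        VarHandle.storeStoreFence(); // StoreStore: data is flushed before ready
        ready = true;
        VarHandle.fullFence();       // StoreLoad: the "all-purpose" barrier
    }

    static int consume() {
        // In real concurrent use, ready should be volatile; this sketch only
        // illustrates where each fence sits relative to the loads and stores.
        while (!ready) { Thread.onSpinWait(); }
        VarHandle.loadLoadFence();   // LoadLoad: ready is loaded before data
        return data;
    }
}
```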
happens-before
Starting with JDK 5, Java uses the new JSR-133 memory model (unless otherwise noted, this article addresses the JSR-133 memory model). JSR-133 introduces the concept of happens-before, which is used to describe memory visibility between operations. If the result of one operation needs to be visible to another operation, then there must be a happens-before relationship between the two operations. The two operations can be within one thread or in different threads. The happens-before rules most relevant to programmers are as follows:
Program order rule: every operation in a thread happens-before any subsequent operation in that thread.
Monitor lock rule: unlocking a monitor lock happens-before any subsequent locking of the same monitor lock.
Volatile variable rule: a write to a volatile field happens-before any subsequent read of that volatile field.
Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
Note that a happens-before relationship between two operations does not mean the first operation must actually execute before the second! happens-before only requires that the result of the first operation be visible to the second operation, and that the first be ordered before the second. The definition of happens-before is subtle, and we will explain later why it is defined this way.
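Three of the rules above combine in the classic safe-publication pattern. In this illustrative sketch (names are hypothetical), the program order rule, the volatile variable rule, and transitivity together guarantee that the reader always sees 42:

```java
public class HappensBeforeDemo {
    static int payload = 0;                  // plain, non-volatile field
    static volatile boolean published = false;

    static void writer() {
        payload = 42;     // (1) program order rule: (1) happens-before (2)
        published = true; // (2) volatile write
    }

    static int reader() {
        while (!published) {  // (3) volatile read; rule: (2) happens-before (3)
            Thread.onSpinWait();
        }
        // (4) by transitivity, (1) happens-before (4): 42 is guaranteed visible
        return payload;
    }
}
```

Without `volatile` on `published`, the chain breaks and the reader could legally observe `payload == 0` even after seeing `published == true`.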
The relationship between happens-before and JMM is shown below:
As shown above, one happens-before rule usually corresponds to multiple compiler and processor reordering rules. The happens-before rules are simple and easy for Java programmers to understand, and they spare programmers from learning the complex reordering rules and their implementations in order to understand the memory visibility guarantees the JMM provides.
"How to understand Java memory model" content is introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!