Shulou (Shulou.com), SLTechnology News&Howtos, Development. Updated 2025-01-17.
This article introduces the working mode of the Java memory model (JMM). Many developers have questions about how the JMM works, so the sections below collect the relevant material into a simple, practical explanation. Let's get started!
Introducing the cache
In modern computers, the CPU executes instructions much faster than it can access main memory. Because the CPU and memory differ in speed by several orders of magnitude, modern computer systems insert a cache (Cache), whose read and write speed is as close as possible to the CPU's, as a buffer between memory and the processor. The data needed for a computation is copied into the cache, the CPU operates on that copy, and when the computation finishes the copy is synchronized back from the cache to memory. The processor therefore does not have to wait on slow memory reads and writes.
Reading from the cache
When the CPU wants to read a piece of data, it first looks in the L1 cache; if it is not found there, it looks in the L2 cache; if it is still not found, it looks in the L3 cache and finally in main memory.
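The lookup order described above can be sketched as a toy model in Java. The class, method names, and string "addresses" are purely illustrative (there is no such CPU API); the point is only the L1 -> L2 -> L3 -> memory search order and the cache fill on the way back.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the L1 -> L2 -> L3 -> main-memory lookup order.
// All names here are illustrative, not a real hardware interface.
public class CacheLookup {
    static Map<String, Integer> l1 = new HashMap<>();
    static Map<String, Integer> l2 = new HashMap<>();
    static Map<String, Integer> l3 = new HashMap<>();
    static Map<String, Integer> memory = new HashMap<>();

    static int read(String addr) {
        if (l1.containsKey(addr)) return l1.get(addr);                    // L1 hit
        if (l2.containsKey(addr)) return fillL1(addr, l2.get(addr));      // L2 hit
        if (l3.containsKey(addr)) return fillL1(addr, l3.get(addr));      // L3 hit
        return fillL1(addr, memory.get(addr));                            // miss: go to memory
    }

    static int fillL1(String addr, int value) {
        l1.put(addr, value); // fill the fastest cache so the next read hits L1
        return value;
    }

    public static void main(String[] args) {
        memory.put("0x10", 42);
        System.out.println(read("0x10")); // fetched from memory, cached in L1
        System.out.println(read("0x10")); // now an L1 hit
    }
}
```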
Composition of the SMP structure
When the computer runs, the first instruction is fetched from memory and decoded by the controller; data is then fetched from memory as the instruction requires and the logical operation is performed; finally the result is written back to memory at the given address. The second instruction is then fetched and processed the same way, and so on.
SMP logical structure (consistency between the CPU and the cache)
Single thread: the CPU core's cache is accessed by only one thread. The cache is exclusive to it and there are no access conflicts.
Single-core CPU, multiple threads: multiple threads in a process access the process's shared data concurrently. After the CPU loads a block of memory into the cache, different threads accessing the same physical address map to the same cache location. Since only one thread executes at a time, the cache does not become invalid even when a thread switch occurs, and there are no access conflicts.
Multi-core CPU, multiple threads: each core has at least an L1 cache. When multiple threads access shared memory in a process and each thread executes on a different core, each core keeps a copy of the shared memory in its own cache. Because the cores run in parallel, multiple threads may write to their own caches at the same time, and the data in the respective caches becomes inconsistent. This is the root cause of concurrency problems.
CPU multi-core cache architecture (the MESI implementation)
To solve this problem, each processor must follow certain protocols when accessing the cache and operate according to them when reading and writing. This is the MESI protocol (the cache coherence protocol).
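The state table below can be made concrete with a toy sketch of one cache line's MESI state as seen by one cache. Real coherence protocols are implemented in hardware and handle many more cases (for example, a write to an Invalid line first requires a read-for-ownership); this simplified model only mirrors the transitions named in the table.

```java
// A toy, simplified sketch of MESI state transitions for a single cache line,
// from the point of view of one cache. Hardware handles far more cases.
public class MesiLine {
    enum State { MODIFIED, EXCLUSIVE, SHARED, INVALID }

    State state = State.INVALID;

    // This cache reads the line.
    void localRead(boolean otherCachesHoldIt) {
        if (state == State.INVALID) {
            // Loaded fresh: Exclusive if we are the only holder, else Shared.
            state = otherCachesHoldIt ? State.SHARED : State.EXCLUSIVE;
        } // M, E, S: the read hits and the state is unchanged
    }

    // This cache writes the line: after invalidating other copies,
    // it holds the only valid (dirty) copy.
    void localWrite() { state = State.MODIFIED; }

    // Another cache reads the line: M and E must degrade to Shared
    // (a Modified line is first written back to main memory).
    void remoteRead() {
        if (state == State.MODIFIED || state == State.EXCLUSIVE) state = State.SHARED;
    }

    // Another cache writes (or claims exclusive ownership of) the line:
    // our copy becomes Invalid.
    void remoteWrite() { state = State.INVALID; }
}
```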
Memory access logic differs across hardware vendors and operating systems. As a result, code that runs well and is thread-safe in one system environment can exhibit all kinds of problems on another.
This is because processors differ in their optimizations and instruction reordering, so the same code can produce different final results after being optimized and reordered by different processors, which is unacceptable.
The JMM came into being to solve this. Its fundamental purpose is to guarantee data safety in concurrent environments by satisfying visibility, atomicity, and ordering.
State | Description | Listening task
M (Modified) | The cache line is valid; the data has been modified and is inconsistent with main memory; the data exists only in this cache. | The cache must listen for all attempts by other caches to read this line from main memory; such reads must be delayed until this cache writes the line back to main memory and changes its state to S (Shared).
E (Exclusive) | The cache line is valid; the data is consistent with main memory; the data exists only in this cache. | The cache must listen for reads of this line by other caches; as soon as one occurs, the line must become S (Shared).
S (Shared) | The cache line is valid; the data is consistent with main memory; the data exists in multiple caches. | The cache must listen for requests from other caches to invalidate or take exclusive ownership of this line, and must then invalidate it (I, Invalid).
I (Invalid) | The cache line is invalid. | (none)
Thread switching by the CPU causes atomicity problems (atomicity)
Atomicity: the property that one or more operations are treated as a whole and cannot be interrupted during execution.
To maximize CPU utilization, the CPU uses time slices and switches between threads. Switching threads in the middle of an operation leads to atomicity problems.
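The classic example is count++, which is really three steps: read the value, add one, write it back. A deterministic simulation of an unlucky thread switch between those steps, modeled here with plain local variables standing in for two threads' private copies, shows how an update gets lost:

```java
// A deterministic simulation of the lost-update problem.
// count++ is read, add, write; a thread switch between those steps
// can make two increments produce only one.
public class LostUpdate {
    static int demo() {
        int shared = 0;
        int t1 = shared;   // "thread 1" reads 0
        int t2 = shared;   // time slice expires; "thread 2" also reads 0
        shared = t1 + 1;   // thread 1 writes back 1
        shared = t2 + 1;   // thread 2 also writes back 1; thread 1's update is lost
        return shared;     // 1, although two increments ran
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1, not 2
    }
}
```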
Instruction reordering problem (ordering)
The essence of processes and threads is to increase the number of parallel or concurrent tasks to improve CPU efficiency, and the essence of the cache is to improve CPU utilization by reducing time spent waiting on I/O.
The original purpose of CPU instruction optimization is to improve instruction execution efficiency by adjusting the execution order of instructions or running them asynchronously.
To make full use of the computing units inside the processor, the processor may execute the input code out of order and then reassemble the results of out-of-order execution after the computation. The guiding principle is that reordering must not affect the single-threaded execution result (as-if-serial). The general logic of reordering is to start the more time-consuming instructions first and execute other instructions while waiting for them to complete. The just-in-time compiler of the Java virtual machine performs a similar instruction reordering optimization.
Compiler reordering (semantic optimization) -> processor-level reordering (instruction optimization) -> memory-system reordering (memory optimization)
Memory model
To guarantee correct use of shared memory (atomicity, ordering, visibility), the memory model defines a specification for multi-threaded read and write operations on shared memory. These rules govern reads and writes of main memory so that instructions execute correctly.
It solves the memory access problems caused by multi-level CPU caches, processor optimization, and instruction reordering, and guarantees consistency, atomicity, and ordering in concurrent scenarios.
Java Memory Model (JMM)
The JMM is a specification. It aims to solve the problems that arise when multiple threads communicate through shared memory: inconsistent working-memory data, reordering of code by the compiler, out-of-order execution by the processor, and thread switching by the CPU.
The JMM has been mature since the release of JSR-133 in Java 5.
The JMM conforms to the memory model specification, shielding the access differences of various hardware and operating systems and ensuring that Java programs access memory with the same observable effect on every platform (a standard specification, like the JVM).
The Java thread memory model states that:
All variables are stored in main memory, and each thread has its own working memory. A thread's working memory holds copies of the main-memory variables that the thread uses. All of a thread's operations on variables must be carried out in working memory; a thread cannot read or write main memory directly.
Different threads cannot directly access the variables in one another's working memory. Passing variable values between threads requires synchronizing data between each thread's working memory and main memory, and the JMM defines how and when this data synchronization is done.
Memory interaction operations
There are 8 kinds of memory interaction operations, and a virtual machine implementation must guarantee that each of them is atomic and indivisible:
Lock: acts on a main-memory variable; marks the variable as exclusively owned by one thread.
Unlock: acts on a main-memory variable; releases a variable that is in the locked state so that other threads can lock it.
Read: acts on a main-memory variable; transfers the variable's value from main memory to the thread's working memory for the subsequent load action to use.
Load: acts on a working-memory variable; puts the value obtained by the read operation into the working-memory copy of the variable.
Use: acts on a working-memory variable; passes the variable's value to the execution engine. The virtual machine performs this operation whenever it encounters an instruction that needs the variable's value.
Assign: acts on a working-memory variable; puts a value received from the execution engine into the working-memory copy of the variable.
Store: acts on a working-memory variable; transfers the variable's value from working memory to main memory for the subsequent write operation to use.
Write: acts on a main-memory variable; puts the value obtained by the store operation into the main-memory variable.
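The eight operations above can be traced with a toy model in which main memory and one thread's working memory are two maps. The method names mirror the JMM operations (read and load are fused, as are store and write, since the pairs always occur together); this is an illustration of the specification, not how the JVM is implemented.

```java
import java.util.HashMap;
import java.util.Map;

// A toy trace of the read/load -> use -> assign -> store/write steps
// for "x = x + 1". Main memory and working memory are modeled as maps;
// the method names mirror the JMM operations, nothing more.
public class JmmOps {
    static Map<String, Integer> mainMemory = new HashMap<>();
    static Map<String, Integer> workingMemory = new HashMap<>();

    // read + load: copy main memory -> working-memory copy
    static void readAndLoad(String var) {
        workingMemory.put(var, mainMemory.get(var));
    }
    // use: hand the working-memory value to the execution engine
    static int use(String var) { return workingMemory.get(var); }
    // assign: put a value from the execution engine into the working copy
    static void assign(String var, int v) { workingMemory.put(var, v); }
    // store + write: copy working memory -> main memory
    static void storeAndWrite(String var) {
        mainMemory.put(var, workingMemory.get(var));
    }

    static int increment(String var) {
        readAndLoad(var);
        assign(var, use(var) + 1);
        storeAndWrite(var);
        return mainMemory.get(var);
    }

    public static void main(String[] args) {
        mainMemory.put("x", 41);
        System.out.println(increment("x")); // 42
    }
}
```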
JMM has laid down the following rules for the use of these eight instructions:
Read and load, and store and write, must appear in pairs; neither operation of a pair may appear alone. If read is performed, load must follow; if store is performed, write must follow.
A thread is not allowed to discard its most recent assign operation: after a variable has changed in working memory, the change must be synchronized back to main memory. Conversely, a thread is not allowed to synchronize data from working memory back to main memory if no assign has occurred.
A new variable can only be born in main memory; working memory is not allowed to use an uninitialized variable directly. That is, before use or store is performed on a variable, load or assign must have been performed on it.
A variable may be locked by only one thread at a time, but the same thread may lock it repeatedly; after multiple lock operations, the variable is unlocked only after the same number of unlock operations.
If a thread locks a variable, the value of that variable is cleared from working memory, and before the execution engine can use the variable its value must be re-initialized by a load or assign operation.
A variable that has not been locked cannot be unlocked, nor may a thread unlock a variable that is locked by another thread.
Before unlocking a variable, the thread must first synchronize the variable back to main memory.
Features
Visibility
When multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value.
In Java, visibility can be achieved with the volatile keyword: a modified variable is synchronized back to main memory immediately (strictly speaking, very quickly), relying mainly on memory barriers.
The synchronized and final keywords in Java can also provide visibility.
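A minimal sketch of volatile visibility: a reader thread spins on a flag, and the main thread sets it. Because the flag is volatile, the write is published to the reader and the loop is guaranteed to terminate; without volatile, the JIT may hoist the read and the reader could spin forever.

```java
// Minimal volatile visibility demo: the reader thread spins on a flag
// until it observes the main thread's volatile write.
public class VolatileVisibility {
    static volatile boolean stop = false; // remove volatile and the reader may never stop

    static boolean demo() {
        Thread reader = new Thread(() -> {
            while (!stop) { } // re-reads the volatile field on every iteration
        });
        reader.start();
        stop = true;          // volatile write: guaranteed visible to the reader
        try { reader.join(2000); } catch (InterruptedException e) { }
        return !reader.isAlive(); // true: the reader saw the write and exited
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "reader stopped" : "reader still spinning");
    }
}
```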
Atomicity
It means that an operation is treated as a whole: its steps either all execute or none of them do.
To guarantee atomicity, Java provides two bytecode instructions, monitorenter and monitorexit; the corresponding keyword is synchronized.
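A short sketch of synchronized restoring atomicity to the increment from earlier: with the lock, two threads each performing 10,000 increments always total exactly 20,000.

```java
// synchronized (monitorenter/monitorexit) makes count++ atomic:
// two threads of 10,000 increments always total exactly 20,000.
public class SyncCounter {
    private int count = 0;

    synchronized void increment() { count++; } // only one thread in here at a time

    static int run() {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { }
        return c.count;
    }

    public static void main(String[] args) {
        System.out.println(run()); // 20000
    }
}
```

Without the synchronized modifier on increment(), the total would usually fall short of 20,000 because of the lost updates described above.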
Ordering
Ordering means that the program executes in the order the code is written.
Both synchronized and volatile can be used to guarantee ordering between threads, but they are implemented differently: the volatile keyword forbids instruction reordering, while the synchronized keyword ensures that only one thread operates at a time.
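The classic ordering example is double-checked locking. The volatile modifier forbids the reordering inside new Singleton() (allocate, construct, publish) that could otherwise let another thread observe a non-null but half-constructed instance:

```java
// Double-checked locking: volatile forbids the reordering that would let
// another thread see a non-null reference to a half-constructed object.
public class Singleton {
    private static volatile Singleton instance; // volatile is essential here

    private Singleton() { }

    static Singleton getInstance() {
        if (instance == null) {                  // first check, no locking cost
            synchronized (Singleton.class) {     // only one thread may construct
                if (instance == null) {          // second check under the lock
                    instance = new Singleton();  // volatile write publishes safely
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // true: one instance
    }
}
```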
At this point we can see that synchronized alone can satisfy all three properties, which is why it is used so often. But synchronized hurts performance; although the compiler provides many lock optimization techniques, overusing it is not recommended.
Happen-Before rule
To analyze whether a concurrent program is safe, we more often rely on the happens-before principles.
That is, when operation A happens-before operation B, the effects of operation A can be observed by B, including modifications to shared variables in memory, messages sent, methods called, and so on.
Program order rule (Program Order Rule): within a thread, the program executes in the order it is written, from top to bottom.
Monitor lock rule (Monitor Lock Rule): an unlock operation happens-before a subsequent lock operation on the same lock. Equivalently, when synchronized blocks synchronize on the same lock, everything done inside the lock is fully visible to threads that subsequently acquire that lock.
Volatile variable rule (Volatile Variable Rule): for the same volatile variable, a write operation happens-before subsequent read operations.
Thread start rule (Thread Start Rule): a call to a Thread object's start() method happens-before every action of the started thread.
Thread termination rule (Thread Termination Rule): all operations in a thread happen-before termination detection on that Thread object (such as Thread.join() returning or Thread.isAlive() returning false).
Thread interruption rule (Thread Interruption Rule): a call to interrupt() on a thread happens-before the interrupted thread detects the interruption (for example via Thread.interrupted()).
Finalizer rule (Finalizer Rule): the completion of an object's initialization (its constructor) happens-before the start of its finalize() method.
Transitivity (Transitivity): if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.
This concludes our study of the working mode of the Java memory model. I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it out!