What is the memory model in Java concurrent programming


This article mainly introduces what the memory model in Java concurrent programming is. Many people have doubts about this topic in their day-to-day work, so the editor has consulted a variety of materials and organized them into simple, practical explanations, in the hope of answering the question "what is the memory model in Java concurrent programming?" Now, please follow the editor and study!

1. What is the memory model of Java

The Java memory model, abbreviated JMM (Java Memory Model), is a specification that governs multithreaded concurrency. Every JVM implementation follows the JMM specification so that Java code behaves consistently across different virtual machines. JMM therefore concerns processors, caches, concurrency, and compilers. It addresses the unpredictable results that would otherwise be caused by multi-level CPU caches, processor optimizations, instruction reordering, and so on.

2. Why do you need the Java memory model

Without a common specification, the result of running a program would depend on the processor, and different processors follow very different rules: the same code might run correctly on processor A but produce a different result on processor B. To paper over these differences, the Java memory model specification was introduced. JMM is a standard: it guarantees that processing results are consistent across different processors, and it likewise guarantees consistency across different compilers, JVMs, and so on. This is part of what lets Java promise "write once, run anywhere".

3. The Java memory model's operation rules

1. Shared variables are stored in main memory.

2. Each thread has its own working memory, and a thread can only operate on its own working memory.

3. To operate on a shared variable, a thread must first read it from main memory into its working memory; after changing the value, it must synchronize it from working memory back to main memory.
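As a minimal sketch of these rules (the class name, field name, and sleep duration below are illustrative assumptions, not from the original article): each thread works on its own copy of the shared variable, so without extra synchronization one thread's update may not become visible to another.

public class WorkingMemoryDemo {
    // Shared variable; its master copy lives in main memory.
    private static boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // The worker reads 'stop' into its working memory and may keep
            // using that cached copy, so it can keep looping even after the
            // main thread has set stop = true.
            while (!stop) {
                // busy work
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(1000);
        // The main thread updates its own working copy and synchronizes it
        // back to main memory. Declaring 'stop' as volatile would force the
        // worker to re-read main memory and exit the loop promptly.
        stop = true;
    }
}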

4. Atomic operations specified by the Java memory model

The Java memory model's synchronization interaction protocol specifies eight atomic operations.

An atomic operation is an operation, or a series of operations, that cannot be interrupted.

Lock: acts on a variable in main memory; marks the variable as exclusively held by one thread.

Unlock: acts on a variable in main memory; releases the lock so that other threads get a chance to access the variable.

Read: acts on a variable in main memory; reads the variable's value from main memory so it can be loaded into working memory.

Load: acts on a variable in working memory; puts the value obtained by read into the variable's copy in working memory.

Use: acts on a variable in working memory; passes the copy's value to the thread's execution engine.

Assign: acts on a variable in working memory; assigns a value received from the execution engine back to the variable's copy.

Store: acts on a variable in working memory; transfers the value of the variable's copy to main memory.

Write: acts on a variable in main memory; writes the value transferred by store into the shared variable in main memory.
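To see how these operations fit together, here is a rough, conceptual sketch of how a single statement such as count = count + 1 on a shared field maps onto them (the mapping is illustrative; it cannot be observed directly from Java source code):

public class AtomicOpsSketch {
    private static int count = 0; // shared variable in main memory

    public static void increment() {
        // Conceptually, 'count = count + 1' involves:
        //   read   - read count's value from main memory
        //   load   - put that value into this thread's working-memory copy
        //   use    - hand the copy's value to the execution engine for the +1
        //   assign - write the result from the execution engine back to the copy
        //   store  - transfer the copy's value toward main memory
        //   write  - write the transferred value into the shared variable
        // lock/unlock only come into play when the access is synchronized.
        count = count + 1;
    }
}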

When performing the eight atomic operations above, the Java memory model's synchronization interaction protocol requires that the following rules be met:

Read and load, and store and write, must appear in pairs; neither operation of a pair may appear alone. In other words, a variable may not be read from main memory without being loaded, or stored without being written back, leaving the synchronization half done.

A thread is not allowed to discard its most recent assign operation; after a variable is changed in working memory, the change must be synchronized back to main memory.

A thread is not allowed to synchronize data from working memory back to main memory for no reason (that is, without having performed any assign operation).

A new variable can only be created ("born") in main memory; working memory may not use a variable that has not been initialized by load or assign.

Only one thread at a time may perform lock on a given variable, but the same thread may perform lock on it repeatedly. After multiple lock operations, the variable is only released after the same number of unlock operations.

Performing lock on a variable clears its value from working memory; before the execution engine can use the variable, a load or assign must be executed again to initialize its value.

A thread may not unlock a variable that has not previously been locked, nor may it unlock a variable that is locked by another thread.

Before performing unlock on a variable, the variable must first be synchronized back to main memory (by executing store and write).
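These lock and unlock rules are what make a synchronized block both mutually exclusive and memory-safe. A minimal sketch, with assumed class and field names:

public class LockRulesSketch {
    private static final Object MONITOR = new Object();
    private static int balance = 0; // shared variable

    public static void deposit(int amount) {
        synchronized (MONITOR) {
            // lock: only one thread may hold MONITOR at a time, and the working
            // copy of 'balance' is invalidated, so it is re-read from main
            // memory (read/load) before it is used.
            balance = balance + amount;
            // Before unlock, the changed value must be synchronized back to
            // main memory (store/write), so the next thread that locks MONITOR
            // sees the updated balance.
        }
        // unlock happens automatically when the synchronized block exits.
    }
}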

5. Java memory model synchronization protocol

The synchronization protocol of the Java memory model: to copy a variable from main memory into working memory, read and then load are performed in order; to synchronize a variable from working memory back to main memory, store and then write are performed in order. The model only requires that these operations be performed sequentially, not that they be performed consecutively.


6. The HB (happens-before) rules of the Java memory model

Concurrent programming has three important properties: atomicity, visibility, and ordering.

Atomicity: one or more operations are either all executed, without being interrupted by other operations, or not executed at all (see the counter sketch after these definitions).

Visibility: shared variables are visible to multiple threads; when one thread modifies a shared variable, other threads can see the new value immediately.

Ordering: the program executes in the order in which the code is written.
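For example, atomicity fails for a plain count++, which is really a read-modify-write sequence that two threads can interleave. A small sketch, with assumed names and iteration counts:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicityDemo {
    private static int plainCount = 0;
    private static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100000; i++) {
                plainCount++;                  // not atomic: read, add, write can interleave
                atomicCount.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // plainCount is often less than 200000 because some updates were lost;
        // atomicCount is always exactly 200000.
        System.out.println(plainCount + " / " + atomicCount.get());
    }
}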

Before discussing JMM's happens-before (HB) rules, let's first look at the ordering property of concurrent programming, which in turn requires talking about instruction reordering.

What is instruction reordering?

When we write a program, we expect the statements to execute in the order the code is written, and in most cases they do. In reality, however, the compiler, the JVM, or the CPU may adjust the execution order for optimization purposes; this adjustment is instruction reordering.

The benefit of reordering: faster processing.

The code sequence from the figure is, roughly, two assignments to the same variable with an unrelated statement in between. After instruction reordering, a = 100; and a = a + 100; are moved together and executed consecutively, which improves efficiency.
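A minimal sketch of this kind of reordering (the unrelated statement between the two assignments, and the variable names, are assumptions, since the original figure is not reproduced here):

public class ReorderExample {
    public static void main(String[] args) {
        int a;
        int b;

        // Order as written in the source code:
        a = 100;      // (1)
        b = 5;        // (2) unrelated statement
        a = a + 100;  // (3)

        // Because the single-threaded result is unchanged, the compiler or CPU
        // may reorder (2) and (3), effectively executing:
        //   a = 100; a = a + 100; b = 5;
        // Grouping the two operations on 'a' lets the processor finish both
        // updates while 'a' is still in a register or cache line.
        System.out.println(a + " " + b); // prints "200 5" either way
    }
}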

In the example above, reordering improves execution efficiency, but sometimes instruction reordering causes problems. In the next example, the intended code order is: initialize the content first, then set the flag to true; thread B calls the content's method only after it detects that the flag is true.

If the instructions are reordered so that the flag is set to true before initialization has completed, thread B may call the content's method while the content is still uninitialized.

Instruction reordering therefore has both advantages and disadvantages. The CPU, the compiler, and the memory system may all reorder instructions, so to prevent harmful reordering and guarantee ordering in concurrent programs it is sometimes necessary to use synchronized, volatile, or other locking mechanisms.
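A minimal sketch of the problem described above and of the volatile fix (the names content and initialized, and the busy-wait loop, are illustrative assumptions):

public class ReorderingHazard {
    private static Object content = null;
    // Marking the flag volatile forbids reordering the write to 'content'
    // past the write to 'initialized' and makes the flag's new value visible
    // to the reader thread; without volatile, both problems are possible.
    private static volatile boolean initialized = false;

    public static void main(String[] args) {
        // Thread A: initialize the content, then publish it by setting the flag.
        Thread writer = new Thread(() -> {
            content = new Object();   // (1) initialize
            initialized = true;       // (2) set the flag
        });

        // Thread B: wait for the flag, then use the content.
        Thread reader = new Thread(() -> {
            while (!initialized) {
                // busy-wait until thread A publishes
            }
            // If (1) and (2) could be reordered, or (1) were not yet visible,
            // 'content' could still be null here and this call would fail.
            System.out.println(content.hashCode());
        });

        writer.start();
        reader.start();
    }
}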

1. JMM defines the happens-before principle to guarantee the ordering of many common operations.

2. When the operations in our code are not covered by the happens-before principle, we need to use volatile or synchronized in our code to guarantee ordering.

JMM's HB rules

Program order rule: within a thread, each operation happens-before any subsequent operation in that same thread.

Monitor lock rule: an unlock of a monitor lock happens-before every subsequent lock of that same monitor lock.

Volatile variable rule: a write to a volatile field happens-before every subsequent read of that volatile field.

Thread start rule: a call to a thread object's start() method happens-before every action in the started thread.

Thread termination rule: every operation in a thread happens-before the detection that this thread has terminated. For example, if thread T1 successfully executes t2.join(), then all operations in T2 are visible to T1 once join() returns.

Thread interruption rule: a call to a thread's interrupt() method happens-before the interrupted thread's code detects the interruption.

Object finalization rule: the completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method.

Transitivity: if A happens-before B, and B happens-before C, then A happens-before C.
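As a small illustration of the thread start and thread termination rules (the class and field names are assumptions): writes made before start() are visible inside the new thread, and the new thread's writes are visible after join() returns.

public class HappensBeforeDemo {
    private static int input = 0;   // deliberately not volatile
    private static int result = 0;  // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        input = 42; // written by the main thread before start()

        Thread worker = new Thread(() -> {
            // Thread start rule: the call to start() happens-before every action
            // in this thread, so the write input = 42 is guaranteed to be visible.
            result = input * 2;
        });

        worker.start();
        worker.join();

        // Thread termination rule: every operation in 'worker' happens-before
        // join() returning, so the main thread reliably reads result = 84 here.
        System.out.println(result);
    }
}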

At this point, the study of "what is the memory model in Java concurrent programming" is over. I hope it has resolved your doubts. Combining theory with practice will help you learn better, so go and try it! If you want to continue learning more related knowledge, please keep following the site; the editor will keep bringing you more practical articles!
