
Example Analysis of the Java Memory Model


This article presents an example-driven analysis of the Java memory model. The content is easy to understand and clearly organized, and I hope it clears up your doubts. Now let me walk you through it.

1. Why is there a memory model?

To answer this question, we need to understand the traditional computer hardware memory architecture. All right, I'm going to start drawing.

1.1. Hardware memory architecture

(1) CPU

Anyone who has been in a server room knows that large servers are generally configured with multiple CPUs, and each CPU has multiple cores, which means that multiple CPUs or cores can work at the same time. If you start a multithreaded task in Java, each core may well run one of the threads, so at some point your task really is being executed in parallel.
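As a small aside, here is a minimal sketch of my own (class and variable names are made up for illustration) that starts one Java thread per available core; on a multi-core machine the JVM and the OS are free to schedule them onto different cores so they genuinely run at the same time.

public class ParallelDemo {
    public static void main(String[] args) throws InterruptedException {
        // One worker per core reported by the JVM; the OS decides the actual placement.
        int cores = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[cores];
        for (int i = 0; i < cores; i++) {
            final int id = i;
            workers[i] = new Thread(() ->
                    System.out.println("worker " + id + " running on " + Thread.currentThread().getName()));
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join(); // wait for every worker to finish
        }
    }
}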

(2) CPU Register

CPU Register means the CPU's registers. Registers are integrated inside the CPU, and operations performed on registers are several orders of magnitude faster than operations on main memory.

(3) CPU Cache Memory

CPU Cache Memory is the CPU cache; relative to the registers it can be regarded as a second-level store. Compared with reading from the hard disk, reading from memory is very fast, but it is still an order of magnitude slower than the CPU, so a multi-level cache is introduced between the CPU and main memory to provide some buffering.

(4) Main Memory

Main Memory is the main memory, which is much larger than the L1 and L2 caches.

Note: some high-end machines also have an L3 (level 3) cache.

1.2. Cache consistency issue

Because there is an order-of-magnitude gap between the speed of main memory and that of the CPU, the traditional computer memory architecture introduces a cache as a buffer between main memory and the processor. The CPU puts frequently used data in the cache and, once a computation is finished, synchronizes the result back to main memory.

The cache solves the mismatch between CPU and main memory speeds, but it introduces another new problem: cache consistency.

In a multi-CPU system (or a single-CPU, multi-core system), each CPU core has its own cache, and the cores share the same main memory (Main Memory). When the computing tasks of multiple cores involve the same area of main memory, each core reads the data into its own cache to operate on it, which may leave the cached copies inconsistent with one another.

Therefore, each CPU must follow a certain protocol when accessing its cache and read and write data according to that protocol, so as to keep the caches consistent. Such protocols include MSI, MESI, MOSI, and the Dragon protocol.

1.3. Processor optimization and instruction reordering

Adding a cache between the CPU and main memory improves performance, but it leads to cache consistency issues in multithreaded, concurrent scenarios. Is there any way to improve CPU efficiency further? The answer is processor optimization.

In order to make full use of the computing units inside the processor, the processor executes the input code out of order; this is called processor optimization.

Besides processor-level optimization of code, the compilers of many modern programming languages perform similar optimizations; for example, Java's just-in-time compiler (JIT) reorders instructions.

Processor optimization is also a type of reordering. To sum up, reordering can be divided into three types:

Compiler-optimized reordering. The compiler can rearrange the execution order of statements without changing the semantics of single-threaded programs.

Instruction-level parallel reordering. Modern processors use instruction-level parallelism to overlap the execution of multiple instructions. If there is no data dependency, the processor can change the order in which the machine instructions corresponding to the statements are executed.

Reordering of memory systems. Because the processor uses caches and read/write buffers, loads and stores can appear to be performed out of order.
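To make these kinds of reordering concrete, here is a classic sketch of my own (names are made up for illustration): with no synchronization, the JMM permits both r1 and r2 to end up 0 in the same run, because within each thread the write and the read touch different variables and may be reordered. A single run rarely shows it, but it is a legal outcome.

public class ReorderingDemo {
    static int x = 0, y = 0;
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Intuition says at least one of r1/r2 must be 1, yet (r1 == 0 && r2 == 0)
        // is allowed, because each thread's two statements have no data dependency
        // and the compiler, CPU, or memory system may reorder them.
        System.out.println("r1=" + r1 + ", r2=" + r2);
    }
}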

2. Problems in concurrent programming

That was a lot of hardware-related material, and some readers may be a little confused: after such a long detour, what does any of this have to do with the Java memory model? Don't worry, let's keep going.

Anyone familiar with Java concurrency will know these three problems well: the visibility problem, the atomicity problem, and the ordering problem. Looked at more deeply, they are caused by the cache consistency, processor optimization, and instruction reordering discussed above.

The cache consistency problem is essentially a visibility problem, processor optimization can cause atomicity problems, and instruction reordering causes ordering problems. See how it is all connected.
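Here is a minimal sketch of the visibility problem (my own illustration, with made-up names): the reader thread may keep using a stale copy of running and never notice the write from the main thread; declaring the field volatile is the usual fix.

public class VisibilityDemo {
    static boolean running = true; // try: static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait; with a plain (non-volatile) field this loop may never
                // see the update below and can spin forever on some JVMs
            }
            System.out.println("reader finally saw running == false");
        });
        reader.start();
        Thread.sleep(1000);
        running = false; // this write may stay invisible to the reader thread
        reader.join();
    }
}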

Problems have to be solved, so what can be done? The first idea is simple and crude: scrap the cache and let the CPU talk to main memory directly, which solves the visibility problem; then forbid processor optimization and instruction reordering, which solves the atomicity and ordering problems. But throwing away decades of hardware progress overnight is obviously not an option.

So the pioneers of the field came up with another idea: define a memory model on top of the physical machine to regulate reads and writes to memory. The memory model mainly solves concurrency problems in two ways: limiting processor optimization and using memory barriers.
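As a hedged sketch of what a memory barrier buys you in Java (my own example; the volatile keyword is implemented with barriers under the hood): the volatile write to ready cannot be reordered before the write to payload, and a thread that observes ready == true is guaranteed to see payload == 42 as well.

public class BarrierDemo {
    static int payload;             // plain shared data
    static volatile boolean ready;  // volatile read/write acts as the barrier

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            while (!ready) {
                // spin until the volatile flag becomes visible
            }
            System.out.println("payload = " + payload); // prints 42, never a stale value
        });
        consumer.start();
        payload = 42;  // (1) ordinary write
        ready = true;  // (2) volatile write: (1) is not allowed to move after (2)
        consumer.join();
    }
}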

3. Java memory model

Even with the same memory model specification, different languages may implement it somewhat differently. Next, we will focus on the implementation principles of the Java memory model.

3.1. The relationship between Java Runtime memory area and hardware memory

Anyone who has studied the JVM knows that the JVM's runtime memory is divided into areas such as the stack and the heap. These are logical concepts defined by the JVM; the traditional hardware memory architecture has no notion of a stack or a heap.

As the figure shows, data belonging to the stack and the heap can live in both the cache and main memory, so there is no direct correspondence between the two.

3.2. The relationship between Java thread and main memory

The Java memory model is a specification that defines a lot of things:

All variables are stored in main memory (Main Memory).

Each thread has a private local memory (Local Memory), in which it keeps its own copies of the shared variables it reads and writes.

All of a thread's operations on variables must be performed in its local memory; a thread cannot read from or write to main memory directly.

Different threads cannot directly access the variables in one another's local memory.

It was so boring to read the text that I drew another picture:

3.3. Inter-thread communication

Suppose two threads both operate on a shared variable whose initial value is 1, and each thread increments it by 1, so the expected final value is 3. Under the JMM specification, this involves a series of operations.
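To make the risk concrete, here is a sketch of my own that scales the example up to many increments per thread (names and counts are made up for illustration): a plain count++ is a read-modify-write and is not atomic, so updates can be lost, while an AtomicInteger (or a synchronized block) keeps every increment.

import java.util.concurrent.atomic.AtomicInteger;

public class IncrementDemo {
    static int count = 1;                                  // plain shared variable
    static AtomicInteger safeCount = new AtomicInteger(1); // atomic alternative

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++;                     // not atomic: read, add 1, write back
                safeCount.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("plain count:  " + count);           // often less than 20001
        System.out.println("atomic count: " + safeCount.get()); // always 20001
    }
}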

In order to better control the interaction between main memory and local memory, the Java memory model defines eight operations:

Lock: locks a variable. Acts on a main-memory variable, marking it as exclusively owned by one thread.

Unlock: unlocks a variable. Acts on a main-memory variable, releasing a variable that is in the locked state so that other threads can lock it.

Read: reads a variable. Acts on a main-memory variable, transferring its value from main memory into the thread's working memory for the subsequent load operation to use.

Load: loads a variable. Acts on a working-memory variable, putting the value obtained by the read operation from main memory into the copy of the variable in working memory.

Use: uses a variable. Acts on a working-memory variable, passing its value in working memory to the execution engine; this happens whenever the virtual machine encounters a bytecode instruction that needs the variable's value.

Assign: assigns to a variable. Acts on a working-memory variable, assigning a value received from the execution engine to the variable in working memory; this happens whenever the virtual machine encounters a bytecode instruction that assigns to the variable.

Store: stores a variable. Acts on a working-memory variable, transferring its value from working memory to main memory for the subsequent write operation to use.

Write: writes a variable. Acts on a main-memory variable, putting the value obtained by the store operation from working memory into the variable in main memory.

Note: working memory means local memory.
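Purely as an illustration of this vocabulary (my own conceptual sketch, not literal JVM behavior), the comments below map one increment of a shared field onto the eight operations:

public class EightOperationsDemo {
    static int shared = 1;

    static synchronized void increment() {
        // lock:          entering the synchronized method marks `shared` as
        //                exclusively used by this thread
        shared = shared + 1;
        // read + load:   the value of `shared` is copied from main memory into
        //                this thread's working memory
        // use:           the working copy is handed to the execution engine for the + 1
        // assign:        the computed result is assigned back to the working copy
        // store + write: the working copy is transferred back to `shared` in main memory
        // unlock:        leaving the synchronized method releases the monitor
    }

    public static void main(String[] args) {
        increment();
        System.out.println(shared); // prints 2
    }
}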

4. A summary with an attitude

Because of the large speed gap between the CPU and main memory, the traditional hardware memory architecture introduces a multi-level cache as a buffer between them to improve overall performance. This solves the speed mismatch, but it also brings the cache consistency problem.

Data lives in both the cache and main memory, and without rules this is bound to cause trouble, so a memory model is abstracted over the traditional machine.

On top of the memory model, the Java language introduces the JMM specification, which aims to solve the problems caused by inconsistent local-memory copies, compiler reordering of code instructions, and out-of-order execution by the processor when multiple threads communicate through shared memory.

In order to control the interaction between working memory and main memory more precisely, the JMM also defines eight operations: lock, unlock, read, load, use, assign, store, and write.

That is all for "Example Analysis of the Java Memory Model". Thank you for reading! I hope the content has helped you build a solid understanding.
