
What is the principle of Java memory model?


This article introduces the principles of the Java memory model (JMM) through concrete examples, keeping the explanations simple and practical. I hope it helps you understand how the JMM works.

Internals

The JVM defines the Java memory model (JMM) to shield the memory access differences between various hardware platforms and operating systems, so that Java programs can achieve consistent memory access behavior on all platforms.

The main goal of the JMM is to define the access rules for the variables in a program, that is, the low-level details of how the virtual machine stores variables into and reads variables out of memory. The variables here are not the same as variables in Java source code: they include instance fields, static fields, and the elements that make up arrays, but not local variables or method parameters, because the latter are private to a thread and are never shared, so they cannot be involved in data races. To achieve better execution performance, the Java memory model does not forbid the execution engine from using processor-specific registers or caches to interact with main memory, nor does it forbid the compiler from performing optimizations such as reordering code.
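As a rough illustration (the class and member names below are made up for this sketch), the following shows which kinds of variables fall under the JMM's rules and which remain thread-private:

public class JmmVariables {

    static int staticField;          // shared: covered by the JMM
    int instanceField;               // shared: covered by the JMM
    int[] elements = new int[8];     // array elements are shared too

    void compute(int parameter) {    // method parameter: thread-private
        int local = parameter + 1;   // local variable: thread-private
        instanceField = local;       // writing a shared field: JMM rules apply
        elements[0] = local;         // writing an array element: JMM rules apply
        staticField = local;         // writing a static field: JMM rules apply
    }
}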

The JMM is built around how atomicity, visibility, and ordering are handled during concurrent execution.

The JMM is defined in terms of operations, including reads and writes of variables, locking and unlocking of monitors, and starting and joining of threads.

Memory model structure

The Java memory model divides Java virtual machine memory into thread stacks and the heap.

Thread stack

Each thread running in the Java virtual machine has its own thread stack. The thread stack holds information about which methods the thread has called and where in each method it currently is. A thread can only access its own thread stack, so local variables created by a thread are visible only to that thread and invisible to all others. Even if two threads execute exactly the same code, each thread creates the local variables of that code in its own thread stack. Thus, each thread has its own version of every local variable.

All local variables of primitive type are stored on the thread stack and are therefore invisible to other threads. One thread may pass a copy of a primitive variable to another thread, but it cannot share the primitive variable itself.
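A minimal sketch of this thread-privacy (the names are illustrative): each thread that runs the task gets its own copy of the local variable, and the primitive captured from the calling thread is passed as a copy rather than shared.

public class LocalCopies {

    public static void main(String[] args) {
        int seed = 42; // a primitive local variable in main's stack frame

        Runnable task = () -> {
            // 'seed' is captured by value: each thread works on a copy,
            // and 'local' lives only in that thread's own stack.
            int local = seed;
            local++;
            System.out.println(Thread.currentThread().getName() + ": local = " + local);
        };

        new Thread(task, "thread-1").start();
        new Thread(task, "thread-2").start();
        // Both threads print 43; neither can change main's 'seed'.
    }
}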

Heap

The heap contains all the objects created in a Java program, no matter which thread created them. This includes the object versions of the primitive types (for example, Byte, Integer, and Long). Whether an object is created and assigned to a local variable or used as a member variable of another object, the object itself lives on the heap.

A local variable may be of primitive type, in which case it is always on the thread stack.

A local variable may also be a reference to an object. In this case, the reference (the local variable) is stored on the thread stack, but the object itself is stored on the heap.

An object may contain methods, which may contain local variables. These local variables are still stored on the thread stack, even if the objects to which these methods belong are stored on the heap.

The member variables of an object are stored on the heap together with the object itself, whether the member variable is of primitive type or of reference type.

Static member variables are stored on the heap along with the class definition.

An object stored on the heap can be accessed by all threads that hold references to this object. When a thread can access an object, it can also access the object's member variables. If two threads call the same method on the same object at the same time, they will both access the object's member variables, but each thread has a private copy of the local variable.
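A small sketch of that situation, with illustrative names: two threads call the same method on the same object, so they share the member variable on the heap, while each thread has its own private copy of the local variable. The member update is deliberately left unsynchronized here; the sections below explain why that is a problem.

public class SharedObject {

    private int memberVariable; // lives on the heap, visible to both threads

    public void touch(String caller) {
        int localCounter = 0;   // each calling thread gets its own copy on its stack
        localCounter++;
        memberVariable++;       // both threads update the same heap field (unsynchronized here)
        System.out.println(caller + ": local=" + localCounter + ", member=" + memberVariable);
    }

    public static void main(String[] args) {
        SharedObject shared = new SharedObject(); // one object on the heap
        new Thread(() -> shared.touch("thread-A")).start();
        new Thread(() -> shared.touch("thread-B")).start();
    }
}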

Hardware Memory Architecture

The modern hardware memory architecture differs somewhat from the Java memory model, and it is important to understand both the hardware architecture and how the Java memory model works on top of it. This section describes the general hardware memory architecture, and the following section describes how the Java memory model cooperates with it.

A modern computer usually has two or more CPUs, and some of those CPUs have multiple cores as well. On such a machine it is possible to run several threads truly simultaneously: each CPU can run one thread at any given time, so if your Java program is multithreaded, one thread per CPU may be executing at the same moment.
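As a small, hedged illustration, the sketch below asks the JVM how many hardware threads the machine reports and starts one Java thread per reported processor; with multiple CPUs or cores, these threads can genuinely run at the same time.

public class OneThreadPerCore {

    public static void main(String[] args) {
        // Ask the JVM how many hardware threads are available.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("available processors: " + cores);

        // Start one thread per reported core; with enough CPUs/cores,
        // these threads can execute simultaneously.
        for (int i = 0; i < cores; i++) {
            final int id = i;
            new Thread(() -> System.out.println("worker " + id + " running"), "worker-" + id).start();
        }
    }
}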

Each CPU contains a set of registers, which are essentially in-CPU memory. The CPU can perform operations on registers much faster than on variables in main memory, because it can access registers much faster than it can access main memory.

Each CPU may also have a cache layer; in fact, most modern CPUs have a cache of some size. A CPU can access its cache much faster than main memory, though usually not as fast as its internal registers. Some CPUs have multiple levels of cache, but those details are not important for understanding how the Java memory model interacts with memory; what matters is that CPUs can have a cache layer.

A computer also contains a main memory. All CPUs have access to main memory. Main memory is usually much larger than cache memory in the CPU.

Normally, when a CPU needs to read from main memory, it reads part of main memory into its CPU cache. It may even read part of that cache into its internal registers and perform operations there. When the CPU needs to write a result back to main memory, it flushes the value from its internal registers to the cache, and at some point flushes the value from the cache back to main memory.

When the CPU needs to write something to the cache layer, it does not have to flush the entire cache back to main memory. A CPU cache can have data written to part of its memory at a time, and can flush part of its memory at a time; it never reads or writes the whole cache in one go. Typically the cache is updated in small blocks of memory called cache lines: one or more cache lines may be read into the cache, and one or more cache lines may be flushed back to main memory.

Bridging between JMM and Hardware Memory Architecture

As mentioned above, the Java memory model and the hardware memory architecture are different. The hardware memory architecture does not distinguish between thread stacks and the heap: as far as the hardware is concerned, both the thread stacks and the heap sit in main memory, and parts of them may at times appear in CPU caches and internal CPU registers.

Specific problems can arise because objects and variables can be stored in these different memory areas of the computer. The two main ones are:

Visibility of one thread's updates to shared variables

Race conditions when reading, checking, and writing shared variables

Shared Object Visibility

If two or more threads share an object without using volatile declarations or synchronization properly, updates made to the shared object by one thread may be invisible to the other threads.

Imagine that the shared object is initially stored in main memory. A thread running on one CPU reads the shared object into its CPU cache and then modifies it. As long as the CPU cache has not been flushed back to main memory, the modified version of the object is invisible to threads running on other CPUs. The result can be that each thread ends up with its own copy of the shared object, each copy sitting in a different CPU cache.

For example, suppose the thread running on one CPU copies the shared object into its CPU cache and changes its count variable to 2. That modification is invisible to threads running on the other CPUs, because the modified count has not yet been flushed back to main memory.

To solve this problem you can use Java's volatile keyword. The volatile keyword guarantees that the variable is read directly from main memory and that, whenever it is modified, it is always written back to main memory.
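A minimal sketch of the volatile guarantee, using an illustrative stop flag: without volatile, the worker thread could keep reading a stale cached value of running and never observe the update; with volatile, the write is guaranteed to become visible.

public class VolatileFlag {

    // Without 'volatile', the worker thread might keep reading a stale
    // cached value of 'running' and never stop.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("worker saw running = false and stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;   // volatile write: guaranteed to become visible to the worker
        worker.join();
    }
}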

Race conditions

Race conditions can occur if two or more threads share an object and multiple threads update variables on the shared object.

Imagine that thread A reads the shared object's variable count into its CPU cache, and thread B does the same thing into a different CPU cache. Thread A now increments count by 1, and thread B does the same. count has now been incremented twice, once in each CPU cache.

If these increments had been performed sequentially, count would have been incremented twice and the original value + 2 would have been written back to main memory.

However, if the two increments are performed concurrently without proper synchronization, then no matter whether thread A or thread B writes its modified version of count back to main memory last, the value in main memory is only the original value + 1, even though count was incremented twice.
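The lost update can be reproduced with a small sketch (names and iteration counts are illustrative): two threads increment an unsynchronized shared counter, and the final value frequently comes out lower than expected because concurrent increments overwrite each other.

public class LostUpdate {

    private static int count = 0; // plain, unsynchronized shared field

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // read-modify-write: not atomic
            }
        };

        Thread a = new Thread(increment, "thread-A");
        Thread b = new Thread(increment, "thread-B");
        a.start();
        b.start();
        a.join();
        b.join();

        // Expected 200000, but concurrent increments can overwrite each other,
        // so the printed value is often smaller.
        System.out.println("count = " + count);
    }
}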

This problem can be solved using Java synchronization blocks. A synchronization block ensures that only one thread can enter a critical section of code at a time. A synchronized block also ensures that all accessed variables in the block will be read from main memory, and that all updated variables will be flushed back into main memory when the thread exits the synchronized block, regardless of whether the variable is declared volatile or not.
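A minimal sketch of the synchronized fix, with illustrative names: the synchronized block makes the increment atomic and, on exit, flushes the updated value back to main memory, so the final count is always what you expect.

public class SynchronizedCounter {

    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        // Only one thread at a time can enter this block; on exit, the
        // updated 'count' is flushed back to main memory, and on entry
        // it is re-read from main memory.
        synchronized (lock) {
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter counter = new SynchronizedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println("count = " + counter.get()); // always 200000
    }
}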

Happens-Before

The JMM defines a partial ordering over all the operations in a program, called happens-before. Its main rules are the following:

Program order rule: within a single thread, if operation A comes before operation B in program order, then A happens-before B.

Monitor lock rule: an unlock of a monitor lock happens-before every subsequent lock of the same monitor lock.

Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable (see the sketch after this list).

Thread start rule: a call to Thread.start() on a thread happens-before every action performed in the started thread.

Thread termination rule: every operation in a thread happens-before another thread detects that it has terminated, whether by returning from Thread.join() or by Thread.isAlive() returning false.

Interruption rule: a call to interrupt() on a thread happens-before the interrupted thread detects the interruption, whether through a thrown InterruptedException or through calls to isInterrupted() or interrupted().

Finalizer rule: the completion of an object's constructor happens-before the start of that object's finalizer.

Transitivity: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.
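A small sketch that combines the program order rule, the volatile variable rule, and transitivity (names are illustrative): the writer's assignment to data happens-before its volatile write to ready, which happens-before the reader's volatile read of ready, so the reader is guaranteed to see data == 42.

public class HappensBeforePublication {

    private static int data = 0;
    private static volatile boolean ready = false;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            data = 42;        // (1) program order: happens-before (2)
            ready = true;     // (2) volatile write
        });

        Thread reader = new Thread(() -> {
            while (!ready) {  // (3) volatile read: (2) happens-before (3)
                Thread.yield();
            }
            // By transitivity (1) -> (2) -> (3), the reader is guaranteed to see data == 42.
            System.out.println("data = " + data);
        });

        writer.start();
        reader.start();
    }
}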

About "Java memory model principle is what" content introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the industry information channel. Xiaobian will update different knowledge points for you every day.
