
Analyzing the Implementation Principles of volatile and synchronized


This article walks through the implementation principles of volatile and synchronized in detail. Readers interested in Java concurrency can use it as a reference; I hope it is helpful.

Preface

Both volatile and synchronized play important roles in Java concurrent programming, and both serve the same basic purpose: ensuring that threads see a consistent view of shared variables. Compared with synchronized, volatile can be thought of as a lightweight synchronized: it involves no thread context switching or scheduling, so its performance is much better. Note, however, that volatile does not guarantee thread safety for compound operations, whereas synchronized does. Let's look at how volatile and synchronized are implemented under the hood.
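To make the compound-operation caveat concrete, here is a minimal sketch (the class and method names are my own illustration, not from the original text). volatile gives visibility, but i++ is a read-modify-write sequence that volatile does not make atomic; synchronized provides both atomicity and visibility.

public class CounterDemo {
    private volatile int volatileCount = 0;
    private int syncCount = 0;

    // NOT thread-safe: count++ is read, add, write back, and volatile
    // does not make those three steps atomic, so concurrent increments
    // can be lost.
    public void unsafeIncrement() {
        volatileCount++;
    }

    // Thread-safe: the intrinsic lock makes the read-modify-write atomic
    // and also guarantees visibility of the new value to other threads.
    public synchronized void safeIncrement() {
        syncCount++;
    }
}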

Volatile

Before introducing volatile, let's briefly review the Java memory model.

public int i = 1;

Suppose an object has a field i with an initial value of 1, and the object lives on the heap. We usually treat the heap as main memory; here, two different threads both access field i. On modern hardware, each thread runs on a processor with its own cache, which holds copies of recently used data so the processor does not have to go back to main memory every time, improving efficiency. Picture two threads, each with its own processor cache, sitting in front of main memory.

This caching improves efficiency, but it also introduces a problem: when a thread modifies the data, the copies held by different threads become inconsistent.

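A minimal sketch of how this inconsistency can bite in practice (class name and timings are illustrative). Without volatile, the reader thread may keep using its cached copy of the flag (or the JIT may hoist the read out of the loop) and never observe the main thread's write:

public class StaleReadDemo {
    static boolean flag = false; // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            // Without volatile, this loop may spin forever on some
            // JVMs/platforms: the reader never sees flag become true.
            while (!flag) { }
            System.out.println("reader finally saw flag = true");
        });
        reader.start();
        Thread.sleep(1000);
        flag = true; // this write may remain invisible to the reader
    }
}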

So we need to ensure that when thread 1 modifies a shared variable, any other thread that accesses it can perceive the change. This is exactly what volatile provides. Let's look at how volatile actually works.

volatile can only be applied to variables. For example:

public volatile int i = 1;

When the volatile variable i is assigned the value 2, thread 1 does two things:

Update the main memory.

Send a modification signal to the CPU bus.

Other processors snooping on the CPU bus receive this signal, and if a processor finds that the modified data is in its own cache, it invalidates that cache entry. The next time a thread on that processor accesses the variable, it sees that the cached data is invalid and fetches the fresh value from main memory. In this way, the shared variable i stays consistent across all threads.

So volatile can also be seen as a cheap way to communicate between threads.
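As a sketch of that cheap communication (the same hypothetical hand-off as the earlier StaleReadDemo, now fixed): declaring the flag volatile guarantees the reader eventually observes the writer's update.

public class VolatileFlagDemo {
    static volatile boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!flag) { /* spin until the writer publishes */ }
            System.out.println("reader observed the volatile write");
        });
        reader.start();
        Thread.sleep(500);
        flag = true; // volatile write: pushed to main memory, stale caches invalidated
    }
}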

Synchronized

synchronized has always been the veteran of multithreaded concurrent programming in Java, and many people call it a heavyweight lock. However, with the various optimizations Java SE 1.6 made to synchronized, it is not always that heavy in practice. This is described in detail below.

The basis for synchronized's synchronization is that every object in Java can be used as a lock. Which object actually serves as the lock depends on how synchronized is used, as the sketch after this list shows:

For a normal synchronized method, the lock is the current instance object.

For a static synchronized method, the lock is the Class object of the current class.

For a synchronized block, the lock is the object named in the synchronized parentheses.
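A small sketch of the three forms and the object each one locks (names are illustrative):

public class LockTargets {
    private final Object guard = new Object();

    // Normal synchronized method: the lock is this instance.
    public synchronized void instanceMethod() { /* ... */ }

    // Static synchronized method: the lock is LockTargets.class.
    public static synchronized void staticMethod() { /* ... */ }

    // Synchronized block: the lock is the object in the parentheses.
    public void blockMethod() {
        synchronized (guard) { /* ... */ }
    }
}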

When a thread tries to enter a synchronized block, it must first acquire the lock before it can execute the code, and it must release the lock when it exits. So how is the lock actually implemented?

The JVM specification states that synchronized synchronizes both methods and code blocks via Monitor objects, though the implementation details differ slightly. Code-block synchronization uses the monitorenter and monitorexit instructions, while method synchronization is implemented another way, whose details the JVM specification does not spell out. Method synchronization could, however, also be realized with these two instructions.

The compiler inserts a monitorenter instruction at the start of the synchronized block, and monitorexit instructions both at the normal end of the block and on the exception path; the JVM guarantees that every monitorenter has a matching monitorexit. Every object has a monitor associated with it, and when that monitor is held, the object is locked. When a thread executes monitorenter, it tries to take ownership of the object's monitor, that is, to acquire the object's lock.
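You can see these instructions yourself by compiling a small class and disassembling it with javap -c. A sketch follows; the bytecode shown in the comments is abridged and exact offsets vary:

public class MonitorDemo {
    private final Object lock = new Object();

    public void doWork() {
        synchronized (lock) {
            // critical section
        }
    }
}

// Abridged javap -c output for doWork():
//   getfield      lock
//   dup
//   astore_1
//   monitorenter        // acquire the object's monitor
//   ...                 // critical section
//   aload_1
//   monitorexit         // release on the normal path
//   goto          <end>
//   aload_1
//   monitorexit         // release on the exception path
//   athrow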

So where exactly is this lock we keep talking about? It turns out to be right in front of us. As mentioned above, the object itself is used as the lock, and the lock state is stored inside the object, specifically in the Mark Word structure of the object header. For more on the structure of object headers, see Lin Lin's article: talk about the memory layout of java objects.

Here's a brief description:

The object header is divided into two parts: Mark Word and Class Pointer (type pointer).

The Mark Word stores three things: the object's hashCode, GC information, and lock information; the Class Pointer stores a pointer to the object's class metadata. The object header is 8 bytes on a 32-bit JVM and 16 bytes on a 64-bit JVM (without compressed class pointers), with the Mark Word and Class Pointer each taking half the space.

On a 32-bit JVM, the unlocked Mark Word holds 25 bits of hashCode, 4 bits of GC age, 1 biased-lock flag bit, and 2 lock-status bits. What the Mark Word stores changes as the lock moves between the different lock levels.
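One way to inspect the Mark Word yourself is OpenJDK's JOL tool. A sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath:

import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Unlocked: the mark word shows the unlocked bit pattern.
        System.out.println(ClassLayout.parseInstance(o).toPrintable());

        synchronized (o) {
            // Locked: the mark word now encodes the lock state
            // (typically a lightweight/thin lock on modern JVMs).
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}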

It was mentioned above that Java SE 1.6 made various optimizations to synchronized. What do those optimizations mean? synchronized is not necessarily a heavyweight lock; by weight, from low to high, there are three kinds: biased lock, lightweight lock, and heavyweight lock, each with its own Mark Word contents. The application scenario and upgrade process of each lock are described below.

Biased lock: research found that in workloads with little multi-thread contention, a lock is usually acquired repeatedly by the same thread. In that case there is no need to perform a full locking operation on every acquisition, which would be costly, so the biased lock was introduced to avoid that cost. When a thread first acquires the lock, it uses a CAS to record a bias toward itself in the object header and in the lock record of its stack frame, and sets the biased-lock flag bit to 1. Afterward, when the same thread locks the same object again, it merely tests whether the object header still holds a bias toward it; no real locking operation is needed. When another thread then tries to acquire the lock, its CAS fails, and it must wait for the bias to be revoked before it can compete for the lock. Bias revocation can only happen at a global safepoint, which means the competing threads may wait a long time. Under heavy contention, therefore, the biased lock is a poor fit, and the lock should be upgraded to a lightweight lock.
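The following is a conceptual model of the biased-lock fast path, not HotSpot's actual code; the Mark Word's owner field is approximated with an AtomicReference:

import java.util.concurrent.atomic.AtomicReference;

public class BiasedLockSketch {
    // null = unbiased; otherwise the thread the lock is biased toward.
    private final AtomicReference<Thread> biasedOwner = new AtomicReference<>();

    public boolean tryFastPath() {
        Thread me = Thread.currentThread();
        // Re-entry by the bias owner: just a read, no CAS, no real locking.
        if (biasedOwner.get() == me) {
            return true;
        }
        // First acquisition: a single CAS installs the bias toward this thread.
        if (biasedOwner.compareAndSet(null, me)) {
            return true;
        }
        // CAS failed: another thread holds the bias. In the real JVM this
        // triggers bias revocation at a global safepoint and an upgrade
        // to a lightweight lock.
        return false;
    }
}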

Lightweight lock: before a thread executes a synchronized block, the JVM creates space for a lock record in the thread's current stack frame and copies the object header's Mark Word into that lock record. It then uses a CAS to replace the Mark Word with a pointer to the lock record. Success means the lock has been acquired; on failure, the thread spins briefly, waiting for the owning thread to release the lock. If spinning still fails to acquire the lock, there are two likely causes: (1) contention is very heavy, or (2) the synchronized code takes too long to execute. In either case continued spinning is not cost-effective: it neither acquires the lock quickly nor does anything useful with the CPU. A lightweight lock no longer fits this scenario, and it is better to upgrade to a heavyweight lock.
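A conceptual model of lightweight locking with bounded spinning follows. Again, this is a sketch, not the real HotSpot implementation; the displaced Mark Word in the stack frame is approximated with a caller-supplied lock-record object:

import java.util.concurrent.atomic.AtomicReference;

public class LightweightLockSketch {
    private static final int SPIN_LIMIT = 1 << 10;
    // Models the mark word pointing at the owner's lock record (null = unlocked).
    private final AtomicReference<Object> markWord = new AtomicReference<>();

    public boolean tryLock(Object lockRecord) {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            // CAS the mark word to point at this thread's lock record.
            if (markWord.compareAndSet(null, lockRecord)) {
                return true; // lock acquired
            }
            Thread.onSpinWait(); // hint to the CPU that we are spinning
        }
        // Spinning failed: in the real JVM the lock would now inflate
        // to a heavyweight (monitor-based) lock.
        return false;
    }

    public void unlock(Object lockRecord) {
        markWord.compareAndSet(lockRecord, null);
    }
}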

Heavyweight lock: the so-called heavyweight lock is the original blocking lock implemented by the JVM, also called the object monitor. At this point, the Mark Word of the lock object points to a monitor backed by an OS mutex; all threads compete for the lock, and the losers enter a blocked state (at the operating-system level) and wait in the lock object's wait queue to be woken up. An awakened thread then competes for the lock again.
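A small runnable sketch of this blocking behavior (timings are illustrative): while one thread holds the monitor, a contending thread sits in the BLOCKED state.

public class BlockedStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread holder = new Thread(() -> {
            synchronized (lock) {
                try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (lock) { /* contended acquisition */ }
        });
        holder.start();
        Thread.sleep(100);  // let holder grab the monitor first
        waiter.start();
        Thread.sleep(100);  // let waiter hit the contended monitor
        System.out.println(waiter.getState()); // typically BLOCKED
        holder.join();
        waiter.join();
    }
}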

So biased locks, lightweight locks, and heavyweight locks are each suited to a different level of contention.

That concludes this analysis of how volatile and synchronized are implemented. I hope the content above has been helpful; if you found the article worthwhile, please share it so more people can see it.
