An Analysis of synchronized Lock Upgrades in Java

2025-01-17 Update From: SLTechnology News&Howtos shulou NAV: SLTechnology News&Howtos > Development >


This article analyzes, with examples, how the synchronized lock in Java is upgraded. Most readers are probably not familiar with the details, so it is shared here for reference; I hope you learn something from it.

Upgrading the synchronized lock (biased lock, lightweight lock, heavyweight lock): prerequisite points about Java synchronization locks

1. If locks are needed in code, the synchronized keyword can be used to lock methods and code blocks.

2. The synchronized lock is an implicit lock built into the JVM (locking and release happen implicitly, in contrast to Lock).

3. The implementation of synchronized depends on the operating system: acquiring and releasing the lock involves system calls, which cause switches between user mode and kernel mode.

4. Before JDK 1.5, only synchronized was available; JDK 1.5 introduced the Lock interface (implemented in Java, with explicit lock and unlock calls and better performance under contention).

5. JDK 1.6 introduced the concepts of biased, lightweight, and heavyweight locks for synchronized (in effect optimizing synchronized performance to reduce, as far as possible, the context switches caused by lock contention).

6. Whether synchronized or Lock is used, thread context switches cannot be avoided entirely.

7. One performance advantage of Lock over synchronized is that acquiring a Lock does not itself cause a user-mode/kernel-mode switch, while synchronized does (see point 3). Blocked threads, however, still cause context switches (see point 6).

8. Blocking and waking Java threads rely on operating system calls, which cause user-mode/kernel-mode switches.

9. The user-mode/kernel-mode switches mentioned above occur within the context of a process, not of an individual thread.
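To make points 1, 2, and 4 concrete, here is a minimal side-by-side sketch of the two locking styles; the class, field, and method names are invented for this illustration and do not come from the article:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockStyles {
    private final Object monitor = new Object();            // used by synchronized
    private final ReentrantLock lock = new ReentrantLock(); // explicit Lock (java.util.concurrent)
    private int count = 0;

    // Implicit lock: the JVM acquires and releases the monitor automatically.
    public void incWithSynchronized() {
        synchronized (monitor) {
            count++;
        }
    }

    // Explicit lock: the caller must release it, conventionally in finally.
    public void incWithLock() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public int getCount() {
        return count;
    }
}
```

Note the structural difference: with synchronized the release cannot be forgotten, while a Lock that is not released in a finally block leaks when an exception is thrown.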

This article focuses on the upgrade of synchronized locks.

The Java object header

Each Java object has an object header, which consists of a mark word (tag field) and a type pointer.

In a 64-bit virtual machine with compressed pointers disabled, the mark word occupies 64 bits and the type pointer occupies 64 bits, 16 bytes in total.

The lock state is encoded in the last two bits of the mark word: 00 means lightweight lock, 01 means unlocked or biased, and 10 means heavyweight lock. When the last two bits are 01, the third-to-last bit distinguishes the cases: 1 means biased locking is enabled for the class, 0 means it is disabled.

As shown in the following figure (image source: wiki).

The left column shows the states with biased locking enabled (box 1), the right column with it disabled (box 3). Both 1 and 3 are the initial, unlocked state. With biased locking enabled, lock escalation follows 1 -> 2 -> 4 -> 5; with it disabled, it follows 3 -> 4 -> 5.

I am using JDK 8; printing the JVM flags shows that biased locking is enabled by default. It can be disabled with -XX:-UseBiasedLocking.

There are several other parameters for bias locks:

Note the BiasedLockingStartupDelay parameter, which defaults to 4000 ms: for the first 4 s after the virtual machine starts, biased locking is not used (lightweight locks are used instead).

Biased lock

Biased locking targets the scenario where, most of the time, only one and the same thread requests the lock, with no multi-thread contention. In red box 2 of the object-header diagram there is a thread ID field: the first time a thread locks the object, the JVM writes the current thread's address into the thread ID bits via CAS and sets the last three bits to 101. The next time the same thread acquires the lock, it only needs to check that the last three bits are 101, that the thread ID matches the current thread, and that the epoch equals the epoch of the lock object's class (per the wiki, no CAS is performed again, an optimization because CAS operations are expensive on modern multiprocessors).

The performance gain from biased locking is that the user-mode/kernel-mode switch caused by the system call to acquire the lock is avoided: because the lock is always taken by the same thread, no system call is needed on every acquisition.

If the thread ID does not match the current thread when it acquires the lock (in the biased-but-unlocked state), the biased lock is revoked and re-biased to the current thread. If the number of revocations for a class reaches BiasedLockingBulkRebiasThreshold (default 20), biasing for that class is invalidated; the effect is a change of the epoch value: the class's epoch is incremented by 1, and lock objects created afterwards copy the class's epoch into the epoch bits shown in the figure. If the total number of revocations reaches BiasedLockingBulkRevokeThreshold (default 40), biased locking is disabled for the class, i.e. the right-hand column of the object-header diagram, and locking starts directly from the lightweight lock (the lock is upgraded).

Revoking a biased lock is a troublesome process: all threads must reach a safepoint (an STW pause occurs), and every thread's stack is traversed to check whether it holds the lock object, to avoid losing a lock, on top of the epoch handling.

If there is multithreaded competition, the bias lock will be upgraded to a lightweight lock.

Lightweight lock

Lightweight locks target the scenario where different threads request the lock at different times (threads execute alternately). Even when several threads do compete at the same moment, each holder keeps the lock only very briefly and releases it quickly.

When a thread locks and sees that the lock is not already heavyweight, it allocates a lock record in its own stack and copies the lock object's mark word into it. (The copy is needed because the object's mark word is about to be replaced by the address of that lock record, matching the "pointer to lock record" part of the object-header figure; thanks to memory alignment, the last two bits of the address are 00.) The thread then CASes the lock record's address into the lock object's mark word; if the CAS succeeds, the lock is acquired. If, on locking, the lock is not heavyweight but the last two bits are not 01 (i.e. it did not come from the biased or unlocked state), some thread already holds it. If that thread is the current thread (a reentrant acquisition), a lock record containing 0 is pushed: the lock records form a stack, so reentry simply pushes a 0. On release, the records are popped; the bottom record holds the lock object's original mark word, which is written back to the object header via CAS.

Note that if the CAS fails when acquiring the lock, the current thread spins a certain number of times; if it still cannot acquire the lock, the lock is upgraded to a heavyweight lock and the current thread blocks.
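The CAS-then-spin behaviour described above happens inside the JVM, but the pattern itself can be sketched in plain Java with AtomicReference. This is only an analogy, not HotSpot's implementation; all names are invented:

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy spin lock: CAS to acquire; on failure, spin a bounded number of times.
// A real JVM would "inflate" to a heavyweight lock and block after the spins.
public class ToySpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    // Returns true if the lock was acquired within maxSpins attempts.
    public boolean tryAcquire(int maxSpins) {
        Thread me = Thread.currentThread();
        for (int i = 0; i < maxSpins; i++) {
            if (owner.compareAndSet(null, me)) {
                return true;  // CAS succeeded: we own the lock
            }
            // busy-wait: this is where bounded spinning happens
        }
        return false;         // a caller would now fall back to blocking
    }

    public void release() {
        // only the owner may release
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Unlike the real lightweight lock, this sketch is not reentrant; it only illustrates the CAS-acquire and bounded-spin steps.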

Heavyweight lock

The heavyweight lock is what we usually mean by the synchronization lock: the basic Java lock implementation, which requires system calls to acquire and release the lock and therefore causes context switches.

About spin lock

Regarding the spin lock, I consulted the relevant material and found two main explanations:

1. It is the lightweight lock that, after losing the CAS race, does not immediately inflate to a heavyweight lock but spins a certain number of times trying to acquire the lock.

2. It is the heavyweight lock that, after a failed acquisition, does not block immediately but also spins a certain number of times (an adaptive, self-tuning algorithm is involved here).

To determine which explanation is correct, one has to read the JVM source code.

JVM flags for printing biased-lock statistics

As follows:

-XX:+UnlockDiagnosticVMOptions

-XX:+PrintBiasedLockingStatistics

I acquired the same lock in a loop in the main method; the code and the printed result are as follows:

    private static final Object lock = new Object(); // the lock object used below

    public static void main(String[] args) {
        int num = 0;
        for (int i = 0; i < 1_000_000_000; i++) {
            synchronized (lock) {
                num++;
            }
        }
    }

Synchronized principle analysis, part 1: the object header

First, we need to know the layout of the object in memory:

Objects live in heap memory and can be roughly divided into three parts: the object header, instance data, and padding bytes.

The object header mainly consists of the MarkWord and the Klass Point (type pointer). The Klass Point is the object's pointer to its class metadata; the virtual machine uses it to determine which class the object is an instance of. The MarkWord stores the object's own runtime data. For an array object the header occupies 3 word widths; for a non-array object, 2 word widths (on a 32-bit VM, 1 word = 4 bytes = 32 bits).

The instance data stores the object's fields, including fields inherited from parent classes, aligned to 4 bytes.

The padding bytes exist because the virtual machine requires the object size to be a multiple of 8 bytes; padding makes up the difference.

As the first part showed, whether Synchronized decorates a method or a code block, it synchronizes by holding a lock on an object. So where is the Synchronized lock stored? It is stored in the MarkWord of the lock object's header. So what exactly does the MarkWord in the object header look like, i.e. what does it store?

In a 32-bit virtual machine:

In a 64-bit virtual machine:

The biased lock and lightweight lock in the figure above were introduced when the lock mechanism was optimized in Java 6; the lock-upgrade section below explains them in detail. The synchronized keyword originally corresponds to the heavyweight lock, so next we explain how the heavyweight lock is implemented in HotSpot JVM.

Part 2: the implementation principle of Synchronized in the JVM

The lock flag bits corresponding to the heavyweight lock are 10, and the mark word then stores a pointer to the heavyweight monitor lock. In HotSpot, an object's monitor lock is implemented by the ObjectMonitor object (C++); its synchronization-related data structure is as follows:

    ObjectMonitor() {
        _count       = 0;     // number of times the lock has been acquired by threads
        _waiters     = 0;
        _recursions  = 0;     // reentry count of the lock
        _owner       = NULL;  // points to the thread holding this ObjectMonitor
        _WaitSet     = NULL;  // threads in the wait state are added to _WaitSet
        _WaitSetLock = 0;
        _EntryList   = NULL;  // threads blocked waiting for the lock are added to _EntryList
    }

These data structures alone do not reveal how the monitor lock works, so let's first look at the state transitions a thread goes through while acquiring the lock:

There are five states in the life cycle of a thread: new, running, waiting, blocked, and dead.

For a synchronized-decorated method (or code block):

When multiple threads access the method at the same time, they are first placed in the _EntryList queue, where they are in the blocked state.

When a thread acquires the instance object's monitor lock, it enters the running state and executes the method. At this point the ObjectMonitor's _owner points to the current thread, and _count is incremented by 1 to indicate that the object lock is held by a thread.

When a running thread calls the wait() method, it releases the monitor object and enters the waiting state: the ObjectMonitor's _owner becomes null, _count is decremented by 1, and the thread enters the _WaitSet queue. It stays there until some thread calls notify() to wake it, at which point it re-acquires the monitor and becomes the _owner again.

If the current thread finishes execution, it likewise releases the monitor object: the ObjectMonitor's _owner becomes null and _count is decremented by 1.
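The _EntryList/_WaitSet transitions above correspond to the familiar wait()/notify() pattern at the Java level. Here is a minimal sketch (the class and its names are invented for illustration):

```java
// One-slot hand-off buffer built on the monitor states described above.
public class OneSlotQueue {
    private Integer slot = null;

    // Producer: while the slot is full, wait() releases the monitor and
    // parks the thread in the monitor's _WaitSet.
    public synchronized void put(int value) throws InterruptedException {
        while (slot != null) {
            wait();
        }
        slot = value;
        notifyAll(); // woken threads recompete for the monitor via _EntryList
    }

    // Consumer: symmetric to put().
    public synchronized int take() throws InterruptedException {
        while (slot == null) {
            wait();
        }
        int value = slot;
        slot = null;
        notifyAll();
        return value;
    }
}
```

The `while` (rather than `if`) around wait() matters: a woken thread must recheck the condition, because it re-enters through _EntryList and another thread may have run first.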

So how do Synchronized-decorated code blocks and methods acquire the monitor object?

The JVM specification shows that both method synchronization and code-block synchronization are based on entering and exiting a monitor object, but their implementations differ considerably. The bytecode below was obtained by decompiling the class files with javap.

(1) Synchronized modification code block:

For a synchronized code block, the compiler inserts a monitorenter instruction at the start of the block that needs synchronization, and monitorexit instructions at the end of the block and wherever an exception may exit it; the JVM guarantees that monitorenter and monitorexit appear in pairs. Every object has a monitor associated with it, and while that monitor is held the object is locked.

For example, the synchronization code block is as follows:

    public class SyncCodeBlock {
        public int i;

        public void syncTask() {
            synchronized (this) {
                i++;
            }
        }
    }

The class bytecode file compiled by the synchronous code block is decompiled, and the result is as follows (only the decompilation of the method part is retained):

    public void syncTask();
        descriptor: ()V
        flags: ACC_PUBLIC
        Code:
          stack=3, locals=3, args_size=1
             0: aload_0
             1: dup
             2: astore_1
             3: monitorenter        // note: enter the synchronized block
             4: aload_0
             5: dup
             6: getfield      #2   // Field i:I
             9: iconst_1
            10: iadd
            11: putfield      #2   // Field i:I
            14: aload_1
            15: monitorexit        // note: exit the synchronized block (normal path)
            16: goto          24
            19: astore_2
            20: aload_1
            21: monitorexit        // note: exit the synchronized block (exception path)
            22: aload_2
            23: athrow
            24: return
          Exception table:
            // other bytecode omitted

You can see that a monitorenter instruction is inserted when entering the code block and monitorexit when leaving it. To guarantee that monitorexit executes both on the normal path (line 15) and when an exception jumps out of the block (line 21), two monitorexit instructions appear.

(2) Synchronized modification method:

Method synchronization is not implemented with monitorenter and monitorexit instructions. Instead, the method-invocation instructions implicitly check the ACC_SYNCHRONIZED flag in the method table (the method_info structure). If the flag is set, the executing thread first acquires the object's monitor, runs the method body if acquisition succeeds, and releases the monitor after execution. If another thread already holds the monitor, the current thread blocks.

The synchronization method code is as follows:

    public class SyncMethod {
        public int i;

        public synchronized void syncTask() {
            i++;
        }
    }

Decompile the compiled class bytecode of the synchronous method, and the result is as follows (only the decompilation of the method part is retained):

    public synchronized void syncTask();
        descriptor: ()V
        // ACC_PUBLIC means public; ACC_SYNCHRONIZED marks this as a synchronized method
        flags: ACC_PUBLIC, ACC_SYNCHRONIZED
        Code:
          stack=3, locals=1, args_size=1
             0: aload_0
             1: dup
             2: getfield      #2   // Field i:I
             5: iconst_1
             6: iadd
             7: putfield      #2   // Field i:I
            10: return
          LineNumberTable:
            line 12: 0
            line 13: 10

You can see that there are no monitorenter and monitorexit instructions at the beginning and end of the method; instead, the ACC_SYNCHRONIZED flag appears.

Part 3: lock optimization. 1. Lock upgrading

The four lock states, from lowest to highest level: unlocked, biased lock, lightweight lock, heavyweight lock.

(1) bias lock:

Why introduce biased locks?

Extensive research by the HotSpot authors found that most of the time there is no lock contention, and a lock is often acquired many times by the same thread; competing for the lock on every acquisition would therefore add unnecessary cost. Biased locking was introduced to reduce the cost of acquiring a lock.

Upgrade of biased locks:

When thread 1 enters the code block and acquires the lock object, the biased lock's threadID is recorded in the Java object header and in the stack frame. Because a biased lock is never actively released, when thread 1 acquires the lock again it only needs to compare its own threadID with the one in the object header. If they match (thread 1 still owns the bias), no CAS locking or unlocking is needed. If they do not match (another thread, say thread 2, is competing for the lock; the header still stores thread 1's threadID because biased locks are not actively released), the JVM checks whether the thread 1 recorded in the header is still alive. If not, the lock object is reset to the unlocked state and other threads (thread 2) can compete to bias it to themselves. If thread 1 is alive, its stack frames are examined immediately: if thread 1 still needs the lock object, thread 1 is paused, the biased lock is revoked, and the lock is upgraded to a lightweight lock; if thread 1 no longer uses the lock object, the lock is set back to the unlocked state and can be re-biased to a new thread.

Disabling biased locks:

Biased locking is enabled by default, but it generally only takes effect a few seconds after application startup. To remove this delay, use -XX:BiasedLockingStartupDelay=0.

To turn biased locking off entirely, use -XX:-UseBiasedLocking.

(2) lightweight lock

Why introduce lightweight locks?

Lightweight locks target situations where few threads compete for the lock object and each thread holds it only briefly. Blocking a thread requires the CPU to switch from user mode to kernel mode, which is expensive; if the lock would be released shortly after the thread blocked, the cost outweighs the benefit. So instead of blocking, the thread spins, waiting for the lock to be released.

When will a lightweight lock be upgraded to a heavyweight lock?

When thread 1 acquires a lightweight lock, it first copies the lock object's object-header MarkWord into the space in thread 1's stack frame reserved for lock records (the copy is called the DisplacedMarkWord), and then uses CAS to replace the contents of the object header with the address of thread 1's lock record.

If thread 2 also tries to acquire the lock while thread 1 is copying the object header (before thread 1's CAS), thread 2 copies the header into its own lock-record space too; but thread 2's CAS then finds that thread 1 has already changed the object header, so thread 2's CAS fails, and thread 2 tries to use spinning to wait for thread 1 to release the lock.

Spinning consumes CPU, so the number of spins is bounded (for example 10 or 100). If the spin limit is reached without thread 1 releasing the lock, or if thread 1 is still executing while thread 2 is still spinning and yet another thread 3 starts competing for the lock object, the lightweight lock inflates to a heavyweight lock. With a heavyweight lock, every thread except the owner blocks, preventing the CPU from spinning idly.

Note: to avoid useless spinning, once a lightweight lock inflates to a heavyweight lock it is never downgraded back; nor is a lightweight lock downgraded to a biased lock. In short, locks can be upgraded but not downgraded, although a biased lock's state can be reset to unlocked.

(3) The advantages and disadvantages of these locks (biased lock, lightweight lock, heavyweight lock)

2. Lock coarsening

In theory, the scope of a synchronized block should be as small as possible, synchronizing only over the actual scope of the shared data, to minimize the number of operations that need synchronization and shorten blocking time; under contention, waiting threads then get the lock sooner.

However, locking and unlocking also consume resources, so a series of consecutive lock and unlock operations can cause unnecessary performance loss.

Lock coarsening merges multiple consecutive lock/unlock operations on the same lock into a single, wider-scoped lock, avoiding frequent locking and unlocking.
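As an illustration of the transformation, the two methods below show the before and after of coarsening a lock taken inside a loop. The JIT performs this rewrite automatically; the hand-written version here (with invented names) only makes the idea concrete:

```java
public class CoarseningDemo {
    private final Object lock = new Object();
    private int sum = 0;

    // As written in source: the lock is acquired and released on every iteration.
    public void fineGrained(int[] data) {
        for (int x : data) {
            synchronized (lock) {
                sum += x;
            }
        }
    }

    // What lock coarsening effectively produces: one acquire/release
    // around the whole loop, with the same observable result.
    public void coarsened(int[] data) {
        synchronized (lock) {
            for (int x : data) {
                sum += x;
            }
        }
    }

    public int getSum() {
        return sum;
    }
}
```

Both methods compute the same sums; the coarsened form simply pays the lock cost once per call instead of once per element.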

3. Lock elimination

When the Java virtual machine JIT-compiles code (roughly: compiling a piece of code just before it is first executed, also called just-in-time compilation), it scans the running context and, via escape analysis, removes locks on objects for which contention over shared resources is impossible. Eliminating such unnecessary locks saves meaningless lock-request time.
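A classic illustration of lock elimination: StringBuffer's append() and toString() are synchronized, but when the buffer is a local variable that never escapes the method, escape analysis lets the JIT prove that no other thread can contend on its monitor and elide those locks. The elision happens inside the JIT, not in source code; the class and method names below are invented:

```java
public class LockElisionDemo {
    // sb never escapes this method, so the JIT can remove the
    // synchronization inside StringBuffer's synchronized methods.
    public static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();
        sb.append(a);
        sb.append(b);
        return sb.toString();
    }
}
```

This is also why, for purely local use, StringBuilder (unsynchronized) and a lock-elided StringBuffer end up with similar performance.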

That is all of this analysis of synchronized lock upgrades in Java. Thank you for reading!
