What is lock optimization and CAS


This article explains lock optimization and CAS. The content is simple, clear, and easy to learn and understand.

Object header

Every object in the JVM has an object header that stores the object's system information. The object header layout of a 64-bit JVM is shown in the figure below:

Where:

Mark Word: 64 bits; a multipurpose data area that stores information such as the object's hash code, generational age, and lock pointer.

Klass Word: 64 bits when pointer compression is not enabled, but a 64-bit JVM enables pointer compression (-XX:+UseCompressedOops) by default, so it is compressed to 32 bits.

In addition, as the figure shows, different lock states correspond to different Mark Word layouts:

No lock: 25-bit unused + 31-bit hash code + 1-bit unused + 4-bit generational age + 1-bit biased-lock flag + 2-bit lock flag

Biased lock: 54-bit ID of the thread holding the biased lock + 2-bit bias epoch (timestamp) + 1-bit unused + 4-bit generational age + 1-bit biased-lock flag + 2-bit lock flag

Lightweight lock: 62-bit pointer to the lock record in the thread's stack + 2-bit lock flag

Heavyweight lock: 62-bit pointer to the heavyweight lock + 2-bit lock flag

The JVM distinguishes lock states mainly by two fields, biased_lock and lock. The mapping is as follows:

biased_lock=0, lock=00: lightweight lock

biased_lock=0, lock=01: no lock

biased_lock=0, lock=10: heavyweight lock

biased_lock=0, lock=11: GC mark

biased_lock=1, lock=01: biased lock
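To see these bits in practice, one option is the OpenJDK JOL tool (an assumption not mentioned in this article: the org.openjdk.jol:jol-core dependency is on the classpath), which can print an object's header before and inside a synchronized block:

import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        // Before locking: the lock bits should show the unlocked (or biasable) state
        System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        synchronized (obj) {
            // Inside the synchronized block: the Mark Word now holds a pointer
            // to a lock record (lightweight) or to a monitor (heavyweight)
            System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        }
    }
}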

Runtime optimization of locks

In many cases the JVM optimizes thread contention at the VM level, resolving contention as efficiently as possible and also trying to eliminate unnecessary locking. These optimizations include:

Biased lock

Lightweight lock

Heavyweight lock

Spin lock

Lock elimination

Introduction to biased locks (disabled by default since JDK 15)

Biased locking is a lock optimization introduced in JDK 1.6. The core idea is that if there is no contention, the synchronization work for a thread that has already acquired the lock is skipped: once a thread acquires the lock, the lock enters biased mode, and when the same thread requests the lock again, no further synchronization is needed, saving time. If another thread requests the lock in the meantime, the lock exits biased mode.

Biased locking is enabled with -XX:+UseBiasedLocking. When a lock is biased, the Mark Word records the thread that acquired it (54 bits), and this information is used to determine whether the current thread holds the biased lock.

Note that biased locking is disabled by default and the related options are deprecated starting with JDK 15; see JDK-8231264.

Locking process

The locking process for biased locks is as follows:

Step 1: check whether biased_lock in the Mark Word is 1 and lock is 01, confirming the biasable state. If biased_lock is 0 the object is unlocked, and the thread competes for the lock directly via CAS; if that fails, go to step 4.

Step 2: if the state is biasable, check whether the thread ID points to the current thread. If it does, go to step 5; otherwise go to step 3.

Step 3: if the thread ID does not point to the current thread, compete for the lock via CAS. On success, set the thread ID in the Mark Word to the current thread and go to step 5; on failure, go to step 4.

Step 4: if the CAS fails to acquire the biased lock, there is contention and the bias is revoked.

Step 5: execute the synchronized code.

Example

Here is a simple example:

import java.util.List;
import java.util.Vector;

public class Main {
    private static List<Integer> list = new Vector<>();

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 1_0000_0000; i++) {
            list.add(i);
        }
        long end = System.nanoTime();
        System.out.println(end - start);
    }
}

Vector's add() is a synchronized method. Test with the following parameters:

-XX:BiasedLockingStartupDelay=0  # biased-locking startup delay; 0 means start immediately
-XX:+UseBiasedLocking            # enable biased locking

The output is:

1664109780

With biased locking disabled:

-XX:BiasedLockingStartupDelay=0 -XX:-UseBiasedLocking

The output is:

2505048191

As you can see, biased locking does help performance to some degree. Note, however, that it has little effect when lock contention is intense: heavy contention makes the lock-holding thread switch constantly, so the lock can hardly stay in biased mode. Not only does this fail to improve performance, the frequent switching actually degrades it. In highly contended scenarios you can therefore try -XX:-UseBiasedLocking to disable biased locking.

Introduction to lightweight locks

If biased locking fails, the JVM has the thread request a lightweight lock. Internally, a lightweight lock is implemented with a BasicObjectLock object, which consists of:

A BasicLock object

A pointer to the Java object that holds the lock

The BasicObjectLock object is placed in a stack frame of the Java stack, and the BasicLock object also maintains a field called displaced_header, which backs up the object header's Mark Word.

Locking process

Step 1: use the Mark Word to determine whether the object is unlocked (biased_lock is 0 and lock is 01). If so, a space called a lock record (Lock Record) is created to store a copy of the current Mark Word.

Step 2: copy the object header's Mark Word into the lock record.

Step 3: after the copy succeeds, use a CAS operation to try to update the lock object's Mark Word to a pointer to the lock record, and point the owner field of the lock record in the thread's stack frame at the object's Mark Word.

Step 4: if the operation succeeds, the thread now owns the lock.

Step 5: if the operation fails, the JVM checks whether the Mark Word points to the current thread's stack frame. If it does, the current thread already owns this object's lock and can enter the synchronized block directly. Otherwise the current thread spins trying to acquire the lock; if it still has not acquired the lock after a certain number of spins, the lock inflates into a heavyweight lock.

Introduction to heavyweight locks

When a lightweight lock still cannot be acquired after a certain number of spins, it inflates into a heavyweight lock. Whereas a lightweight lock's Mark Word stores a pointer to the lock record, a heavyweight lock's Mark Word stores a pointer to an ObjectMonitor, as shown in the figure below:

Because lock records are private to a thread and cannot be accessed by multiple threads, heavyweight locks introduce ObjectMonitor, which can be shared between threads.

Locking process

On the first locking attempt, a CAS operation first tries to modify the _owner field of the ObjectMonitor. There are four possible outcomes:

Case 1: the lock is not held by any other thread, and the lock is acquired successfully.

Case 2: the lock is already held by the current thread, so this is a reentry and acquisition succeeds.

Case 3: the lock is held through the lock record, which is thread-private and therefore belongs to the current thread, so this also counts as a reentry, and the reentry count is set to 1.

Case 4: if none of the above applies, try to acquire the lock again (by calling EnterI()).

Retrying the lock is a loop that keeps trying to acquire the lock until it succeeds. The process can be summarized as follows:

Try several times to acquire the lock.

On failure, wrap the thread and place it in the blocking queue.

Try to acquire the lock again.

On failure, suspend (park) the current thread.

After being woken up, try to acquire the lock again.

On success, exit the loop; otherwise continue.
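As an illustration only, here is a simplified sketch of the "CAS the owner, enqueue, park, retry" pattern described above. It is a toy model written with java.util.concurrent, not HotSpot's actual ObjectMonitor code, and it is not reentrant:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

// A toy monitor illustrating the enter loop described above; for illustration only.
public class ToyMonitor {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void enter() {
        Thread current = Thread.currentThread();
        // Several quick attempts to CAS the owner field.
        for (int i = 0; i < 3; i++) {
            if (owner.compareAndSet(null, current)) {
                return;
            }
        }
        // Failed: enqueue, then loop "try -> park -> woken up -> try again".
        waiters.add(current);
        while (!owner.compareAndSet(null, current)) {
            LockSupport.park(this);
        }
        waiters.remove(current);
    }

    public void exit() {
        owner.set(null);
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next);  // wake one waiter so it can retry
        }
    }
}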

Spin lock

A spin lock lets a thread that fails to acquire a lock execute an empty loop (a spin) instead of being suspended right away. If the lock can be acquired after a few iterations, execution simply continues; if not, the current thread is then suspended.

With spin locks, threads are suspended less often and execution continuity improves, so they help when lock contention is light and locks are held only briefly. When contention is fierce or locks are held for a long time, however, threads still fail to acquire the lock after spinning and end up suspended anyway, so the spinning time is wasted.

JDK 1.6 provides the -XX:+UseSpinning parameter to enable spin locks. Since JDK 1.7 the parameter has been removed: spinning is no longer user-configurable, it is always performed, and the number of spins is adjusted by the JVM.
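To make the idea concrete, here is a minimal user-level CAS-based spin lock (a sketch of the general technique, not the JVM's internal spinning):

import java.util.concurrent.atomic.AtomicBoolean;

// A minimal CAS-based spin lock: a thread that fails to acquire the lock
// spins in a loop instead of being suspended.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until the CAS from false -> true succeeds.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting (JDK 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}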

Introduction to lock elimination

Lock elimination removes unnecessary locks. For example, when a thread-safe class such as StringBuffer is used in a single-threaded context, its locks are unnecessary; based on escape analysis, these locks can be eliminated, improving performance.

Example

public class Main {
    private static final int CIRCLE = 20000000;

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < CIRCLE; i++) {
            createStringBuffer("Test", String.valueOf(i));
        }
        long end = System.nanoTime();
        System.out.println(end - start);
    }

    private static String createStringBuffer(String s1, String s2) {
        StringBuffer sb = new StringBuffer();
        sb.append(s1);
        sb.append(s2);
        return sb.toString();
    }
}

Parameters:

-XX:+DoEscapeAnalysis -XX:-EliminateLocks -Xcomp -XX:-BackgroundCompilation -XX:BiasedLockingStartupDelay=0

Output:

260642198

With lock elimination enabled:

-XX:+DoEscapeAnalysis -XX:+EliminateLocks -Xcomp -XX:-BackgroundCompilation -XX:BiasedLockingStartupDelay=0

The output is as follows:

253101105

As you can see, there is still some performance improvement, though not a large one.

Application-level lock optimization

Application-level lock optimization means optimizing locks at the code level, including:

Reduce holding time

Reduce granularity

Lock separation

Lock coarsening

Reduce holding time

Reducing lock holding time means keeping the time a lock is occupied as short as possible, which reduces the time threads spend blocking each other. For example:

public synchronized void method() {
    A();
    B();
    C();
}

If only B() actually needs synchronization, the method can be optimized to synchronize only where necessary, that is, while B() executes:

public void method() {
    A();
    synchronized (this) {
        B();
    }
    C();
}

Reduce granularity

Reducing lock granularity means shrinking the scope of the locked object, which lowers the chance of lock conflicts and thus improves the system's concurrency.

Reducing granularity is an effective way to weaken contention between threads. ConcurrentHashMap is the classic example: in JDK 1.7 it is split into Segments, and each concurrent operation locks only the specific segment involved, improving concurrent performance (a simplified sketch of the idea follows below).
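As an illustration of the same idea, here is a simplified striped counter, not ConcurrentHashMap's actual implementation: the state is split into stripes, each guarded by its own lock, so threads touching different stripes do not contend:

import java.util.concurrent.ThreadLocalRandom;

// A simplified striped counter: each stripe has its own lock object,
// so threads that hit different stripes never block each other.
public class StripedCounter {
    private static final int STRIPES = 16;
    private final long[] counts = new long[STRIPES];
    private final Object[] locks = new Object[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    public void increment() {
        // Pick a stripe (here at random; a hash of a key would also work).
        int idx = ThreadLocalRandom.current().nextInt(STRIPES);
        synchronized (locks[idx]) {   // lock only one stripe, not the whole counter
            counts[idx]++;
        }
    }

    public long sum() {
        long total = 0;
        for (int i = 0; i < STRIPES; i++) {
            synchronized (locks[i]) {
                total += counts[i];
            }
        }
        return total;
    }
}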

Lock separation

Lock separation splits one exclusive lock into several locks. LinkedBlockingQueue is an example: its take() and put() operations do not use the same lock; instead the lock is separated into a takeLock and a putLock:

private final ReentrantLock takeLock;
private final ReentrantLock putLock;

The initialization is as follows:

this.takeLock = new ReentrantLock();
this.notEmpty = this.takeLock.newCondition();
this.putLock = new ReentrantLock();

The take() and put() operations are as follows (abridged):

public E take() throws InterruptedException {
    takeLock.lockInterruptibly();  // two threads cannot take at the same time
    // ...
    try {
        // ...
    } finally {
        takeLock.unlock();
    }
    // ...
}

public void put(E e) throws InterruptedException {
    // ...
    putLock.lockInterruptibly();  // two threads cannot put at the same time
    try {
        // ...
    } finally {
        putLock.unlock();
    }
    // ...
}

As you can see, reading data (take) is truly separated from writing data (put) by means of the takeLock and putLock locks.

Lock coarsening

In general, to keep multiple threads running concurrently, each thread should hold a lock for as short a time as possible. However, if the same lock is requested and released over and over again, the lock operations themselves consume resources and hurt performance. When the JVM encounters a series of consecutive requests for and releases of the same lock, it merges all of the lock operations into a single lock request, reducing the number of requests. This process is called lock coarsening. For example:

public void method() {
    synchronized (lock) {
        A();
    }
    synchronized (lock) {
        B();
    }
}

This will be merged into the following form:

public void method() {
    synchronized (lock) {
        A();
        B();
    }
}

Lock coarsening also applies when a lock is requested repeatedly inside a loop; see the sketch below.
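A minimal sketch of the loop case (the loop bound, lock variable, and doWork method here are illustrative, since the original example is truncated):

public class CoarseningDemo {
    private static final Object lock = new Object();

    private static void doWork(int i) {
        // placeholder for the work done inside the loop
    }

    public static void loopBefore() {
        // Before coarsening: the lock is requested on every iteration.
        for (int i = 0; i < 1000; i++) {
            synchronized (lock) {
                doWork(i);
            }
        }
    }

    public static void loopAfter() {
        // After coarsening: the lock is requested once for the whole loop.
        synchronized (lock) {
            for (int i = 0; i < 1000; i++) {
                doWork(i);
            }
        }
    }
}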
