2025-01-17 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article analyzes how a synchronized lock is upgraded, with worked examples. The approach is simple, fast, and practical; let's walk through the synchronized lock-upgrade process step by step.
1 Lock-free state
The JVM delays the start of biased locking by 4 seconds, during which objects are created in the lock-free state. If the biased-lock startup delay is disabled, or once the 4 seconds have passed and no thread has competed for the object, the object enters the lock-free, biasable state.
Strictly speaking, this lock-free biasable state should be called the anonymously biased (Anonymously Biased) state: the last three bits of the object's mark word are already 101, but the threadId bits are still all zero, so the object is not biased toward any thread. In short, depending on the JVM configuration, a freshly created object may be either lock-free or anonymously biased.
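To make those bit patterns concrete, here is an illustrative decoder for the lock-related low bits of a 64-bit HotSpot mark word (lock bits 0-1, biased bit 2; the biased thread pointer is assumed here to start at bit 10). This is a sketch for learning, not JVM code:

```java
public class MarkWordBits {
    // Decode the lock state from the low bits of a 64-bit mark word.
    // Illustrative sketch only; the real layout lives in HotSpot's markWord.hpp.
    static String lockState(long markWord) {
        long lockBits = markWord & 0b11;               // lowest two bits
        if (lockBits == 0b01) {
            if ((markWord & 0b111) == 0b101) {          // biased bit set
                // assumption: thread pointer occupies the bits from bit 10 up;
                // all-zero thread bits mean "anonymously biased"
                return (markWord >>> 10) == 0 ? "anonymously biased" : "biased";
            }
            return "unlocked";
        }
        if (lockBits == 0b00) return "lightweight";
        if (lockBits == 0b10) return "heavyweight";
        return "marked for GC";
    }

    public static void main(String[] args) {
        System.out.println(lockState(0b001));              // unlocked, non-biasable
        System.out.println(lockState(0b101));              // anonymously biased
        System.out.println(lockState((42L << 10) | 0b101)); // biased toward some thread
        System.out.println(lockState(0b000));              // lightweight
        System.out.println(lockState(0b010));              // heavyweight
    }
}
```

The 001 and 101 values printed by JOL in the experiments below correspond exactly to the "unlocked" and "anonymously biased" branches here.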
In addition, if biased locking is disabled via a JVM flag, the object remains lock-free and non-biasable until some thread acquires it as a lock. Modify the JVM startup parameters:
-XX:-UseBiasedLocking
Print the object's memory layout after a 5-second delay:
public static void main(String[] args) throws InterruptedException {
    User user = new User();
    TimeUnit.SECONDS.sleep(5);
    System.out.println(ClassLayout.parseInstance(user).toPrintable());
}
As you can see, even after the startup delay, the object stays in the 001 lock-free, non-biasable state. You may wonder why a non-biasable state exists at all when there is no locking; the explanation found in the references is as follows:
synchronized is also used in many places inside the JVM itself, where thread contention clearly exists. Upgrading gradually from the biased state in those places would add extra overhead, so the JVM sets a biased-locking startup delay to reduce that performance loss.
That is, in the non-biasable state, if a thread tries to acquire the lock, the biased-lock stage is skipped and a lightweight lock is used directly. Verify with code:
// -XX:-UseBiasedLocking
public static void main(String[] args) throws InterruptedException {
    User user = new User();
    synchronized (user) {
        System.out.println(ClassLayout.parseInstance(user).toPrintable());
    }
}
Looking at the result, when synchronized is used with biased locking disabled, the lock goes straight to the lightweight state (lock bits 00).
One more note: in the anonymously biased state, calling the default hashCode() method returns the object to the lock-free state and writes the identity hash code into the mark word. In that state, if a thread then tries to acquire the lock, it upgrades directly from lock-free to a lightweight lock, not to a biased lock.
2 Biased lock
2.1 Biased lock principle
The anonymously biased state is the initial state of a biased lock. The first thread that tries to acquire the object's lock uses a CAS operation (the CMPXCHG assembly instruction) to attempt to write its own threadID into the object header's mark word, promoting the anonymous bias to a real bias toward that thread. In the biased state the threadID pointer is non-null, and the biased lock's epoch (timestamp) holds a valid value.
When a thread tries to acquire the lock again, it checks whether the threadID stored in the mark word matches its own. If it does, the current thread already holds the object's lock, and no CAS operation is needed to lock again.
If the threadID stored in the mark word differs from the current thread's, a CAS operation is attempted to replace the threadID in the mark word with the current thread's ID. The CAS can only succeed when the object is in one of the following two states:
The object is in the anonymously biased state
The object is in a rebiasable state, in which the new thread can use CAS to point the threadID at itself
If the object is in neither of these states, the lock is contended, and after the CAS fails a biased-lock revocation is performed. Revoking a biased lock must wait for the global safe point (Safe Point, a state the JVM establishes so that reference relationships cannot change during garbage collection, with all threads suspended). At the safe point, the thread that holds the biased lock is paused.
After the threads are paused, the JVM traverses all of its current threads to check whether the thread holding the biased lock is still alive:
If that thread is alive and currently executing the code in the synchronized block, upgrade to a lightweight lock
If the thread holding the biased lock is not alive, or is not executing the code in the synchronized block, check whether rebiasing is allowed:
If rebiasing is not allowed, revoke the biased lock, upgrade the mark word to a lightweight lock, and let threads compete for the lock via CAS.
If rebiasing is allowed, reset the object to the anonymously biased state and CAS the bias over to the new thread
After the above operations are complete, the paused threads are woken and resume executing code from the safe point.
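The safe-point decision above can be sketched as a small decision function. This is an illustrative model of the checks, not HotSpot code; the input flags correspond to the three conditions described in the list:

```java
public class BiasRevocation {
    enum Outcome { UPGRADE_TO_LIGHTWEIGHT, REVOKE_THEN_LIGHTWEIGHT, REBIAS_TO_NEW_THREAD }

    // Model of the decision made at the safe point after a CAS on a
    // biased mark word fails. Illustrative only.
    static Outcome decide(boolean ownerAlive, boolean ownerInSyncBlock, boolean rebiasAllowed) {
        if (ownerAlive && ownerInSyncBlock) {
            // owner is still inside the synchronized block: inflate to lightweight
            return Outcome.UPGRADE_TO_LIGHTWEIGHT;
        }
        // owner is dead or outside the synchronized block
        return rebiasAllowed ? Outcome.REBIAS_TO_NEW_THREAD
                             : Outcome.REVOKE_THEN_LIGHTWEIGHT;
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true, false));  // UPGRADE_TO_LIGHTWEIGHT
        System.out.println(decide(false, false, true)); // REBIAS_TO_NEW_THREAD
        System.out.println(decide(true, false, false)); // REVOKE_THEN_LIGHTWEIGHT
    }
}
```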
2.2 Biased lock upgrade
From the process above we already know that the anonymously biased state can fall back to the lock-free state or be promoted to a biased lock; now let's look at the transitions out of the other biased states.
Upgrading a biased lock to a lightweight lock
public static void main(String[] args) throws InterruptedException {
    User user = new User();
    synchronized (user) {
        System.out.println(ClassLayout.parseInstance(user).toPrintable());
    }
    Thread thread = new Thread(() -> {
        synchronized (user) {
            System.out.println("--THREAD--:" + ClassLayout.parseInstance(user).toPrintable());
        }
    });
    thread.start();
    thread.join();
    System.out.println("--END--:" + ClassLayout.parseInstance(user).toPrintable());
}
Checking the memory layout: the biased lock is upgraded to a lightweight lock, and after the synchronized block finishes the lock is released and the object becomes lock-free and non-biasable.
Upgrading a biased lock to a heavyweight lock
public static void main(String[] args) throws InterruptedException {
    User user = new User();
    Thread thread = new Thread(() -> {
        synchronized (user) {
            System.out.println("--THREAD1--:" + ClassLayout.parseInstance(user).toPrintable());
            try {
                user.wait(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("--THREAD END--:" + ClassLayout.parseInstance(user).toPrintable());
        }
    });
    thread.start();
    thread.join();
    TimeUnit.SECONDS.sleep(3);
    System.out.println(ClassLayout.parseInstance(user).toPrintable());
}
Looking at the memory layout, you can see that after the object's wait() method is called, the lock is upgraded from a biased lock to a heavyweight lock, and the object becomes lock-free after the lock is released.
This is because wait() depends on the monitor that a heavyweight lock associates with the object: the monitor puts the thread into the WAITING state after wait() is called, so the lock is forced up to a heavyweight lock. Likewise, calling hashCode() while the biased lock is held escalates it directly to a heavyweight lock.
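The dependency of wait() on the object's monitor is easy to demonstrate: calling wait() without holding the lock throws IllegalMonitorStateException, while inside synchronized the monitor is owned and the call simply parks and times out. A minimal check (the class and field names are mine, for illustration):

```java
public class WaitNeedsMonitor {
    static String outsideResult; // records what happened outside synchronized

    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();

        // wait() outside synchronized: the current thread does not own the monitor
        try {
            lock.wait(10);
            outsideResult = "no exception";
        } catch (IllegalMonitorStateException e) {
            outsideResult = "IllegalMonitorStateException";
        }
        System.out.println(outsideResult);

        // wait(timeout) inside synchronized: the monitor is owned, so the call
        // releases it, parks the thread, and times out after 10 ms
        synchronized (lock) {
            lock.wait(10);
        }
        System.out.println("done");
    }
}
```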
2.3 Bulk rebias
With biased locking enabled, when one thread creates a large number of objects, performs synchronized operations on them, and then releases the locks, all of those objects are left in the biased-lock state. If another thread then also tries to acquire the locks on these objects, a bulk rebias (Bulk Rebias) is triggered. When it fires, the lock objects that the first thread has finished synchronizing on are reset to a rebiasable state the next time they are accessed under synchronized, allowing them to be rebiased quickly; this avoids the cost of revoking the biased lock and then upgrading to a lightweight lock.
First, let's look at the parameters related to biased locking. Modify the JVM startup parameters and use the following flag to print the JVM's default parameter values when the project starts:
-XX:+PrintFlagsFinal
There are three attributes to pay attention to:
BiasedLockingBulkRebiasThreshold: the bulk rebias threshold for biased locks; the default is 20
BiasedLockingBulkRevokeThreshold: the bulk revoke threshold for biased locks; the default is 40
BiasedLockingDecayTime: the delay after which the revocation count is reset; the default is 25000 milliseconds (25 seconds)
Bulk rebias is tracked per class rather than per object: each class maintains a biased-lock revocation counter that is incremented whenever a bias revocation occurs on one of the class's objects. When the counter reaches the default threshold of 20, the JVM concludes that the class's lock objects are no longer suited to the original thread, so a bulk rebias occurs. If, within 25 seconds of the last bulk rebias, the revocation count reaches 40, a bulk revoke occurs; if more than 25 seconds pass, a count in the range [20, 40) is reset.
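The threshold logic can be modeled with a per-class counter. The class, field, and method names here are hypothetical, and the reset-to-zero decay is a modeling choice; only the thresholds (20 revocations for bulk rebias, 40 for bulk revoke, 25-second decay window) come from the JVM defaults above:

```java
public class BiasCounter {
    enum Action { NONE, BULK_REBIAS, BULK_REVOKE }

    static final int REBIAS_THRESHOLD = 20;   // BiasedLockingBulkRebiasThreshold
    static final int REVOKE_THRESHOLD = 40;   // BiasedLockingBulkRevokeThreshold
    static final long DECAY_MS = 25_000;      // BiasedLockingDecayTime

    int revocations;                          // per-class revocation count
    long lastBulkRebiasAt;                    // time of the last bulk rebias

    // Record one biased-lock revocation at time nowMs; report any bulk action.
    Action recordRevocation(long nowMs) {
        // A count in [20, 40) is reset once the decay window has passed
        if (revocations >= REBIAS_THRESHOLD && revocations < REVOKE_THRESHOLD
                && nowMs - lastBulkRebiasAt > DECAY_MS) {
            revocations = 0;                  // modeling choice: decay resets to zero
        }
        revocations++;
        if (revocations == REVOKE_THRESHOLD) return Action.BULK_REVOKE;
        if (revocations == REBIAS_THRESHOLD) {
            lastBulkRebiasAt = nowMs;
            return Action.BULK_REBIAS;
        }
        return Action.NONE;
    }

    public static void main(String[] args) {
        BiasCounter c = new BiasCounter();
        Action last = Action.NONE;
        for (int i = 0; i < 20; i++) last = c.recordRevocation(0);
        System.out.println(last);             // 20th revocation -> BULK_REBIAS
        for (int i = 0; i < 20; i++) last = c.recordRevocation(1_000);
        System.out.println(last);             // 40th within 25 s -> BULK_REVOKE

        BiasCounter d = new BiasCounter();
        for (int i = 0; i < 20; i++) d.recordRevocation(0);
        System.out.println(d.recordRevocation(30_000)); // decayed -> NONE
    }
}
```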
If the theory above sounds a little hard to follow, don't worry; let's first verify the bulk rebias process with code:
private static Thread t1, t2;

public static void main(String[] args) throws InterruptedException {
    TimeUnit.SECONDS.sleep(5);
    List<Object> list = new ArrayList<>();
    for (int i = 0; i < 40; i++) {
        list.add(new Object());
    }
    t1 = new Thread(() -> {
        for (int i = 0; i < list.size(); i++) {
            synchronized (list.get(i)) { }
        }
        LockSupport.unpark(t2);
    });
    t2 = new Thread(() -> {
        LockSupport.park();
        for (int i = 0; i < 30; i++) {
            Object o = list.get(i);
            synchronized (o) {
                if (i == 18 || i == 19) {
                    System.out.println("THREAD-2 Object" + (i + 1) + ":" + ClassLayout.parseInstance(o).toPrintable());
                }
            }
        }
    });
    t1.start();
    t2.start();
    t2.join();
    TimeUnit.SECONDS.sleep(3);
    System.out.println("Object19:" + ClassLayout.parseInstance(list.get(18)).toPrintable());
    System.out.println("Object20:" + ClassLayout.parseInstance(list.get(19)).toPrintable());
    System.out.println("Object30:" + ClassLayout.parseInstance(list.get(29)).toPrintable());
    System.out.println("Object31:" + ClassLayout.parseInstance(list.get(30)).toPrintable());
}

Analyzing the code above: after thread t1 finishes running, the locks of all objects in the list are biased toward t1. t1 then wakes the parked thread t2, and t2 tries to acquire the locks of the first 30 objects. We print the lock states of the 19th and 20th objects as t2 acquires them: while t2 accesses the first 19 objects, their biased locks are revoked and upgraded to lightweight locks; for the following 11 objects (indices 19-29), the revocation count has reached 20, so bulk rebias is triggered and the locks become biased toward t2. After all threads have finished, check the states of the 19th, 20th, 30th, and 31st objects again: once t2 ends, objects 1-19 release their lightweight locks and become lock-free and non-biasable; objects 20-30 are still biased locks, but now biased toward t2 instead of t1; objects 31-40 were never accessed by t2, so they remain biased toward t1.

2.4 Bulk revocation

Under intense multi-thread contention, biased locking degrades performance, so a bulk revocation mechanism exists. Test it with code:

private static Thread t1, t2, t3;

public static void main(String[] args) throws InterruptedException {
    TimeUnit.SECONDS.sleep(5);
    List<Object> list = new ArrayList<>();
    for (int i = 0; i < 40; i++) {
        list.add(new Object());
    }
    t1 = new Thread(() -> {
        for (int i = 0; i < list.size(); i++) {
            synchronized (list.get(i)) { }
        }
        LockSupport.unpark(t2);
    });
    t2 = new Thread(() -> {
        LockSupport.park();
        for (int i = 0; i < list.size(); i++) {
            Object o = list.get(i);
            synchronized (o) {
                if (i == 18 || i == 19) {
                    System.out.println("THREAD-2 Object" + (i + 1) + ":" + ClassLayout.parseInstance(o).toPrintable());
                }
            }
        }
        LockSupport.unpark(t3);
    });
    t3 = new Thread(() -> {
        LockSupport.park();
        for (int i = 0; i < list.size(); i++) {
            Object o = list.get(i);
            synchronized (o) {
                System.out.println("THREAD-3 Object" + (i + 1) + ":" + ClassLayout.parseInstance(o).toPrintable());
            }
        }
    });
    t1.start();
    t2.start();
    t3.start();
    t3.join();
    System.out.println("New: " + ClassLayout.parseInstance(new Object()).toPrintable());
}

Analyzing the flow above:

In thread t1, the 40 lock objects all become biased locks
In thread t2, the biased locks of objects 1-19 are revoked and upgraded to lightweight locks, after which objects 20-40 are bulk-rebiased
In thread t3, the thread first competes directly for the lightweight locks of objects 1-19; from the 20th object onward, bulk rebias will not happen a second time, so the biased locks of objects 20-39 are revoked and upgraded to lightweight locks. At that point t2 and t3 have performed 40 lock revocations in total, which triggers the bulk revocation mechanism: biasing is revoked and the locks become lightweight

Look at a new object created after all three threads have finished: it is in the lock-free, non-biasable 001 state, which shows that once a class triggers bulk revocation, the JVM disables biasing when creating new instances of that class; all of its newly created objects are lock-free and non-biasable.

2.5 Summary

Biased locking eliminates the synchronization primitives when a resource is uncontended, improving performance when synchronized resources are accessed by a single thread. Once multiple threads compete, the biased lock is revoked and upgraded to a lightweight lock. If an application is highly concurrent and its synchronized resources are always accessed by multiple threads, the revocation step is pure overhead, and the stop-the-world pause caused by entering a safe point to revoke a bias should be avoided as much as possible; in that case, disable biased locking to reduce the performance cost.

3 Lightweight lock
3.1 Lightweight lock principle

1. When code accesses a synchronized resource and the lock object is in the lock-free, non-biasable state, the JVM first creates a lock record (lock record) in the current thread's stack frame, which stores:

displaced mark word: a copy of the lock object's current mark word
owner pointer: a pointer to the current lock object; it is not touched while the mark word is being copied

2. After the mark word has been copied, the JVM uses a CAS operation to try to point the lock-record pointer in the object's mark word at the lock record in the stack frame, and points the lock record's owner pointer at the lock object's mark word:

If the CAS succeeds, the thread has won the lock object: the lock flag bits are set to 00, the object is in the lightweight-lock state, and the code in the synchronized block executes
If the CAS fails, check whether the object's mark word points into the current thread's stack frame:
If it does, the current thread already holds the object's lock; this is synchronized lock reentry, and the synchronized block can run directly
Otherwise another thread already holds the object's lock; if the lock still cannot be obtained after a certain number of spins, the lightweight lock must be upgraded to a heavyweight lock, the flag bits become 10, and subsequently waiting threads are blocked

3. Releasing a lightweight lock also uses a CAS operation, which tries to swap the displaced mark word back into the mark word, checking whether the lock-record pointer in the lock object's mark word still points at the current thread's lock record:

If the swap succeeds, there was no contention and the whole synchronization is complete
If the swap fails, the lock is contended: another thread may have tried and failed to acquire the lock in the meantime, been suspended, and modified the lock object's mark word to upgrade it to a heavyweight lock; the release then follows the heavyweight-lock unlock path and finally wakes the suspended threads

3.2 Lightweight lock reentry

We know that synchronized is reentrant, and under lightweight locking reentry also relies on the lock records on the stack. Take the following triple reentry as an example:

synchronized (user) {
    synchronized (user) {
        synchronized (user) {
            // TODO
        }
    }
}

Each reentry of a lightweight lock allocates a lock record on the stack, but the stored data differs:

The first lock record's displaced mark word is a copy of the lock object's mark word, and its owner pointer points at the lock object
Lock records allocated for later reentries store null as the displaced mark word and only keep the owner pointer to the object

Under lightweight locking, the reentry count equals the number of lock records for the lock object in the stack frames; this count implicitly acts as the reentry counter. Counting is necessary because every unlock must be matched by a lock; only when the number of unlocks finally equals the number of locks is the lock object really released. During release, a reentrant unlock simply deletes a lock record from the stack; once no reentries remain, CAS is used to swap back the lock object's mark word.

3.3 Lightweight lock upgrade

Before JDK 1.6, a lightweight lock spun 10 times by default; if that count was exceeded, or the number of spinning threads exceeded half the number of CPU cores, the lock was upgraded to a heavyweight lock. This is because too many spins, or too many threads spinning, consumes excessive CPU, whereas under a heavyweight lock threads enter a wait queue, which reduces CPU consumption. The spin count can also be changed with a JVM parameter:

-XX:PreBlockSpin

JDK 1.6 introduced adaptive spinning (Adaptive Self Spinning): the spin count is no longer fixed but controlled by the JVM itself, based on the previous spin time on the same lock and the state of the lock's owner:

For a given lock object, if a spin recently succeeded in acquiring the lock and the owning thread is running, the VM assumes this spin is also likely to succeed and allows it to last relatively longer
For a given lock object, if spins rarely succeed, later attempts to acquire the lock may skip the spin entirely and block the thread directly, avoiding wasted processor resources

The following code verifies the upgrade from a lightweight lock to a heavyweight lock:

public static void main(String[] args) throws InterruptedException {
    User user = new User();
    System.out.println("--MAIN--:" + ClassLayout.parseInstance(user).toPrintable());
    Thread thread1 = new Thread(() -> {
        synchronized (user) {
            System.out.println("--THREAD1--:" + ClassLayout.parseInstance(user).toPrintable());
            try {
                TimeUnit.SECONDS.sleep(5);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    Thread thread2 = new Thread(() -> {
        try {
            TimeUnit.SECONDS.sleep(2);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        synchronized (user) {
            System.out.println("--THREAD2--:" + ClassLayout.parseInstance(user).toPrintable());
        }
    });
    thread1.start();
    thread2.start();
    thread1.join();
    thread2.join();
    TimeUnit.SECONDS.sleep(3);
    System.out.println(ClassLayout.parseInstance(user).toPrintable());
}
In the above code, thread 2 sleeps for two seconds after startup and then attempts to acquire the lock, ensuring that thread 1 can get the lock first, which results in resource competition for the lock object. View the changes in the object lock state:
When thread 1 holds the lightweight lock and thread 2 tries to acquire it, the contention upgrades the lightweight lock to a heavyweight lock. After both threads finish, you can see that the object returns to the lock-free, non-biasable state, and the next time a thread tries to acquire the lock it starts directly from the lightweight-lock state.
The main thread sleeps for 3 seconds before the final print because releasing the lock takes some time; if the object's memory layout were printed immediately after the threads finish, the object might still appear to be in the heavyweight-lock state.
3.4 Summary
Lightweight locks, like biased locks, are a JDK optimization for multithreading; the difference is that a lightweight lock avoids costly mutex operations through CAS, whereas a biased lock eliminates the synchronization entirely when there is no contention.
Lightweight locks are "light" relative to heavyweight locks, and their performance is somewhat better. Before upgrading to a heavyweight lock, a lightweight lock first tries CAS in order to reduce mutual exclusion between threads. When multiple threads execute synchronized blocks alternately, the JVM can guarantee synchronization with lightweight locks alone, avoiding the overhead of thread switching and the transitions between user mode and kernel mode. Excessive spinning, however, wastes CPU resources, and in that situation a lightweight lock may actually consume more resources.
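The trade-off between spinning and blocking can be sketched with a toy lock that spins a bounded number of times on a CAS before parking the thread. This is a simplification of what the JVM does internally, with hypothetical names; SPIN_LIMIT is an arbitrary illustrative value, not a JVM default:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class SpinThenBlockLock {
    static final int SPIN_LIMIT = 100; // illustrative; real spin counts are adaptive

    private final AtomicBoolean held = new AtomicBoolean(false);
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // Spin phase: cheap CAS attempts, analogous to a lightweight lock
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (held.compareAndSet(false, true)) return;
        }
        // Block phase: park the thread, analogous to heavyweight-lock inflation
        Thread current = Thread.currentThread();
        waiters.add(current);
        while (!held.compareAndSet(false, true)) {
            LockSupport.park(); // may return spuriously; the loop re-checks the CAS
        }
        waiters.remove(current);
    }

    public void unlock() {
        held.set(false);
        Thread next = waiters.peek();
        if (next != null) LockSupport.unpark(next);
    }

    static int counter = 0; // shared state protected by the toy lock

    public static void main(String[] args) throws InterruptedException {
        SpinThenBlockLock lock = new SpinThenBlockLock();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try { counter++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);
    }
}
```

With short critical sections and alternating threads, most acquisitions succeed in the spin phase; under sustained contention, threads fall through to parking, which is exactly the lightweight-to-heavyweight trade-off described above.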
4 Heavyweight lock
4.1 Monitor
A heavyweight lock is implemented through the monitor associated with the object, and the monitor in turn relies on the operating system's Mutex Lock; this is why the heavyweight lock is comparatively "heavy". When the operating system switches threads, it must cross from user mode into kernel mode, which is very expensive. Before studying how heavyweight locks work, you need to understand the core concepts inside a monitor:
Owner: identifies the thread that owns the monitor; it is null both initially and after the lock is released
cxq (ContentionList): the contention queue; all threads competing for the lock are placed here first.
EntryList: the candidate list; threads are moved here from the cxq queue when the owner releases the lock
OnDeck: when threads are moved from cxq to the EntryList, one thread is designated as ready (OnDeck), meaning it may compete for the lock; if it wins the competition it becomes the owner, and if it loses it is put back into the EntryList
WaitSet: threads blocked by calling the wait() or wait(time) methods are placed in this queue
count: the monitor's counter; incrementing it indicates that a thread has acquired the object's lock, and the thread decrements it when it releases the monitor.
recursions: the thread's reentry count
When a thread calls wait(), it releases the monitor it currently holds, the owner is set to null, and the thread enters the WaitSet to wait to be woken. When a thread calls notify() or notifyAll(), the monitor is likewise released (on leaving the synchronized block), and the woken WaitSet threads compete for the monitor again.
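The WaitSet interaction can be verified with a small example: the waiting thread must release the monitor inside wait() (otherwise the notifier could never enter the synchronized block), and notify() moves it back to compete for the lock. The class and field names here are mine, for illustration:

```java
public class WaitNotifyDemo {
    static final Object monitor = new Object();
    static boolean ready = false;            // condition guarded by the monitor
    static volatile boolean wokenUp = false; // set once the waiter reacquires the lock

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (monitor) {
                while (!ready) {             // guard against spurious wakeups
                    try {
                        monitor.wait();      // releases the monitor; owner becomes null
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                wokenUp = true;              // monitor reacquired after notify()
            }
        });
        waiter.start();
        Thread.sleep(100); // crude: give the waiter time to enter the WaitSet

        // Main can enter synchronized only because wait() released the monitor
        synchronized (monitor) {
            ready = true;
            monitor.notify(); // move the waiter out of the WaitSet
        }
        waiter.join();
        System.out.println("wokenUp = " + wokenUp);
    }
}
```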
4.2 heavyweight lock principle
When upgraded to a heavyweight lock, the pointer in the mark word of the lock object no longer points to the lock record in the thread stack, but to the monitor object associated with the lock object in the heap. When multiple threads access synchronous code at the same time, they first attempt to take ownership of the monitor corresponding to the current lock object:
If acquisition succeeds, check whether the current thread is reentering; if so, increment recursions
If acquisition fails, the current thread is blocked; it wakes after other threads unlock and competes for the lock object again
Under a heavyweight lock, locking and unlocking involve the operating system's mutually exclusive Mutex Lock operations, and scheduling between threads and changing thread states require switching between user mode and kernel mode, which consumes a great deal of CPU resources and degrades performance.
At this point, you should have a deeper understanding of this synchronized lock-upgrade analysis; why not try the examples out in practice!