2025-04-02 Update From: SLTechnology News&Howtos
This article explains how to understand the relationship between synchronized and lock. The explanation is kept simple and clear, so it should be easy to follow and learn from.
How does JVM implement synchronized?
We know the synchronized keyword can be used to lock a section of a program, but how exactly is it implemented? Don't worry, let's take a look at a demo:
```java
public class Demo {
    public void synchronizedDemo(Object lock) {
        synchronized (lock) {
            lock.hashCode();
        }
    }
}
```
That is the demo. Now go to the directory containing the class file and run javap -v Demo.class to look at the compiled bytecode (I've excerpted the relevant part here):
```
public void synchronizedDemo(java.lang.Object);
  descriptor: (Ljava/lang/Object;)V
  flags: ACC_PUBLIC
  Code:
    stack=2, locals=4, args_size=2
       0: aload_1
       1: dup
       2: astore_2
       3: monitorenter
       4: aload_1
       5: invokevirtual #2    // Method java/lang/Object.hashCode:()I
       8: pop
       9: aload_2
      10: monitorexit
      11: goto          19
      14: astore_3
      15: aload_2
      16: monitorexit
      17: aload_3
      18: athrow
      19: return
    Exception table:
       from    to  target type
           4    11      14   any
          14    17      14   any
```
As you can see, when the program declares a synchronized block, the compiled bytecode contains monitorenter and monitorexit instructions. Each consumes a reference-type element on the operand stack (the reference in the parentheses of the synchronized keyword) as the lock object to lock or unlock. Look closely and you'll notice one monitorenter but two monitorexit instructions: this is how the Java virtual machine guarantees that the acquired lock is released on both the normal execution path and the exceptional one.
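This one-enter, release-on-both-paths discipline is exactly what you write by hand with java.util.concurrent.locks.ReentrantLock. A minimal sketch of the equivalent pattern (class and method names here are mine, for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockEquivalent {
    private final ReentrantLock lock = new ReentrantLock();

    public int doWork(Object target) {
        lock.lock();              // plays the role of monitorenter
        try {
            return target.hashCode();
        } finally {
            lock.unlock();        // plays the role of monitorexit: the finally
        }                         // block runs on both normal and exception paths
    }
}
```

The try/finally shape mirrors the bytecode's exception table: just as the compiler emits a second monitorexit in the implicit handler, the finally block guarantees unlock() even when doWork throws.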
As for monitorenter and monitorexit, you can think of each lock object as having a lock counter and a pointer to the thread holding the lock:
When monitorenter executes and the target lock object's counter is 0, the lock is not held by any thread. The Java virtual machine assigns the lock to the requesting thread and increments the counter to 1.
When the counter is not 0: if the thread holding the lock is the current thread, the Java virtual machine simply increments the counter again; if not, the requesting thread has to wait until the holding thread releases the lock.
When monitorexit executes, the Java virtual machine decrements the lock object's counter. When the counter drops to 0, the lock is released, and another thread's request for it can then succeed.
Why this design? To allow the same thread to acquire the same lock repeatedly. For example, if a Java class has many synchronized methods, calls between those methods, whether direct or indirect, repeatedly lock the same lock. The counter design lets such reentrant calls work instead of deadlocking.
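That reentrancy is easy to see in a tiny example (a hypothetical class written for illustration): outer() already holds the monitor on this when it calls inner(), so the counter simply goes from 1 to 2 rather than the thread deadlocking on itself:

```java
public class Reentrant {
    private int depth = 0;

    public synchronized int outer() {
        depth++;          // monitor counter on `this` is 1 here
        return inner();   // same thread re-enters: counter goes to 2, no deadlock
    }

    public synchronized int inner() {
        depth++;          // counter is 2; each method exit decrements it
        return depth;
    }

    public static void main(String[] args) {
        System.out.println(new Reentrant().outer());
    }
}
```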
Lock
In Java multithreading, all locks are object-based; every Java object can serve as a lock. You might object: what about class locks? But a class lock is just the lock on the Class object, which is itself a (special) Java object, so all locks in Java are still object-based.
Before Java 6, all locks were "heavyweight" locks, and heavyweight locks cause a problem: if a program frequently acquires and releases locks, performance suffers badly. To optimize this, the concepts of the "biased lock" and the "lightweight lock" were introduced. So in Java 6 and later, an object has four lock states: unlocked, biased, lightweight, and heavyweight.
Of the four states, the unlocked state is the easiest to understand: no lock means no lock, and any thread may attempt to acquire it, so I'll skip over it here.
As competition appears, lock upgrading happens readily, but downgrading a lock requires very strict conditions.
One extra remark here: many articles claim that once a lock is upgraded it can never be downgraded. In fact, the HotSpot JVM does support lock demotion.
Lock demotion occurs during Stop The World: when the JVM reaches a safepoint, it checks for idle locks and, if it finds any, attempts to downgrade them.
If Stop The World and safepoints are new to you, here is a brief explanation; readers should explore the details on their own, since JVM internals are not the focus of this article.
In the Java virtual machine, traditional garbage collection algorithms take a simple, blunt approach: Stop-the-world, which is realized through the safepoint mechanism. What is a safepoint? While a Java thread executes native code, as long as that code does not access Java objects, call Java methods, or return into Java, the Java virtual machine's stack does not change, so that stretch of native code can serve as a safepoint. When the Java virtual machine receives a Stop-the-world request, it waits for all threads to reach a safepoint before letting the requesting thread do its exclusive work.
Next, let's walk through the individual lock types and how lock upgrading works.
Java object header
As mentioned at the start, Java locks are object-based. So how does an object tell the program "I am a lock"? Through the Java object header: every Java object has one. A non-array object uses 2 machine words for its header; an array uses 3. On a 32-bit processor a word is 32 bits wide; on a 64-bit processor, 64 bits. The object header's contents are as follows:
| Length | Content | Description |
| --- | --- | --- |
| 32/64 bit | Mark Word | stores the object's hashCode or lock information |
| 32/64 bit | Class Metadata Address | pointer to the object's type data |
| 32/64 bit | Array Length | the length of the array (arrays only) |
Let's mainly look at the contents of Mark Word:
| Lock state | 29 bit / 61 bit | 1 bit (biased?) | 2 bit lock flag |
| --- | --- | --- | --- |
| Unlocked | object hashCode, etc. | 0 | 01 |
| Biased | ID of the thread the lock is biased toward | 1 | 01 |
| Lightweight | pointer to the Lock Record in the stack | (not used) | 00 |
| Heavyweight | pointer to the mutex (monitor) | (not used) | 10 |
| GC mark | | (not used) | 11 |
As the table shows: with a biased lock, the Mark Word stores the ID of the thread the lock is biased toward; with a lightweight lock, it stores a pointer to the Lock Record in the thread's stack; with a heavyweight lock, it stores a pointer to the monitor object on the heap.
Biased lock
After extensive research, the HotSpot authors found that in most cases a lock is not only never contended by multiple threads, but is in fact always acquired repeatedly by the same thread.
Based on this observation, the concept of the biased lock was introduced.
So what is a biased lock? In plain terms: the lock records which thread it is biased toward. When that thread requests the lock again and finds the bias still points at it, there is no resource competition at all, so there is no need to go through the locking/unlocking machinery; it just proceeds. But if the bias check fails, another thread is competing for the resource, and we fall back to the regular process.
Take a look at the specific implementation principle:
When a thread enters the synchronized block for the first time, the ID of the thread the lock is biased toward is stored both in the object header and in the lock record in the stack frame. The next time the thread enters the block, it checks whether its own thread ID is stored in the lock's Mark Word. If it is, the thread already holds the lock and needs no CAS operations to lock and unlock when entering and leaving the block. If it is not, another thread is competing for the biased lock, and the newcomer tries to use CAS to replace the thread ID in the Mark Word with its own. Two outcomes are possible:
If the replacement succeeds, the previous thread no longer exists; the thread ID in the Mark Word becomes the new thread's ID, the lock is not upgraded, and it remains a biased lock.
If the replacement fails, the previous thread still exists. That thread is paused, the biased flag is set to 0 and the lock flag bits to 00, the lock is upgraded to a lightweight lock, and the threads then compete for it the lightweight way.
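The owner-check-then-CAS fast path can be modeled in plain Java with an AtomicReference. This is a toy model of my own, not the JVM's actual native implementation, but it shows why the biased case is cheap: the common path is a plain read, with a CAS only when the bias must change.

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy sketch of the biased-lock fast path: a plain read when the stored
// owner is the current thread, one CAS attempt only when re-biasing.
public class BiasedLockSketch {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    /** Returns true if the current thread may enter. */
    public boolean tryEnter() {
        Thread me = Thread.currentThread();
        if (owner.get() == me) {
            return true;  // already biased toward us: no atomic operation needed
        }
        // Try to bias an unowned lock toward this thread. Success means the
        // previous owner is gone; failure is where real revocation would begin.
        return owner.compareAndSet(null, me);
    }
}
```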
Revoking the biased lock
A biased lock uses a release-only-on-contention mechanism. In other words, as long as no one competes with me for the lock, the lock stays mine; only when another thread tries to compete for the biased lock do I release it.
When upgrading from a biased lock to a lightweight lock, the thread that owns the biased lock is first suspended and the lock's identity is reset. This sounds simple but is expensive, because:
First, the thread that owns the lock must be stopped at a safepoint.
Then its stack is walked; if lock records exist, both the lock records and the Mark Word must be fixed up into the unlocked state.
Finally, the stopped thread is woken up and the biased lock is upgraded to a lightweight lock.
You might think it's just a matter of swapping in a lightweight lock, but it isn't that simple.
Upgrading from a biased lock to a lightweight lock is very resource-intensive. If the locks in your application are usually contended, biased locking becomes a burden; in that case you can disable it with the JVM flag -XX:-UseBiasedLocking, and the program will then start out in the lightweight-lock state instead.
Finally, the process is summarized in a diagram (figure not reproduced here).
Lightweight lock
If multiple threads acquire the same lock but at different times, i.e. without actual contention, the JVM uses lightweight locks to avoid blocking and waking threads.
Lightweight lock acquisition
The JVM creates a space in the current thread's stack frame for storing the lock record. When a thread acquires a lock and finds it is a lightweight lock, it copies the lock object's Mark Word into this space; the copy is called the Displaced Mark Word. The thread then uses CAS to try to replace the lock's Mark Word with a pointer to its lock record.
If the replacement succeeds, the current thread has acquired the lock, and the lock stays in the lightweight state.
What if the replacement fails? That means the Mark Word has already been replaced with a pointer to another thread's lock record, so the thread tries to acquire the lock by spinning. (Spinning means the thread keeps retrying the acquisition, usually in a loop.)
Spinning consumes CPU: if the lock never becomes available, the thread spins forever and precious CPU cycles are wasted.
The simplest fix is to cap the number of spins; for example, loop 10 times, and if the lock still hasn't been acquired, enter the blocked state.
But the JDK takes a cleverer approach: adaptive spinning. If a thread's spin succeeded last time, it is allowed to spin longer next time, on the theory that recent success predicts future success. Conversely, if the spin failed, the next spin is shortened: having seen the omens of failure, the thread bails out early rather than keep banging its head against the wall.
Once spinning fails, the thread blocks and the lock is upgraded to a heavyweight lock.
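The spin-then-block progression can be sketched with an AtomicBoolean. This is an illustrative toy with an arbitrary SPIN_LIMIT I chose; the JVM's real adaptive spinning lives in native code:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Toy lock: spin a bounded number of times, then fall back to
// yielding the CPU instead of burning it in a tight loop.
public class SpinThenBlock {
    private static final int SPIN_LIMIT = 10;   // arbitrary cap for illustration
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (held.compareAndSet(false, true)) {
                return;                         // acquired while spinning
            }
        }
        // Spin budget exhausted: park briefly between retries so the
        // thread stops consuming CPU while it waits.
        while (!held.compareAndSet(false, true)) {
            LockSupport.parkNanos(1_000);
        }
    }

    public void unlock() {
        held.set(false);
    }
}
```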
Lightweight lock release:
On release, the current thread uses a CAS operation to copy the contents of the Displaced Mark Word back into the lock's Mark Word. If there is no contention, the copy succeeds. If another thread has escalated the lightweight lock to a heavyweight lock after spinning too long, the CAS fails; the thread then releases the lock and wakes up the blocked threads.
Again, a diagram summarizes the process (figure not reproduced here).
Heavyweight lock
A heavyweight lock relies on the operating system's mutex. Thread state transitions in the operating system take relatively long (the OS must switch from user mode to kernel mode, which is expensive), so heavyweight locks are inefficient; on the plus side, blocked threads consume no CPU.
Every object can act as a lock, so what happens when multiple threads request the same object's lock at once?
The object lock maintains several states (queues) to sort out the requesting threads:
Contention List: all threads requesting a lock will be placed in the contention queue first
Entry List: threads in Contention List that are eligible to be candidates are moved to Entry List
Wait Set: threads that are blocked by calling the wait method will be placed in the Wait Set
OnDeck: at most one thread is competing for locks at any one time. This thread is called OnDeck.
Owner: the thread that acquired the lock is called Owner
!Owner: the thread that has just released the lock
When a thread tries to acquire the lock and the lock is occupied, the thread is wrapped in an ObjectWaiter object and inserted at the head of the Contention List queue, and the park function is then called to suspend the current thread.
When a thread releases the lock, it picks a thread from Contention List or Entry List to wake up
If a thread calls Object.wait after acquiring the lock, it is placed in the Wait Set; when it is woken by Object.notify, it is moved from the Wait Set back to the Contention List or Entry List.
Note, however, that when wait or notify is called on a lock object whose current state is biased or lightweight, the lock is first inflated to a heavyweight lock.
Thank you for reading. That concludes "how to understand the relationship between synchronized and lock". After studying this article you should have a deeper understanding of the topic; the specifics, of course, still need to be verified in practice.