2025-01-16 Update From: SLTechnology News&Howtos > Development
Shulou (Shulou.com) 06/01 Report
This article explains how to understand Java's concurrent locks. The approach taken here is simple and practical, so let's walk through the principles behind them.
There are two kinds of concurrent locks in Java: implicit locks and explicit locks. The implicit lock is the familiar synchronized keyword. Explicit locks are built around two interfaces, Lock and ReadWriteLock, whose main implementations are ReentrantLock and ReentrantReadWriteLock; both are based on AQS (AbstractQueuedSynchronizer). CAS is also sometimes described as a lock, and it plays an important role in many concurrency-related classes, including AQS.
We only need to understand the principles of synchronized and AQS to grasp the nature and limitations of concurrent locks. This article therefore focuses on principles and does not cover usage and features in depth.
Distinguishing the concepts
Here are some conceptual distinctions between kinds of locks. These describe the nature of locks, not concrete implementations.
Pessimistic lock and optimistic lock
A pessimistic lock is the same thing as an exclusive lock: it assumes a conflict is certain, so it blocks other waiting threads while the lock is held. This is simple and safe, but suspending and resuming threads requires a switch to kernel mode, which incurs significant performance overhead. The representative pessimistic lock is synchronized. In practice, however, there is no conflict most of the time, so pessimistic locking wastes a lot of work. An optimistic lock is different: it assumes there will be no conflict, attempts the operation first, and handles failure afterwards (usually by retrying in a loop). It does not block other threads and involves no context switch, so its overhead is low. The representative implementation is CAS.
Fair lock and unfair lock
With a fair lock, each thread checks whether other threads are queued before trying to acquire the lock, and locks are granted in queue order. With an unfair lock, a thread ignores the queue and tries to grab the lock directly; only if that fails does it join the end of the queue. Notably, in the AQS implementation, once a thread has been queued, it must wait its turn even under an unfair lock.
Reentrant lock and non-reentrant lock
With a reentrant lock, a thread that has already acquired a lock can enter any other code block guarded by the same lock without blocking. A non-reentrant lock is the opposite: the thread would block on a lock it already holds.
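Reentrancy is easy to demonstrate with synchronized, which is a reentrant lock. A minimal sketch (the class and method names are illustrative):

```java
public class ReentrantDemo {
    // Both methods are guarded by the same lock (this).
    public synchronized String outer() {
        return inner(); // re-enters the lock this thread already holds;
                        // a non-reentrant lock would deadlock here
    }

    public synchronized String inner() {
        return "inner reached";
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // inner reached
    }
}
```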
Synchronized keyword
synchronized is an exclusive lock. When it decorates a static method, the lock is the class object, such as Object.class. When it decorates an instance method, the lock is the current object, that is, this. When it decorates a code block, the lock is the object in parentheses. Each object has a lock and a wait queue; the lock can be held by only one thread at a time, and other threads that need it must block and wait. When the lock is released, one thread is taken from the queue and woken up. Which thread is woken is not specified, so fairness is not guaranteed.
Class lock and object lock
When synchronized decorates a static method, it locks the class object, such as Object.class. When it decorates an instance method, it locks the current object, that is, this. Multiple threads can execute the same synchronized instance method at the same time, as long as they do so on different objects.
synchronized locks objects rather than code: as long as threads access synchronized methods of the same object, even different methods are executed one after another.
Also note that a synchronized method does not prevent non-synchronized methods from running at the same time, so to protect a variable you generally need to add synchronized to every method that accesses it.
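Putting this together, here is a sketch of protecting a variable by synchronizing every method that touches it (the SafeCounter class is illustrative):

```java
public class SafeCounter {
    private int count; // guarded by the intrinsic lock on this

    // Every method touching count is synchronized, including the reader;
    // an unsynchronized getCount() could observe a stale value.
    public synchronized void incr() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }
}
```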
Implementation principle
synchronized is implemented on top of the Java object header and the monitor mechanism.
Java object header
An object contains three parts in memory: object header, instance data, and alignment padding. The Java object header contains two parts:
Class Metadata Address (type pointer): a pointer to the class metadata. The virtual machine uses it to determine which class the object is an instance of.
Mark Word (mark field): stores runtime data of the object itself, including the hash code, GC generational age, lock status flags, and so on.
Monitor
Mark Word has a field that points to the monitor object. The monitor records the thread holding the lock, the queue of waiting threads, and other information. The lock and wait queue that every object has, mentioned earlier, are implemented here. The monitor object is implemented in C++ and has three key fields:
_owner records the thread that currently holds the lock.
_EntryList is a queue of all threads blocked waiting for the lock.
_WaitSet is also a queue; it records threads that called wait() and have not yet been notified.
The operation mechanism of Monitor is as follows:
When multiple threads compete for the lock, they first enter the _EntryList queue. The thread that succeeds is marked as _owner; the other threads keep blocking in the queue.
If the owner thread calls wait(), it releases the object lock and enters _WaitSet to wait to be woken up. _owner is cleared, and the threads in _EntryList compete for the lock again.
If the owner thread finishes executing, the lock is released, _owner is cleared, and the threads in _EntryList compete for the lock again.
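This mechanism is exactly what the standard wait/notify idiom exercises. A minimal sketch, with comments mapping each call to the monitor fields above (the class and field names are illustrative):

```java
public class MonitorDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    public void produce() {
        synchronized (lock) {   // the winning thread becomes the monitor's _owner
            ready = true;
            lock.notify();      // move one thread from _WaitSet back into contention
        }
    }

    public void consume() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {
                lock.wait();    // release the lock and park in _WaitSet
            }
        }
    }
}
```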
How the JVM handles synchronized
Now that you have seen the monitor mechanism, how does the virtual machine connect synchronized to the monitor? There are two cases:
If a code block is synchronized, a monitorenter instruction is emitted directly before the block and a monitorexit instruction after it. This is called explicit synchronization.
If a method is synchronized, the virtual machine sets the ACC_SYNCHRONIZED flag on the method. When the method is called, the JVM checks this flag to decide whether it is a synchronized method.
How the JVM optimizes synchronized
synchronized is a heavyweight lock; because it is expensive, the virtual machine applies a number of optimizations.
Spin lock and adaptive spin
In many applications, a lock is held only briefly, and suspending and resuming a thread is not worth it for such a short wait. Instead, the waiting thread can run a busy loop for a bounded number of iterations, trying to acquire the lock on each pass. This technique, called spin locking, saves the cost of switching threads but still occupies the processor. Since JDK 1.4.2 the spin count can be controlled with a parameter. JDK 1.6 also introduced adaptive spinning, which is no longer limited to a fixed count but is tuned by the previous spin time on the same lock and the state of the lock's owner.
Lock elimination
At runtime, if the virtual machine finds that a locked piece of code cannot possibly touch shared data, it removes the lock.
Lock coarsening
When the virtual machine detects that a series of piecemeal operations all lock the same object, it extends the lock to cover the entire sequence, as with consecutive StringBuffer append operations.
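For example, each StringBuffer.append call below is itself synchronized on the same object, making the sequence a candidate for coarsening into one lock around all three calls. A small illustrative sketch (whether coarsening actually happens is up to the JIT):

```java
public class CoarseningDemo {
    // Three consecutive append() calls lock the same StringBuffer;
    // the JIT may merge the per-call locks into a single lock
    // spanning the whole sequence.
    public static String build() {
        StringBuffer sb = new StringBuffer();
        sb.append("a");
        sb.append("b");
        sb.append("c");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build()); // abc
    }
}
```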
Lightweight lock
For most locks, there is no competition for the entire synchronization cycle. If there is no competition, lightweight locks can use CAS operations to avoid the overhead of mutexes.
Biased lock
The core idea of the biased lock is that once a thread acquires the lock, the lock enters biased mode, and when that same thread requests the lock again, it can be acquired without any synchronization.
CAS
Operation model
CAS is short for compare and swap. It refers to an operating mechanism, not a specific class or method. On the Java platform this operation is wrapped in the Unsafe class, and a call looks like this:
unsafe.compareAndSwapInt(this, valueOffset, expect, update);
It takes three operands: the memory location V, the expected old value A, and the new value B. The operation first reads the value at the memory location and compares it with the expected value A. If they are equal, it changes the value at that location to the new value B and returns true. If they are not equal, another thread has modified the value in the meantime, so nothing is changed and false is returned.
This mechanism avoids concurrency conflicts without blocking other threads and performs much better than an exclusive lock. CAS is widely used in Java's atomic classes and concurrency packages.
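A minimal sketch of CAS as exposed through the public API, using AtomicInteger.compareAndSet rather than Unsafe directly (the CasDemo class is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    static String demo() {
        AtomicInteger value = new AtomicInteger(5);
        boolean first = value.compareAndSet(5, 6);   // succeeds: the value matches the expected 5
        boolean second = value.compareAndSet(5, 7);  // fails: the value is now 6, not 5
        return first + " " + second + " " + value.get();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true false 6
    }
}
```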
Retry mechanism (cyclic CAS)
Many articles say that a failed CAS operation simply retries until it succeeds, which is imprecise.
First, CAS itself implements no failure handling; it only returns a boolean indicating success or failure, and the caller decides what to do next. Retrying is merely the most common choice.
Second, the phrase is easily misunderstood as re-running the same compare-and-swap. In fact, once a CAS has failed, the value in memory has already changed; if you keep the old expected value, every subsequent comparison will fail. The new value must be recomputed as well.
So the correct approach is to run the CAS in an infinite loop: on success, exit the loop and return; on failure, re-read the value from memory, recompute the new value, and call CAS again. The source of AtomicInteger makes this clear:
public final int incrementAndGet() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return next;
    }
}
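The same loop pattern works for any read-modify-write operation. A sketch of the same idea (multiplyAndGet is an illustrative helper, not a JDK method):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasRetry {
    // Atomically multiply the value, using the loop-CAS pattern:
    // re-read and recompute on every failed attempt.
    static int multiplyAndGet(AtomicInteger value, int factor) {
        for (;;) {
            int current = value.get();   // fresh read each iteration
            int next = current * factor; // recompute the new value
            if (value.compareAndSet(current, next))
                return next;
        }
    }

    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(3);
        System.out.println(multiplyAndGet(v, 4)); // 12
    }
}
```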
Underlying implementation
CAS consists of three steps: read, compare, and modify. The comparison detects conflicts, but if other threads can still modify the value between the comparison and the modification, CAS cannot guarantee correctness. So the crucial point is making the compare and the modify atomic as a pair.
At the bottom, CAS is implemented with the cmpxchg instruction of the CPU, the compare-and-exchange instruction of the x86 (Intel) architecture. On multi-core machines this instruction alone does not guarantee atomicity, so it must be prefixed with the lock instruction, which lets one CPU core hold exclusive access to a region of memory for the duration of the operation. How is that achieved?
A processor generally achieves this in one of two ways: a bus lock or a cache lock. In a multi-core architecture, cores do not access memory directly but through a shared bus. A bus lock locks this bus so that no other core can access memory at all; this is very expensive, since it stalls the other cores. A cache lock does not lock the bus, only a particular region of memory: when a core has read that region into its own cache, it locks the cache line corresponding to the region, and while it is locked, other cores cannot operate on that region of memory.
It is in this way that CAS makes the compare-and-exchange operation atomic. Note that CAS only guarantees the atomicity of the operation, not the visibility of the variable, so the variable still needs the volatile keyword.
ABA problem
As mentioned above, CAS guarantees the atomicity of compare-and-exchange. But between the read and the comparison, other cores can still modify the value. If another core changes A to B, CAS detects it. If that core changes A to B and then back to A, however, CAS concludes that the value was never changed and proceeds, which may not match reality. This is the ABA problem, and the usual solution is to attach a version number to the value.
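The version-number fix is available in the JDK as AtomicStampedReference. A minimal sketch (the AbaDemo class is illustrative; small Integer values are used so autoboxing yields cached objects that compare equal by reference):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // Returns whether a CAS with a stale stamp succeeds after an A -> B -> A cycle.
    static boolean staleSwapSucceeds() {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int startStamp = ref.getStamp(); // 0

        // Another thread's A -> B -> A cycle, bumping the stamp each time.
        ref.compareAndSet(1, 2, ref.getStamp(), ref.getStamp() + 1); // A -> B
        ref.compareAndSet(2, 1, ref.getStamp(), ref.getStamp() + 1); // B -> A

        // The value is 1 again, but the stamp is now 2, so this CAS fails.
        return ref.compareAndSet(1, 3, startStamp, startStamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(staleSwapSucceeds()); // false
    }
}
```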
Reentrant lock ReentrantLock
ReentrantLock implements in pure Java the same semantics as synchronized, including reentrancy, memory visibility, and mutual exclusion for race conditions. Compared with synchronized, it has the following additional advantages:
Supports acquiring the lock in a non-blocking way
Can respond to interrupts while waiting
Supports a time limit on lock acquisition
Supports both fair and unfair locks
The basic usage is as follows:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final Lock lock = new ReentrantLock();
    private volatile int count;

    public void incr() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public int getCount() {
        return count;
    }
}
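The advantages listed above can be sketched with tryLock, which supports both the non-blocking and the timed forms (the TryLockDemo class is illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        // Non-blocking attempt: returns immediately with a boolean.
        if (lock.tryLock()) {
            try {
                System.out.println("got the lock without blocking");
            } finally {
                lock.unlock();
            }
        }

        // Timed attempt: waits at most 100 ms, and can be
        // interrupted while waiting.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("got the lock within the timeout");
            } finally {
                lock.unlock();
            }
        }
    }
}
```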
Inside ReentrantLock there are two inner classes, FairSync and NonfairSync, corresponding to fair and unfair locks. Both inherit from Sync, which in turn inherits from AQS.
AQS
AQS is short for AbstractQueuedSynchronizer. It has two important members:
The member variable state, which represents the current state of the lock. It is declared volatile to guarantee memory visibility, and all operations on it use CAS. A state of 0 means no thread holds the lock. A thread that acquires the lock increments state by 1 and decrements it by 1 when releasing; acquiring and releasing repeatedly increments and decrements it accordingly.
A doubly linked list in which every node except the head records a waiting thread's information. This list is FIFO.
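To see how these two members cooperate, here is a minimal non-reentrant mutex built by subclassing AQS. This is only an illustrative sketch of the state/queue contract, not how ReentrantLock is actually written:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal, non-reentrant mutex on top of AQS: state 0 = free, 1 = held.
// AQS supplies the wait queue, parking, and wake-up; we only define
// what "acquire" and "release" mean.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // state 0 -> 1 via CAS; success means we own the lock
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            setExclusiveOwnerThread(null);
            setState(0); // volatile write; only the owner ever releases
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }  // enqueue and park on failure
    public void unlock() { sync.release(1); }  // wake the queued successor
}
```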
Let's look at how AQS works, using the code of ReentrantLock's unfair lock.
Requesting the lock
There are three possibilities when a lock is requested:
If no thread holds the lock, the request succeeds and the current thread acquires it directly.
If the current thread already holds the lock, CAS is used to increment state by 1, recording that it has acquired the lock again; state is decremented by 1 on each release. This is how reentrancy is implemented.
If the lock is held by another thread, the current thread adds itself to the wait queue.
final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread()); // no thread holds the lock: acquire it directly, case 1
    else
        acquire(1);
}

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&          // tryAcquire checks whether the holding thread is the current thread, case 2
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) // add the current thread to the queue, case 3
        selfInterrupt();
}
Creating a Node and adding it to the linked list
If the thread fails to win the lock, it enters the wait queue. The queue has a head node by default that contains no thread information. In case 3 above, addWaiter creates a Node that holds a reference to the current thread and appends it to the tail of the list. A Node also has a member variable waitStatus representing the thread's wait state, with an initial value of 0. Two other values matter here:
CANCELLED, value 1, indicates a cancelled node: this thread no longer wants the lock and should be removed from the queue.
SIGNAL, value -1, indicates that the next node is waiting to be woken. Note: the next node, not the current one.
The operation that appends to the tail of the list uses the CAS-plus-infinite-loop pattern, which is very representative. Take a look:
Node node = new Node(mode);
for (;;) {
    Node oldTail = tail;
    if (oldTail != null) {
        U.putObject(node, Node.PREV, oldTail);
        if (compareAndSetTail(oldTail, node)) {
            oldTail.next = node;
            return node;
        }
    } else {
        initializeSyncQueue();
    }
}
As you can see, CAS is called inside an infinite loop. If several threads call the method at the same time, exactly one succeeds per iteration and the rest go around the loop and try again, so N threads take at most N iterations. This is how a concurrent structure is implemented in lock-free style.
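The same CAS-plus-loop pattern appears outside AQS as well. A standalone illustrative sketch is a simple lock-free stack (often called a Treiber stack) built on AtomicReference:

```java
import java.util.concurrent.atomic.AtomicReference;

// The CAS-plus-loop pattern in a standalone form; names are illustrative.
public class LockFreeStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        for (;;) {                       // retry until our CAS wins
            Node<T> oldTop = top.get();  // fresh read each iteration
            node.next = oldTop;
            if (top.compareAndSet(oldTop, node))
                return;                  // exactly one thread succeeds per round
        }
    }

    public T pop() {
        for (;;) {
            Node<T> oldTop = top.get();
            if (oldTop == null)
                return null;             // empty stack
            if (top.compareAndSet(oldTop, oldTop.next))
                return oldTop.value;
        }
    }
}
```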
Suspend and wait
If the node's predecessor is the head node, the thread tries once more to acquire the lock; if it succeeds, the node is removed from the queue and the method returns. If not, it moves on to the next step.
It then checks the predecessor's waitStatus: if it is SIGNAL, shouldParkAfterFailedAcquire returns true and LockSupport.park() is called to suspend the thread.
If it is CANCELLED, the cancelled predecessor is removed.
For any other value, the predecessor's waitStatus is set to SIGNAL and the next iteration of the loop begins.
As you can see, a thread gets at most a couple of attempts before it is suspended to wait.
final boolean acquireQueued(final Node node, int arg) {
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                return interrupted;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } catch (Throwable t) {
        cancelAcquire(node);
        throw t;
    }
}
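The park/unpark primitive that suspends and resumes queued threads can be exercised on its own. A minimal sketch (the ParkDemo class is illustrative):

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            System.out.println("parking");
            LockSupport.park();        // suspend here, like a queued AQS thread
            System.out.println("unparked");
        });
        waiter.start();

        Thread.sleep(100);             // give the waiter time to park
        LockSupport.unpark(waiter);    // what release() does for the successor
        waiter.join();
    }
}
```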
Releasing the lock
First, tryRelease is called; it is implemented by the subclass and is very simple: if the current thread is the one holding the lock, state is decremented by 1. If state is still greater than 0 afterwards, the current thread still holds the lock and false is returned. If state is 0, no thread holds the lock any more, true is returned, and release proceeds to the next step.
Then, if the head node's waitStatus is nonzero, LockSupport.unpark() is called to wake the head's successor, which is the first thread in the wait queue. This reflects the FIFO character of AQS: even with an unfair lock, once threads have entered the queue, they are woken in order.
public final boolean release(int arg) {
    if (tryRelease(arg)) {    // decrement state by 1
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

private void unparkSuccessor(Node node) {
    int ws = node.waitStatus;
    if (ws < 0)
        node.compareAndSetWaitStatus(ws, 0);
    // Normally wake node.next, but if it is null or cancelled,
    // scan backwards from the tail for the closest non-cancelled node.
    Node s = node.next;
    if (s == null || s.waitStatus > 0) {
        s = null;
        for (Node p = tail; p != node && p != null; p = p.prev)
            if (p.waitStatus <= 0)
                s = p;
    }
    if (s != null)
        LockSupport.unpark(s.thread);
}