
What are the knowledge points of locks in Java?


Today I will share with you the key knowledge points about locks in Java. The content is detailed and the logic is clear. I believe most people do not know this topic very well yet, so I am sharing this article for reference; I hope you gain something after reading it, so let's learn about it together.

Optimistic locks and pessimistic locks

pessimistic lock

Pessimistic locks correspond to pessimistic people in life: a pessimist always assumes that things will go in the wrong direction.

For example, if there is only one stall in the restroom, a pessimistic lock locks the door the moment it goes in, so everyone else can only wait outside. This state is called "blocked".

Back in the code world, when shared data is protected by a pessimistic lock, a thread assumes, before every operation on that data, that other threads may also be operating on it, so it locks before each operation. Other threads that want to operate on the data cannot acquire the lock and can only block.

synchronized and ReentrantLock are typical pessimistic locks in Java, and container classes that use the synchronized keyword, such as Hashtable, are also applications of pessimistic locking.
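
As a concrete illustration (a minimal sketch added here, not code from the original article; the PessimisticCounter class is hypothetical), both synchronized and ReentrantLock lock before every access to the shared state:

import java.util.concurrent.locks.ReentrantLock;

public class PessimisticCounter {

    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    // Pessimistic: lock before every operation, assuming other threads may also be writing.
    public void incrementWithLock() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();   // always release in finally
        }
    }

    // The synchronized keyword achieves the same pessimistic effect.
    public synchronized void incrementSynchronized() {
        count++;
    }
}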

optimistic lock

Optimistic locks correspond to optimistic people in life: an optimist always assumes that things will go in the right direction.

For example, if there is only one stall in the restroom, an optimistic lock thinks: this place is deserted, nobody will fight me for the stall, and locking the door every time is a waste of time, so it doesn't lock at all. You see, optimistic locks are born optimistic!

Back in the code world, an optimistic lock does not lock before operating on the data; only at update time does it check whether another thread has modified the data in the meantime.

Optimistic locking can be implemented with a version number mechanism or the CAS algorithm. The atomic classes under the java.util.concurrent.atomic package in Java are implemented using CAS, which is an optimistic lock.
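
A minimal sketch of the CAS approach (the OptimisticCounter class is hypothetical and added for illustration; AtomicInteger and compareAndSet are the real JDK API):

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {

    private final AtomicInteger value = new AtomicInteger(0);

    // Read the current value without locking, then try to publish the update
    // with compareAndSet; retry if another thread changed the value in between.
    public int increment() {
        int current;
        int next;
        do {
            current = value.get();
            next = current + 1;
        } while (!value.compareAndSet(current, next));
        return next;
    }
}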

Two types of lock usage scenarios

Pessimistic locks and optimistic locks are neither superior nor inferior to each other; each has scenarios it is suited for.

Optimistic locking is suitable for scenarios with few writes (few conflicts): because there is no need to acquire and release locks, the locking overhead is eliminated, which improves throughput.

If writes are frequent and reads are few, that is, conflicts are serious and competition between threads is intense, optimistic locking will cause threads to retry continuously, which may actually reduce performance. Pessimistic locking is more appropriate in that scenario.

Exclusive locks and shared locks

exclusive lock

An exclusive lock is one that can only be held by one thread at a time. If one thread places an exclusive lock on data, no other thread can place any type of lock on that data. A thread that acquires an exclusive lock can read and modify data.

In the JDK, synchronized and the Lock implementation classes in the java.util.concurrent (JUC) package, such as ReentrantLock, are exclusive locks.

shared lock

A shared lock is one that can be held by multiple threads. If a thread places a shared lock on data, other threads can only place shared locks on the data, not exclusive locks. The thread that acquires the shared lock can only read the data and cannot modify it.

In the JDK, the read lock of ReentrantReadWriteLock is a shared lock.

Mutex locks and read-write locks

mutex

A mutex lock is the conventional implementation of an exclusive lock: it allows only one visitor to access a resource at a time, and it is unique and exclusive.

Only one thread can hold the mutex lock at a time; other threads can only wait.

read-write lock

A read-write lock is a concrete implementation of a shared lock. A read-write lock manages a pair of locks: one read lock and one write lock.

Read locks can be held by multiple threads simultaneously as long as no write lock is held, whereas write locks are exclusive. Write locks have higher priority than read locks, and a thread that acquires a read lock must be able to see the updates made by a previously released write lock.

Read-write locks allow more concurrency than mutex locks: there can be only one writing thread at a time, but multiple threads can read concurrently.

The JDK defines an interface for read-write locks: ReadWriteLock.

public interface ReadWriteLock {
    /**
     * Obtain the read lock.
     */
    Lock readLock();

    /**
     * Obtain the write lock.
     */
    Lock writeLock();
}

ReentrantReadWriteLock implements the ReadWriteLock interface. The specific implementation will not be expanded on here; a source-code analysis will follow in a later article.
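
A minimal usage sketch of ReentrantReadWriteLock (the CachedValue class is illustrative, not from the article): several reader threads can hold the read lock at once, while the write lock is exclusive.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {

    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    // Multiple reader threads can hold the read lock at the same time.
    public int read() {
        rwLock.readLock().lock();
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The write lock is exclusive: it waits until all readers have released.
    public void write(int newValue) {
        rwLock.writeLock().lock();
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}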

Fair lock and unfair lock

fair lock

A fair lock means that multiple threads acquire the lock in the order in which they requested it. It is like queuing to buy tickets: whoever comes first buys first, and latecomers join the end of the queue. This is fair.

In Java, a fair lock can be created via the constructor:

/**
 * Create a reentrant lock: true for a fair lock, false for an unfair lock.
 */
Lock lock = new ReentrantLock(true);

unfair lock

An unfair lock means that threads do not necessarily acquire the lock in the order in which they requested it: a thread that requests later may acquire the lock before a thread that requested earlier. In a highly concurrent environment, this may cause priority inversion or starvation (a thread never manages to acquire the lock).

In Java, the synchronized keyword is an unfair lock, and ReentrantLock is unfair by default.

/**
 * Create a reentrant lock: true for a fair lock, false for an unfair lock.
 * The default is an unfair lock.
 */
Lock lock = new ReentrantLock(false);

reentrant lock

A reentrant lock, also known as a recursive lock, means that when a thread acquires the lock in an outer method, it automatically acquires the same lock when entering an inner method that needs it.

In Java, ReentrantLock is, as its name suggests, a reentrant lock, and synchronized is also reentrant.

Key point: one benefit of reentrant locks is that they can avoid deadlock to a certain extent.

Take synchronized as an example and look at the following code:

public synchronized void methodA() throws Exception {
    // Do some magic things
    methodB();
}

public synchronized void methodB() throws Exception {
    // Do some magic things
}

In the above code, methodA calls methodB. When a thread has called methodA and acquired the lock, it does not need to acquire the lock again to enter methodB; this is the reentrant feature. If the lock were not reentrant, the current thread could not execute methodB while holding the lock from methodA, which could cause a deadlock.
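
The same reentrant behavior can be seen with ReentrantLock; here is a minimal sketch (the ReentrantExample class is illustrative; getHoldCount is the real JDK method):

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantExample {

    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();
        try {
            inner();                                      // same thread re-enters the same lock
            System.out.println(lock.getHoldCount());      // prints 1 here, after inner() returned
        } finally {
            lock.unlock();
        }
    }

    private void inner() {
        lock.lock();                                      // does not block: hold count becomes 2
        try {
            // do some work while holding the lock a second time
        } finally {
            lock.unlock();                                // hold count drops back to 1
        }
    }
}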

spinlock

A spin lock means that a thread which fails to acquire the lock is not suspended immediately; instead it executes a busy loop, which is called spinning.

The purpose of spin locks is to reduce how often threads are suspended, because suspending and waking threads are themselves resource-intensive operations.

If the lock is held by another thread for a long time, the current thread will be suspended anyway after spinning for a while, and the busy loop becomes a waste of system resources that reduces overall performance. Spin locks are therefore not suited to situations where locks are held for a long time.

In Java, the AtomicInteger class relies on spinning: its update methods delegate to getAndAddInt in the underlying Unsafe class. Let's look at the code:

public final int getAndAddInt(Object o, long offset, int delta) {
    int v;
    do {
        v = getIntVolatile(o, offset);
    } while (!compareAndSwapInt(o, offset, v, v + delta));
    return v;
}

If the CAS operation fails, the loop re-reads the current value and tries again.
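
To make the spinning idea concrete, here is a minimal spin lock sketch built on AtomicBoolean (an illustrative class added here, not how the JDK implements its locks; Thread.onSpinWait requires JDK 9 or later):

import java.util.concurrent.atomic.AtomicBoolean;

// A minimal, illustrative spin lock: a thread that fails to acquire the lock
// keeps looping (spinning) instead of being suspended. Not production code.
public class SimpleSpinLock {

    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait until the CAS from false to true succeeds.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // JDK 9+ hint to the CPU that we are spinning
        }
    }

    public void unlock() {
        locked.set(false);
    }
}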

Adaptive spinning is also worth understanding.

JDK 1.6 introduced adaptive spinning, which is smarter: the number of spins is no longer fixed but is determined by the spin time of previous acquisitions of the same lock and the state of the lock's owner. If the virtual machine thinks a spin is likely to succeed again, it allows the thread to spin longer; if spinning rarely succeeds, it may skip spinning entirely to avoid wasting processor resources.

segmented lock

A segmented lock is a lock design, not a specific lock.

Segmented locking further refines lock granularity: when an operation does not need to update the entire array, only the segment of the array being operated on is locked.

In Java, ConcurrentHashMap (prior to JDK 1.8) used segmented locking under the hood: the map was divided into Segment objects, each with its own lock, so different segments could be accessed concurrently.
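
As a hedged sketch of the segmented (striped) locking idea, assuming a simple design of my own rather than ConcurrentHashMap's actual implementation:

import java.util.concurrent.locks.ReentrantLock;

// Illustrative "segmented" (striped) locking: the data is split into segments,
// each with its own lock, so threads touching different segments do not block each other.
public class StripedCounter {

    private static final int SEGMENTS = 16;
    private final ReentrantLock[] locks = new ReentrantLock[SEGMENTS];
    private final long[] counts = new long[SEGMENTS];

    public StripedCounter() {
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    public void increment(Object key) {
        int segment = (key.hashCode() & 0x7fffffff) % SEGMENTS;
        locks[segment].lock();           // lock only the segment this key maps to
        try {
            counts[segment]++;
        } finally {
            locks[segment].unlock();
        }
    }
}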

Lock upgrade (no lock | biased lock | lightweight lock | heavyweight lock)

To improve performance and reduce the cost of acquiring and releasing locks, JDK 1.6 introduced four lock states: no lock, biased lock, lightweight lock, and heavyweight lock. A lock escalates through these states as multithreaded contention increases, but it cannot be downgraded.

lock-free

The lock-free state is actually the optimistic locking mentioned above, so it will not be repeated here.

biased locking

A biased lock in Java means the lock is biased toward the first thread that accesses it. If only one thread accesses the locked resource while the program runs and there is no multi-thread competition, that thread does not need to repeatedly acquire the lock; in this case a biased lock is applied to the thread.

Biased locking is implemented by controlling the flag bits of the object's Mark Word. If the current state is biasable, the JVM further checks whether the thread ID stored in the object header matches the current thread's ID; if it matches, the thread enters directly.

lightweight lock

When thread contention becomes more intense, a biased lock is upgraded to a lightweight lock. A lightweight lock assumes that although contention exists, ideally its level is low, so a thread waits for the previous thread to release the lock by spinning.

heavyweight lock

If concurrency increases further, for example a thread spins more than a certain number of times, or one thread holds the lock, another thread is spinning, and a third thread arrives (in short, contention keeps growing), the lightweight lock inflates into a heavyweight lock. A heavyweight lock blocks every thread except the one holding the lock.

Upgrading to a heavyweight lock effectively means a mutex lock: one thread gets the lock, and the remaining threads are blocked, waiting.

In Java, the internal implementation of the synchronized keyword follows exactly this lock-escalation process: no lock --> biased lock --> lightweight lock --> heavyweight lock. This process will be explained in detail in a later article on the principles of synchronized.

Lock optimization techniques (lock coarsening, lock elimination)

lock coarsening

Lock coarsening reduces the number of separate synchronized blocks by expanding the scope of a single block; essentially, it merges multiple lock/unlock requests into one synchronization request.

For example, there is a synchronized block inside the body of a loop, so each iteration performs a lock and unlock operation.

private static final Object LOCK = new Object();

for (int i = 0; i < 100; i++) {
    synchronized (LOCK) {
        // do some magic things
    }
}

After the lock is coarsened, it looks like this:

synchronized (LOCK) {
    for (int i = 0; i < 100; i++) {
        // do some magic things
    }
}

lock elimination

Lock elimination means that the virtual machine's just-in-time compiler detects at runtime that certain locks on shared data cannot possibly be contended and removes those locks.

Let me give an example to make this easier to understand.

public String test(String s1, String s2) {
    StringBuffer stringBuffer = new StringBuffer();
    stringBuffer.append(s1);
    stringBuffer.append(s2);
    return stringBuffer.toString();
}

There is a test method in the above code, which is mainly used to concatenate the string s1 and the string s2.

The three variables s1, s2, and stringBuffer in the test method are all local to the method, and the stringBuffer object never escapes it. Local variables live on the stack, and the stack is thread private, so even if multiple threads call the test method, it is thread safe.

We all know that StringBuffer is a thread-safe class and that its append method is synchronized. But since the test method itself is already thread safe, the virtual machine eliminates these synchronization locks for us to improve efficiency; this process is called lock elimination.

// In StringBuffer.class, the append method is synchronized
public synchronized StringBuffer append(String str) {
    toStringCache = null;
    super.append(str);
    return this;
}

The above is all the content of this article on "What are the knowledge points of Java locks?" Thank you for reading! I hope you have gained something from it. New knowledge is shared every day; if you want to learn more, please follow the industry information channel.
