How to understand the concurrency problem of multithreading

This article explains how to understand the concurrency problems of multithreading. The explanations are simple, clear, and easy to follow; let's study "how to understand the concurrency problem of multithreading" together.

I. Why do multiple threads have concurrency problems?

Why is there a concurrency problem when multiple threads access (read and write) the same variable at the same time?

The Java memory model specifies that all variables are stored in main memory, and each thread has its own working memory.

A thread's working memory holds copies of the variables the thread uses from main memory, and all of the thread's operations on variables must be carried out in working memory rather than by reading and writing main memory directly.

When a thread accesses a variable, it first copies the variable from main memory into its working memory, and writes to the variable are not immediately synchronized back to main memory.

Different threads cannot directly access the variables in each other's working memory; passing variable values between threads requires synchronizing data between each thread's working memory and main memory.
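To make this concrete, here is a minimal sketch (the class and field names are illustrative, not from the original article) of the visibility problem this model can cause: without any synchronization, the reader thread may keep using the stale copy of the flag held in its own working memory.

public class VisibilityDemo {
    private static boolean running = true;   // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy loop: this thread may keep reading the stale copy of `running`
                // from its own working memory and never observe the update
            }
            System.out.println("reader stopped");
        });
        reader.start();

        Thread.sleep(1000);
        running = false;   // written by the main thread; the reader may never see it
        System.out.println("main set running = false");
    }
}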

II. The Java memory model (JMM)

The Java memory model (JMM) governs data synchronization between working memory (local memory) and main memory; it specifies how and when that synchronization is done.

III. Three elements of concurrent programming

Atomicity: an operation cannot be interrupted partway through by the CPU pausing and rescheduling; it either completes entirely or does not execute at all.

Visibility: when multiple threads access the same variable and one thread modifies its value, the other threads can see the modified value immediately.

Orderliness: the program executes in the order written in the code.

IV. How can concurrency problems be solved? (key points)

The following analyzes ways to solve concurrency problems in different scenarios.

1. volatile

1.1 volatile characteristics

Ensure visibility, not atomicity

When a volatile variable is written, the JVM forces the variable in local memory to be flushed to main memory.

This write also invalidates the cached copy in other threads, so they read from main memory; a volatile write is therefore visible to other threads immediately.

Forbids instruction reordering. Instruction reordering is a technique by which compilers and processors reorder instructions to optimize program performance, subject to certain rules:

Instructions with data dependencies will not be reordered; for example, in a = 1; b = a; the two statements depend on each other and will not be reordered.

The single-threaded execution result must not be affected. For example, in a = 1; b = 2; c = a + b; the first two assignments may be reordered with each other, but c = a + b will not be moved before them, because the result must remain 3.

1.2 usage scenarios

If only one thread ever writes a variable and all other threads only read it, the variable can be declared volatile.
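As a minimal sketch of this "one writer, many readers" scenario (the class and field names are illustrative), a volatile stop flag written by one thread and read by the workers is enough, since only visibility is needed:

public class StopFlagDemo {
    private volatile boolean stopped = false;   // single writer, multiple readers

    public void worker() {
        while (!stopped) {
            // do some work; each read sees the latest value of `stopped`
        }
    }

    public void shutdown() {
        stopped = true;   // the only write; visible to all reader threads
    }
}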

1.3 Why should volatile be used in a double-checked-lock singleton?

public class TestInstance {
    private static volatile TestInstance mInstance;

    public static TestInstance getInstance() {          // 1
        if (mInstance == null) {                         // 2
            synchronized (TestInstance.class) {          // 3
                if (mInstance == null) {                 // 4
                    mInstance = new TestInstance();      // 5
                }
            }
        }
        return mInstance;
    }
}

Without volatile there would be a concurrency problem. When thread A executes mInstance = new TestInstance() at comment 5, it breaks down into the following steps:

1. Allocate memory

2. Initialize the object

3. Point mInstance at the allocated memory

If instruction reordering occurs, the execution order can become 1, 3, 2. Right after step 3 finishes, thread B may come in and reach comment 2, find that mInstance is not null, and directly use an object that has not yet been initialized. The volatile keyword is therefore used to forbid this instruction reordering.

1.4 volatile principle

At the JVM level, volatile is implemented with memory barriers, which provide three guarantees:

During instruction reordering, instructions after the barrier cannot be moved in front of it, and instructions before the barrier cannot be moved behind it; that is, by the time execution reaches the memory barrier, all operations before it have completed.

It forces cached modifications to be written to main memory immediately.

The write operation invalidates the corresponding cache lines in other CPUs, after which reads in other threads go to main memory.

1.5 Limitations of volatile

volatile can only guarantee visibility, not atomicity. Write operations are visible to other threads, but volatile cannot solve the problem of multiple threads writing at the same time.
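A minimal sketch of this limitation (the class name is illustrative): count++ is a read-modify-write sequence, so even with volatile, two threads can read the same value and one increment is lost.

public class VolatileCounterDemo {
    private static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++;   // not atomic: read, add 1, write back
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count);   // usually less than 20000
    }
}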

2. Synchronized

2.1 Synchronized usage scenarios

Multiple threads write a variable at the same time.

For example, in ticket selling, the remaining ticket count is 100, and window A and window B each sell one ticket at the same time. If the remaining-ticket variable were only marked volatile, there would be a problem.

Window A reads the remaining count, and window B reads it at the same time. Window A sells one ticket, decrements the count to 99, and flushes it back to main memory; at the same time window B sells one ticket, also gets 99, and flushes that back, so main memory ends up showing 99 tickets instead of 98.

The limitation of volatile mentioned earlier is exactly this situation of multiple threads writing at the same time; in this case Synchronized can generally be used.

Synchronized ensures that only one thread can execute a method or a block of code at a time.
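As a sketch of the ticket example above (the TicketSeller class and method names are made up for illustration), synchronized makes the read-decrement-write sequence atomic, so two windows can no longer both write back 99:

public class TicketSeller {
    private int remaining = 100;   // shared remaining-ticket count

    public synchronized boolean sellOne() {
        // only one window (thread) at a time can run this read-decrement-write
        if (remaining > 0) {
            remaining--;
            return true;
        }
        return false;
    }

    public synchronized int remaining() {
        return remaining;
    }
}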

2.2 Synchronized principle

public class SynchronizedTest {
    public static void main(String[] args) {
        synchronized (SynchronizedTest.class) {
            System.out.println("123");
        }
        method();
    }

    private static void method() {
    }
}

Compile this code with the javac command, then inspect the bytecode with javap -v SynchronizedTest.class. Part of the bytecode is shown below:

public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=3, args_size=1
         0: ldc           #2   // class com/lanshifu/opengldemo/test/SynchronizedTest
         2: dup
         3: astore_1
         4: monitorenter
         5: getstatic     #3   // Field java/lang/System.out:Ljava/io/PrintStream;
         8: ldc           #4   // String 123
        10: invokevirtual #5   // Method java/io/PrintStream.println:(Ljava/lang/String;)V
        13: aload_1
        14: monitorexit
        15: goto          23
        18: astore_2
        19: aload_1
        20: monitorexit
        21: aload_2
        22: athrow
        23: invokestatic  #6   // Method method:()V
        26: return

You can see 4: monitorenter and 14: monitorexit, with the print statement in between.

To run a synchronized code block, the monitorenter instruction is executed first, then the code inside the block, and finally the monitorexit instruction on the way out.

Synchronization with Synchronized hinges on acquiring the object's monitor: a thread that obtains the monitor may continue executing, otherwise it enters the synchronization queue and its state becomes BLOCKED. Only one thread can hold the monitor at a time; when monitorexit is executed, a thread is taken from the queue and acquires the monitor.

Each object lock has a counter: when a thread acquires the lock the counter is incremented by one, and when it releases the lock the counter is decremented by one, so as long as the counter is greater than zero, other threads trying to enter can only wait.
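A minimal sketch of that counter at work (the class and method names are illustrative): synchronized is reentrant, so the same thread can re-acquire the monitor it already holds, with the counter going up on entry and back down as each block exits.

public class ReentrantDemo {
    public synchronized void outer() {
        System.out.println("entered outer, monitor acquired");
        inner();   // same thread re-acquires the monitor it already holds; no blocking
    }

    public synchronized void inner() {
        System.out.println("entered inner, monitor re-acquired (counter incremented again)");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}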

2.3 Synchronized lock upgrades

Synchronized is often thought of as a heavyweight lock, but since Java 1.6 it has been optimized in various ways and is not always that heavy: Java 1.6 introduced biased locks and lightweight locks to reduce the performance cost of acquiring and releasing locks.

Biased lock: in most cases a lock is not contended by multiple threads but is acquired repeatedly by the same thread; biased locks were introduced to make lock acquisition cheaper for that thread.

When thread A accesses a synchronized code block, the id of the current thread is stored in the object header; afterwards, the thread does not need to lock and unlock again when entering and exiting the block.

Lightweight lock: starting from a biased lock, if thread B also accesses the synchronized block and the thread id in the object header does not match, the lock is upgraded to a lightweight lock, which is acquired by spinning.

Heavyweight lock: if thread A and thread B access the synchronized block at the same time, the lightweight lock is upgraded to a heavyweight lock. While thread A holds the heavyweight lock, thread B can only wait in the queue in the BLOCKED state.

2.4 shortcomings of Synchronized

Cannot set lock timeout

The lock cannot be released through code

It is easy to cause deadlock

3. ReentrantLock

The shortcomings mentioned above are that Synchronized cannot set a lock timeout and cannot release the lock through code. ReentrantLock solves these problems.

ReentrantLock is more appropriate where multiple condition variables or highly contended locks are needed. It also provides Condition, which makes waiting and waking threads more flexible; one ReentrantLock can have multiple Condition instances, so it is more extensible.

3.1 use of ReentrantLock

Lock and unlock

ReentrantLock reentrantLock = new ReentrantLock();
System.out.println("reentrantLock -> lock");
reentrantLock.lock();
try {
    System.out.println("sleep for 2 seconds...");
    Thread.sleep(2000);
} catch (InterruptedException e) {
    e.printStackTrace();
} finally {
    reentrantLock.unlock();
    System.out.println("reentrantLock -> unlock");
}

Implement timed lock requests: tryLock

public static void main(String[] args) {
    ReentrantLock reentrantLock = new ReentrantLock();

    Thread thread1 = new Thread_tryLock(reentrantLock);
    thread1.setName("thread1");
    thread1.start();

    Thread thread2 = new Thread_tryLock(reentrantLock);
    thread2.setName("thread2");
    thread2.start();
}

static class Thread_tryLock extends Thread {
    ReentrantLock reentrantLock;

    public Thread_tryLock(ReentrantLock reentrantLock) {
        this.reentrantLock = reentrantLock;
    }

    @Override
    public void run() {
        try {
            System.out.println("try lock: " + Thread.currentThread().getName());
            boolean tryLock = reentrantLock.tryLock(3, TimeUnit.SECONDS);
            if (tryLock) {
                System.out.println("try lock success: " + Thread.currentThread().getName());
                System.out.println("sleep: " + Thread.currentThread().getName());
                Thread.sleep(5000);
                System.out.println("awake: " + Thread.currentThread().getName());
            } else {
                System.out.println("try lock timeout: " + Thread.currentThread().getName());
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println("unlock: " + Thread.currentThread().getName());
            reentrantLock.unlock();
        }
    }
}

Printed log:

try lock: thread1
try lock: thread2
try lock success: thread2
sleep: thread2
try lock timeout: thread1
unlock: thread1
Exception in thread "thread1" java.lang.IllegalMonitorStateException
    at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
    at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
    at com.lanshifu.demo_module.test.lock.ReentranLockTest$Thread_tryLock.run(ReentranLockTest.java:60)
awake: thread2
unlock: thread2

The code above demonstrates tryLock, which sets a maximum wait time for acquiring the lock and returns failure after 3 seconds; the result can be seen in the log. The exception occurs because thread1 failed to acquire the lock, so unlock should not have been called. A corrected sketch follows.
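Here is a sketch of the corrected pattern under the same assumptions as the demo above (the class and helper method are hypothetical): unlock is only called when tryLock actually succeeded.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockFixDemo {
    static void safeTryLock(ReentrantLock reentrantLock) {
        boolean acquired = false;
        try {
            acquired = reentrantLock.tryLock(3, TimeUnit.SECONDS);
            if (acquired) {
                System.out.println("try lock success: " + Thread.currentThread().getName());
                // critical section
            } else {
                System.out.println("try lock timeout: " + Thread.currentThread().getName());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            if (acquired) {
                reentrantLock.unlock();   // never called when tryLock failed
            }
        }
    }
}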

3.2 Condition

public static void main(String[] args) {
    Thread_Condition thread_condition = new Thread_Condition();
    thread_condition.setName("Thread testing Condition");
    thread_condition.start();
    try {
        Thread.sleep(2000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    thread_condition.singal();
}

static class Thread_Condition extends Thread {

    @Override
    public void run() {
        await();
    }

    private ReentrantLock lock = new ReentrantLock();
    public Condition condition = lock.newCondition();

    public void await() {
        try {
            System.out.println("lock");
            lock.lock();
            System.out.println(Thread.currentThread().getName() + ": I am waiting for the notification...");
            condition.await();                        // await and signal correspond to each other
            // condition.await(2, TimeUnit.SECONDS);  // optionally set a wait timeout
            System.out.println(Thread.currentThread().getName() + ": got the notification, I will continue to execute >>>");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            System.out.println("unlock");
            lock.unlock();
        }
    }

    public void singal() {
        try {
            System.out.println("lock");
            lock.lock();
            System.out.println("I want to notify the waiting thread, condition.signal()");
            condition.signal();                       // await and signal correspond to each other
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println("unlock");
            lock.unlock();
        }
    }
}

Running it prints the following log:

lock
Thread testing Condition: I am waiting for the notification...
lock
I want to notify the waiting thread, condition.signal()
unlock
Thread testing Condition: got the notification, I will continue to execute >>>
unlock

The above demonstrates Condition's await and signal; lock must be acquired before calling either.

3.3 Fair lock and unfair lock

Passing true to the ReentrantLock constructor creates a fair lock.

A fair lock means threads acquire the lock in the order in which they requested it: first come, first served. An unfair lock uses a preemption mechanism, so the lock is grabbed in no particular order, which may leave some threads unable to get the lock; hence it is unfair.
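A minimal sketch of the two constructors (variable names are illustrative); isFair() simply reports which mode was chosen:

import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        ReentrantLock fairLock = new ReentrantLock(true);   // fair: first come, first served
        ReentrantLock unfairLock = new ReentrantLock();     // default: unfair, allows barging

        System.out.println(fairLock.isFair());    // true
        System.out.println(unfairLock.isFair());  // false
    }
}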

3.4 Notes on ReentrantLock

ReentrantLock uses lock and unlock to acquire and release locks

unlock should be placed in a finally block, so the lock is released whether the code completes normally or throws an exception.

Before calling Condition's await and signal methods, you must call the lock method to acquire the lock.

4. The concurrent package (java.util.concurrent)

From the analysis above, under heavy concurrency using locks is clearly inefficient, because only one thread can hold the lock at a time while all other threads wait.

Java provides the concurrent package to address this; the following introduces some of its commonly used data structures.

4.1 ConcurrentHashMap

As we all know, HashMap is not a thread-safe data structure. Hashtable makes its get and put methods thread-safe by marking them Synchronized, but it is inefficient under high concurrency and was eventually superseded by ConcurrentHashMap.

ConcurrentHashMap uses segmented locking, with 16 buckets by default. For get and put, the key's hashCode is computed first, then taken modulo 16 to select one of the 16 buckets, and each bucket has its own lock (ReentrantLock). Inside each bucket is a HashMap-like structure (an array plus linked lists, where a list that grows too long is converted to a red-black tree).

Therefore, in theory, up to 16 threads can operate on the map at the same time.
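A minimal usage sketch (independent of the internal locking details described above; names are illustrative), showing thread-safe per-key updates without locking the whole map:

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        // Thread-safe insert-if-absent and atomic per-key update
        counts.putIfAbsent("requests", 0);
        counts.merge("requests", 1, Integer::sum);   // atomically adds 1 to the stored value

        System.out.println(counts.get("requests"));  // 1
    }
}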

4.2 LinkedBlockingQueue

A blocking queue backed by a linked list, which uses two ReentrantLocks internally:

/** Lock held by take, poll, etc */
private final ReentrantLock takeLock = new ReentrantLock();

/** Wait queue for waiting takes */
private final Condition notEmpty = takeLock.newCondition();

/** Lock held by put, offer, etc */
private final ReentrantLock putLock = new ReentrantLock();

/** Wait queue for waiting puts */
private final Condition notFull = putLock.newCondition();

private void signalNotEmpty() {
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lock();
    try {
        notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
}

/** Signals a waiting put. Called only from take/poll. */
private void signalNotFull() {
    final ReentrantLock putLock = this.putLock;
    putLock.lock();
    try {
        notFull.signal();
    } finally {
        putLock.unlock();
    }
}

There is not much source code; briefly, the logic of LinkedBlockingQueue is:

1. When taking data from the queue, if the queue is empty, notEmpty.await() is called and the consumer waits.

2. When data is put into the queue, notEmpty.signal() is called to notify the consumer waiting in step 1 to wake up and continue.

3. When data is taken from the queue, notFull.signal() is called to notify the producer that it can continue producing.

4. When putting data into the queue, if the queue has reached its maximum capacity, notFull.await() is called to wait for a consumer to take data and issue notFull.signal() as in step 3, after which the producer can continue producing.

LinkedBlockingQueue is a typical producer-consumer model; the remaining source-code details are omitted here. A usage sketch follows.
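A minimal producer-consumer sketch using LinkedBlockingQueue (the class name, capacity, and counts are illustrative): put blocks when the queue is full and take blocks when it is empty, matching the notFull/notEmpty logic above.

import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);  // capacity 10

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);   // blocks (notFull.await) when the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    Integer value = queue.take();   // blocks (notEmpty.await) when the queue is empty
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}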

4.3 Atomic operation class: AtomicInteger

CAS (compare and swap) is used internally to ensure atomicity.

Take atomic increment of an int as an example.

AtomicInteger atomicInteger = new AtomicInteger(0);
atomicInteger.incrementAndGet();   // atomic increment

Look at the source code.

/**
 * Atomically increments by one the current value.
 *
 * @return the updated value
 */
public final int incrementAndGet() {
    return U.getAndAddInt(this, VALUE, 1) + 1;
}

U is Unsafe. Look at Unsafe#getAndAddInt.

public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

    return var5;
}

Atomicity is guaranteed by the compareAndSwapInt loop: if another thread changed the value in between, the CAS fails and the loop re-reads and retries.
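As a sketch of the same CAS retry pattern using the public AtomicInteger API (the class and helper method are made up for illustration): read the current value, compute the new one, and retry if another thread changed it in between.

import java.util.concurrent.atomic.AtomicInteger;

public class CasLoopDemo {
    private static final AtomicInteger value = new AtomicInteger(0);

    static int addAndGetManually(int delta) {
        while (true) {
            int current = value.get();
            int next = current + delta;
            if (value.compareAndSet(current, next)) {
                return next;   // CAS succeeded: no other thread changed `value` in between
            }
            // CAS failed: another thread updated `value`; loop and try again
        }
    }

    public static void main(String[] args) {
        System.out.println(addAndGetManually(1));   // 1
    }
}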

Summary

When asked multithreaded concurrency questions in an interview, you can answer like this:

When only one thread writes and the other threads only read, the variable can be marked volatile.

When multiple threads write, Synchronized can be used if contention is not severe. Synchronized does not start out as a heavyweight lock: when contention is light, for example when only one thread accesses the block, it is a biased lock; when multiple threads access it, but not at the same time, it is upgraded to a lightweight lock; and when multiple threads access it at the same time, it is upgraded to a heavyweight lock. So Synchronized is fine when concurrency is not severe. However, it has limitations, such as not being able to set a lock timeout or release the lock through code.

ReentrantLock can release the lock through code and can set a lock timeout.

Under high concurrency, both Synchronized and ReentrantLock are inefficient, because only one thread can enter the synchronized block at a time; if many threads access it simultaneously, the rest wait for the lock. In that case, use the data structures in the concurrent package, such as ConcurrentHashMap and LinkedBlockingQueue, and atomic classes such as AtomicInteger.

Thank you for reading. The above is the content of "how to understand the concurrency problem of multithreading". After studying this article, you should have a deeper understanding of the topic; the specific usage still needs to be verified in practice.
