What is thread safety in java high concurrency

2025-02-14 Update From: SLTechnology News&Howtos


This article explains what thread safety means in Java high-concurrency programming, analyzed from a professional point of view. I hope you get something out of it.

Definition: when multiple threads access a class, no matter how the runtime environment schedules them or how the threads interleave, and without any additional synchronization or coordination in the calling code, the class still exhibits correct behavior; such a class is called thread-safe.

Thread safety is reflected in the following three aspects:

Atomicity: provides mutually exclusive access; only one thread can operate on the data at a time.

Visibility: changes one thread makes to main memory can be observed by other threads in time.

Ordering: the order in which one thread observes instructions executing in other threads, which generally appears out of order because of instruction reordering.
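To make the atomicity problem concrete before diving into the Atomic classes, here is a minimal sketch (not from the article; the class name `UnsafeCounter` is mine) showing how a plain `count++` loses updates under concurrency:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UnsafeCounter {
    static int count = 0; // plain int: count++ is read-modify-write, not atomic

    static int run(int total) throws InterruptedException {
        count = 0;
        ExecutorService pool = Executors.newFixedThreadPool(8);
        CountDownLatch latch = new CountDownLatch(total);
        for (int i = 0; i < total; i++) {
            pool.execute(() -> {
                count++; // two threads can read the same value and both write back value+1
                latch.countDown();
            });
        }
        latch.await();
        pool.shutdown();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        // Often prints less than 5000 because concurrent increments are lost
        System.out.println(run(5000));
    }
}
```

The final count is frequently below 5000, which is exactly the failure the atomic classes below are designed to prevent.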

Atomicity: the Atomic package

Create a new test class with the following contents:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;
import lombok.extern.slf4j.Slf4j;

@Slf4j
@ThreadSafe
public class CountExample2 {
    // Total number of requests
    public static int clientTotal = 5000;
    // Number of threads allowed to execute concurrently
    public static int threadTotal = 200;
    // Shared counter (working memory)
    public static AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        // Thread pool
        ExecutorService executorService = Executors.newCachedThreadPool();
        // Semaphore limiting concurrency
        final Semaphore semaphore = new Semaphore(threadTotal);
        // Latch counting down the requests
        final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
        for (int i = 0; i < clientTotal; i++) {
            executorService.execute(() -> {
                try {
                    semaphore.acquire();
                    add();
                    semaphore.release();
                } catch (InterruptedException e) {
                    log.error("exception", e);
                }
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        executorService.shutdown();
        log.info("count: {}", count.get());
    }

    public static void add() {
        count.incrementAndGet();
    }
}
```

This uses the AtomicInteger class. Under the hood, its incrementAndGet method calls unsafe.getAndAddInt(this, valueOffset, 1), which in turn loops on this.compareAndSwapInt. The compareAndSwapInt method (CAS) compares the expected value with the value in main memory and performs the update only if they are equal.

The count value a thread sees is its working copy, which is not necessarily the same as the value in main memory, which is why this synchronization is needed.
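The CAS retry loop described above can be sketched with the public compareAndSet API instead of the internal Unsafe call (a simplified illustration; the class and method names are mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasSketch {
    // Simplified sketch of what incrementAndGet does internally:
    // read the current value, then CAS it to current+1, retrying on failure.
    static int incrementAndGet(AtomicInteger atomic) {
        int current;
        int next;
        do {
            current = atomic.get(); // read the (possibly stale) current value
            next = current + 1;
        } while (!atomic.compareAndSet(current, next)); // retry if another thread won the race
        return next;
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(41);
        System.out.println(incrementAndGet(counter)); // prints 42
    }
}
```

Because the loop only succeeds when no other thread changed the value in between, no increment is ever lost.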

AtomicLong and LongAdder

If we declare the count above as an AtomicLong instead, we also get the correct result:

```java
public static AtomicLong count = new AtomicLong(0);
```

Why single out AtomicLong? Because JDK 8 added a new class that is very similar to AtomicLong: the LongAdder class. The same code implemented with LongAdder:

```java
public static LongAdder count = new LongAdder();

public static void add() {
    count.increment();
}

// ...
log.info("count: {}", count);
```

You can also output the correct results.

Why add LongAdder when AtomicLong already exists?

The reason is that AtomicLong uses CAS underneath to stay synchronized: it keeps retrying the comparison in a loop and only proceeds when the working-memory value matches main memory. When contention is low, the success rate is high. When contention is fierce, that is, under high concurrency, performance degrades. In addition, for long and double variables, the JVM is allowed to split reads and writes of the 64-bit value into two 32-bit operations. LongAdder reduces contention by spreading updates across internal cells and summing them on read, so under high contention you can prefer LongAdder over AtomicLong, and keep using AtomicLong when contention is low.

Looking at other classes in the atomic package:

AtomicReference is very similar to AtomicInteger, except that AtomicInteger wraps an int and implements CAS with compareAndSwapInt, comparing whether the values are equal, while AtomicReference holds an ordinary object reference and implements CAS with compareAndSwapObject, comparing whether the two references point to the same object. That is, it provides thread safety when swapping object references.
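A minimal illustration of this reference comparison (the class name is mine): compareAndSet succeeds only if the stored reference is identical to the expected object:

```java
import java.util.concurrent.atomic.AtomicReference;

public class AtomicReferenceExample {
    public static void main(String[] args) {
        String initial = "initial";
        AtomicReference<String> ref = new AtomicReference<>(initial);

        // Succeeds: the stored reference is the expected object
        boolean first = ref.compareAndSet(initial, "updated");
        // Fails: the stored reference is now "updated", not the initial object
        boolean second = ref.compareAndSet(initial, "other");

        System.out.println(first + " " + second + " " + ref.get()); // true false updated
    }
}
```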

AtomicIntegerFieldUpdater

Atomically updates the value of a field in an instance of a class; the field must be declared volatile and must not be static.

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class AtomicExample5 {

    private static AtomicIntegerFieldUpdater<AtomicExample5> updater =
            AtomicIntegerFieldUpdater.newUpdater(AtomicExample5.class, "count");

    @Getter
    public volatile int count = 100;

    public static void main(String[] args) {
        AtomicExample5 example5 = new AtomicExample5();
        if (updater.compareAndSet(example5, 100, 120)) {
            log.info("update success 1, {}", example5.getCount());
        }
        if (updater.compareAndSet(example5, 100, 120)) {
            log.info("update success 2, {}", example5.getCount());
        } else {
            log.info("update failed, {}", example5.getCount());
        }
    }
}
```

The first compareAndSet succeeds (100 becomes 120); the second fails because count is no longer 100, so "update failed, 120" is logged.

AtomicStampedReference: the ABA problem of CAS

The ABA problem: during a CAS operation, another thread changes the variable's value from A to B and then back to A, so the CAS is misled into thinking nothing changed. The solution is to attach a version number: every time the variable is modified, its version number is incremented by 1, which is what AtomicStampedReference does.
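A sketch of how AtomicStampedReference defeats ABA (the class name is mine; small Integer values are used deliberately so autoboxing reuses cached objects and the reference comparison behaves like a value comparison):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaExample {
    public static void main(String[] args) {
        // Value 100 with initial stamp (version number) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();

        // Simulate another thread doing A -> B -> A, bumping the stamp each time
        ref.compareAndSet(100, 101, stamp, stamp + 1);
        ref.compareAndSet(101, 100, stamp + 1, stamp + 2);

        // The value is 100 again, but the stale stamp makes this CAS fail
        boolean fooled = ref.compareAndSet(100, 102, stamp, stamp + 1);
        System.out.println(fooled + " " + ref.getReference() + " " + ref.getStamp()); // false 100 2
    }
}
```

A plain AtomicInteger would have been fooled here; the stamp check is what exposes the intermediate modification.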

AtomicBoolean

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicBoolean;
import lombok.extern.slf4j.Slf4j;

@Slf4j
@ThreadSafe
public class AtomicExample6 {
    private static AtomicBoolean isHappened = new AtomicBoolean(false);
    // Total number of requests
    public static int clientTotal = 5000;
    // Number of threads allowed to execute concurrently
    public static int threadTotal = 200;

    public static void main(String[] args) throws InterruptedException {
        // Thread pool
        ExecutorService executorService = Executors.newCachedThreadPool();
        // Semaphore limiting concurrency
        final Semaphore semaphore = new Semaphore(threadTotal);
        // Latch counting down the requests
        final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
        for (int i = 0; i < clientTotal; i++) {
            executorService.execute(() -> {
                try {
                    semaphore.acquire();
                    test();
                    semaphore.release();
                } catch (InterruptedException e) {
                    log.error("exception", e);
                }
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        executorService.shutdown();
        log.info("isHappened: {}", isHappened);
    }

    private static void test() {
        if (isHappened.compareAndSet(false, true)) {
            log.info("execute");
        }
    }
}
```

In this code, test() is executed 5000 times, but log.info("execute") runs only once, because isHappened becomes true after the first successful compareAndSet.

This guarantees that the transition of isHappened from false to true happens exactly once, which solves the problem of allowing a piece of code to run only once and never again.

Locks

synchronized: the synchronized keyword relies mainly on the JVM to implement locking; within its scope, only one thread can operate on the locked object at a time.

Lock: relies on special CPU instructions; the most representative implementation class is ReentrantLock.

Synchronized synchronization lock

There are four main usages:

Modifying a code block: the code enclosed in braces, called a synchronized block; the lock object is the calling instance.

Modifying a method: the whole method is synchronized; the lock object is the calling instance.

Modifying a static method: the whole static method; the lock applies to all objects of the class.

Modifying a class: the code enclosed in braces with synchronized (SomeClass.class); the lock applies to all objects of the class.

Decorate code blocks and methods

Examples are as follows:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class SynchronizedExample1 {
    /**
     * Synchronized block: the scope is the code in braces,
     * and the lock object is the instance the code is called on.
     */
    public void test1() {
        synchronized (this) {
            for (int i = 0; i < 10; i++) {
                log.info("test1 - {}", i);
            }
        }
    }

    public static void main(String[] args) {
        SynchronizedExample1 synchronizedExample1 = new SynchronizedExample1();
        ExecutorService executorService = Executors.newCachedThreadPool();
        executorService.execute(() -> synchronizedExample1.test1());
        executorService.execute(() -> synchronizedExample1.test1());
    }
}
```

Why use a thread pool? Without it, calling the same method twice just executes sequentially, so there is nothing to verify; with the thread pool, two threads run the method concurrently.

The output is test1 0-9 printed twice in a row: the second run starts only after the first finishes.

If you use the synchronized decorating method:

```java
/**
 * Synchronized method: the scope is the whole method,
 * and the lock object is the instance the method is called on.
 */
public synchronized void test2() {
    for (int i = 0; i < 10; i++) {
        log.info("test2 - {}", i);
    }
}
```

The output is the same as above, and correct.

Next, switch to two different objects, and the output interleaves. Because synchronized blocks and synchronized methods lock on the calling object, two different objects calling the synchronized code do not affect each other. With a thread pool, example1's test1 and example2's test1 execute interleaved, rather than example1's test1 finishing before example2's test1 starts. The code is as follows:

```java
/**
 * Synchronized block: the scope is the code in braces,
 * and the lock object is the instance the code is called on.
 */
public void test1(int flag) {
    synchronized (this) {
        for (int i = 0; i < 10; i++) {
            log.info("test1 - {}, {}", flag, i);
        }
    }
}

public static void main(String[] args) {
    SynchronizedExample1 synchronizedExample1 = new SynchronizedExample1();
    SynchronizedExample1 synchronizedExample2 = new SynchronizedExample1();
    ExecutorService executorService = Executors.newCachedThreadPool();
    executorService.execute(() -> synchronizedExample1.test1(1));
    executorService.execute(() -> synchronizedExample2.test1(2));
}
```

Therefore, the synchronization code block acts on the current object, and the different calling objects do not affect each other.

Next, test the synchronization method:

```java
/**
 * Synchronized method: the scope is the whole method,
 * and the lock object is the instance the method is called on.
 */
public synchronized void test2(int flag) {
    for (int i = 0; i < 10; i++) {
        log.info("test2 - {}, {}", flag, i);
    }
}

public static void main(String[] args) {
    SynchronizedExample1 synchronizedExample1 = new SynchronizedExample1();
    SynchronizedExample1 synchronizedExample2 = new SynchronizedExample1();
    ExecutorService executorService = Executors.newCachedThreadPool();
    executorService.execute(() -> synchronizedExample1.test2(1));
    executorService.execute(() -> synchronizedExample2.test2(2));
}
```

If the body of a method is one complete synchronized block, as in the test1 method above, it is equivalent to marking the whole method synchronized.

Also note that the synchronized modifier is not inherited: overriding a synchronized method in a subclass does not make the override synchronized.

Modify static methods and classes

Let's first test the decorated static method:

```java
/**
 * Synchronized static method: the scope is the whole method,
 * and the lock applies to all objects of the class.
 */
public static synchronized void test2(int flag) {
    for (int i = 0; i < 10; i++) {
        log.info("test2 - {}, {}", flag, i);
    }
}

public static void main(String[] args) {
    SynchronizedExample2 synchronizedExample1 = new SynchronizedExample2();
    SynchronizedExample2 synchronizedExample2 = new SynchronizedExample2();
    ExecutorService executorService = Executors.newCachedThreadPool();
    executorService.execute(() -> synchronizedExample1.test2(1));
    executorService.execute(() -> synchronizedExample2.test2(2));
}
```

A synchronized static method locks on behalf of all objects of the class, so even when different objects call it, only one thread executes it at a time. The result of the above execution is:

```
11:31:37.447 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 0
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 1
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 2
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 3
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 4
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 5
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 6
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 7
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 8
11:31:37.451 [pool-1-thread-1] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 1, 9
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 0
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 1
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 2
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 3
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 4
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 5
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 6
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 7
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 8
11:31:37.451 [pool-1-thread-2] INFO com.vincent.example.sync.SynchronizedExample2 - test2 - 2, 9
```

They will not perform alternately.

Next, test synchronized modifying a class:

```java
/**
 * Locks on the class: the scope is the code in braces,
 * and the lock applies to all objects of the class.
 */
public static void test1(int flag) {
    synchronized (SynchronizedExample2.class) {
        for (int i = 0; i < 10; i++) {
            log.info("test1 - {}, {}", flag, i);
        }
    }
}

public static void main(String[] args) {
    SynchronizedExample2 synchronizedExample1 = new SynchronizedExample2();
    SynchronizedExample2 synchronizedExample2 = new SynchronizedExample2();
    ExecutorService executorService = Executors.newCachedThreadPool();
    executorService.execute(() -> synchronizedExample1.test1(1));
    executorService.execute(() -> synchronizedExample2.test1(2));
}
```

The running result is the same as above.

Similarly, if the body of a method is one complete synchronized (SomeClass.class) block, as in the test1 method above, it is equivalent to a synchronized static method.

synchronized: a non-interruptible lock, suitable when contention is low; good readability.

Lock: an interruptible lock with more flexible synchronization; stays well-behaved when contention is fierce.

Atomic: stays well-behaved when contention is fierce, with better performance than Lock, but can only synchronize one value at a time.

Visibility

Visibility means that changes one thread makes to main memory can be observed by other threads in a timely manner. When discussing visibility, we should also ask when values are *not* visible. Here is why shared variables can be invisible between threads:

Thread interleaving.

Instruction reordering combined with thread interleaving.

Shared-variable updates not being propagated in time between working memory and main memory.

The JMM has two rules about synchronized:

The latest value of the shared variable must be flushed to main memory before the thread releases the lock.

When a thread acquires the lock, it clears the working-memory copy of the shared variable, so the latest value must be re-read from main memory when the variable is used (note: the lock and unlock must be on the same lock).
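These two rules are why a synchronized add() makes the earlier counter correct without any Atomic class. A minimal sketch (the class name is mine):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SynchronizedCounter {
    private static int count = 0;

    // Mutual exclusion plus the JMM flush/refresh rules above make count++ safe here
    private static synchronized void add() {
        count++;
    }

    static int run(int total) throws InterruptedException {
        count = 0;
        ExecutorService pool = Executors.newFixedThreadPool(8);
        CountDownLatch latch = new CountDownLatch(total);
        for (int i = 0; i < total; i++) {
            pool.execute(() -> {
                add();
                latch.countDown();
            });
        }
        latch.await();
        pool.shutdown();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(5000)); // always 5000
    }
}
```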

Visibility: volatile

volatile achieves visibility by inserting memory barriers and disabling reordering optimizations:

When a volatile variable is written, a store barrier is added after the write to flush the shared variable's value from local memory to main memory.

When a volatile variable is read, a load barrier is added before the read so the shared variable is read from main memory.

Every time a thread accesses a volatile variable, it is forced to read its value from main memory, and every write forces the thread to flush the new value to main memory, so different threads always see the variable's latest value. For example:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import lombok.extern.slf4j.Slf4j;

@Slf4j
@NotThreadSafe
public class CountExample4 {
    // Total number of requests
    public static int clientTotal = 5000;
    // Number of threads allowed to execute concurrently
    public static int threadTotal = 200;
    public static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        // Thread pool
        ExecutorService executorService = Executors.newCachedThreadPool();
        // Semaphore limiting concurrency
        final Semaphore semaphore = new Semaphore(threadTotal);
        // Latch counting down the requests
        final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
        for (int i = 0; i < clientTotal; i++) {
            executorService.execute(() -> {
                try {
                    semaphore.acquire();
                    add();
                    semaphore.release();
                } catch (InterruptedException e) {
                    log.error("exception", e);
                }
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        executorService.shutdown();
        log.info("count: {}", count);
    }

    public static void add() {
        // volatile only guarantees each read of count comes from main memory
        count++;
    }
}
```

The result is still not thread-safe. Why?

Because count++ actually takes three steps: 1. read the current value of count from main memory; 2. add 1 to it; 3. write the result back to main memory. When multiple threads read count at the same time and each adds 1, updates are lost.

So decorating a variable with volatile does not make it thread-safe; volatile does not provide atomicity.

Since volatile is not suitable for counting, what scenarios is it suitable for?

In general, volatile is appropriate when writes to the variable do not depend on its current value.
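A typical suitable scenario is a status flag whose writes do not depend on the current value, for example a shutdown flag (a sketch; the class and method names are mine):

```java
public class VolatileFlagExample {
    // Suitable volatile use: the write below does not depend on the current value
    private static volatile boolean shutdownRequested = false;

    static boolean runAndStop() throws InterruptedException {
        shutdownRequested = false;
        Thread worker = new Thread(() -> {
            while (!shutdownRequested) {
                // spin; the volatile read guarantees the writer's update becomes visible
            }
        });
        worker.start();
        shutdownRequested = true; // plain assignment, immediately visible to the worker
        worker.join();
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndStop()); // true: the worker observed the flag and stopped
    }
}
```

Without volatile, the worker thread could legally cache the flag in its working memory and spin forever.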

Ordering

In the Java memory model, the compiler and processor are allowed to reorder instructions. Reordering does not affect the result of single-threaded execution, but it can affect the correctness of concurrent multithreaded execution.

Ordering: the happens-before principle

Program order rule: within a single thread, operations written earlier in the code happen-before operations written later. Note that this only means the program *appears* to execute in code order within one thread: the virtual machine may still reorder the code, but only instructions with no data dependence on each other, and the final result is the same as sequential execution. This rule therefore guarantees correct results within a single thread, but not across multiple threads.

Monitor lock rule: an unlock operation happens-before a subsequent lock operation on the same lock. In other words, whether single-threaded or multithreaded, a lock must be released before it can be acquired again.

Volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable. If one thread writes the variable and another thread then reads it, the write happens-before the read.

Transitivity rule: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.

Thread start rule: the start() method of a Thread object happens-before every action of that thread.

Thread interruption rule: a call to a thread's interrupt() method happens-before the point where the interrupted thread's code detects the interruption.

Thread termination rule: all operations in a thread happen-before the detection of that thread's termination. We can detect that a thread has terminated via the Thread.join() method or the return value of Thread.isAlive().

Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method.
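The volatile variable rule, combined with the program order and transitivity rules, gives the classic safe-publication pattern (a sketch; the class and method names are mine):

```java
public class SafePublication {
    static int data = 0;
    static volatile boolean ready = false;

    static int publishAndRead() throws InterruptedException {
        final int[] observed = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { /* wait for the volatile write to become visible */ }
            // happens-before chain: data = 42 -> ready = true -> read of ready -> read of data
            observed[0] = data;
        });
        reader.start();
        data = 42;    // ordinary write, ordered before the volatile write by program order
        ready = true; // volatile write publishes data
        reader.join();
        return observed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndRead()); // guaranteed 42
    }
}
```

If ready were not volatile, no happens-before edge would connect the writer to the reader, and the reader could legally observe data as 0.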

That is what thread safety in Java high concurrency means. If you have similar questions, the analysis above may help you understand; to learn more, follow the industry information channel.
