This article mainly explains the principles of Java visibility, atomicity and ordering in concurrent scenarios. The content is simple and clear and easy to learn and understand; please follow the editor's train of thought to study it.
Source 1: visibility issues caused by caching
In the single-core era, all threads executed on a single CPU, and consistency between the CPU cache and memory was easy to ensure: because every thread operates on the same CPU cache, a write to the cache by one thread is necessarily visible to other threads. For example, if thread A and thread B both operate on the cache of the same CPU, then once thread A updates the value of variable V, thread B will read the latest value of V (the value written by thread A) when it accesses the variable.
When changes made by one thread to a shared variable can immediately be seen by another thread, we call this visibility.
In the multi-core era, each CPU has its own cache, and consistency between CPU caches and memory is no longer so easy to ensure. When threads execute on different CPUs, they operate on different CPU caches. For example, if thread A operates on the cache of CPU-1 while thread B operates on the cache of CPU-2, then thread A's write to variable V is obviously not immediately visible to thread B. This is one of the "holes" that hardware engineers have dug for software engineers.
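Before looking at the article's own example, here is a minimal sketch, not from the original article, that illustrates the same visibility problem with a boolean flag. The class name VisibilityDemo and the field stop are made up for illustration; whether the reader thread actually hangs depends on the JVM and hardware, and declaring stop as volatile is the standard way to guarantee visibility.

// Illustrative sketch (not from the original article) of a visibility problem.
public class VisibilityDemo {
    private static boolean stop = false;   // try adding `volatile` here

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // Busy-wait; without volatile this thread may keep reading a
                // stale cached value of `stop` and never terminate.
            }
            System.out.println("reader saw stop == true");
        });
        reader.start();

        Thread.sleep(100);   // give the reader time to start spinning
        stop = true;         // write from the main thread
        System.out.println("main set stop = true");
    }
}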
Let's use another piece of code to verify the visibility problem in a multi-core scenario. In the code below, each call to the add10K() method loops 10000 times performing count += 1. In the calc() method we create two threads, each of which calls add10K() once. Think about what the result of executing calc() should be.
public class Test {
    private long count = 0;

    private void add10K() {
        int idx = 0;
        while (idx++ < 10000) {
            count += 1;
        }
    }

    public static long calc() throws InterruptedException {
        final Test test = new Test();
        // create two threads, each executing add10K()
        Thread th1 = new Thread(() -> test.add10K());
        Thread th2 = new Thread(() -> test.add10K());
        // start both threads
        th1.start();
        th2.start();
        // wait for both threads to finish
        th1.join();
        th2.join();
        return test.count;
    }
}
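For completeness, here is a hypothetical driver class, not part of the original article, that runs calc() a few times; the printed values will typically vary between 10000 and 20000.

// Hypothetical driver (not from the original article): run calc() repeatedly
// and print the results, which usually land somewhere between 10000 and 20000.
public class TestRunner {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            System.out.println("calc() = " + Test.calc());
        }
    }
}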
Intuition tells us the result should be 20000, because calling add10K() twice in a single thread leaves count with the value 20000. In fact, however, calc() returns a random value between 10000 and 20000. Why? Suppose thread A and thread B start at the same time. Each reads count = 0 into its own CPU cache for the first time, and after executing count += 1, the value in each cache is 1. When both write back to memory, memory ends up holding 1, not the 2 we expect. From then on, both CPUs keep computing with the count value in their own caches, so the final value of count is less than 20000. This is the visibility problem caused by caching.
If you change the loop from 10000 iterations of count += 1 to 100 million, the effect becomes much more obvious: the final value of count is close to 100 million rather than 200 million. With only 10000 iterations, count ends up close to 20000, because the two threads do not start at exactly the same moment and the small time gap between them hides most of the interference.
Source 2: atomicity problems caused by thread switching
Because I/O is far slower than the CPU, early operating systems invented multiprocessing: even on a single-core CPU we can listen to music while writing bugs, and that is the achievement of multiprocessing.
The operating system allows a process to run for a short period of time, for example 50 milliseconds; after those 50 milliseconds it selects another process to run (we call this "task switching"). The 50-millisecond period is called a "time slice".
Java concurrent programs are based on multithreading, so they also involve task switching, and task switching, perhaps unexpectedly, is one of the sources of strange bugs in concurrent programming. Task switching mostly happens at the end of a time slice. Today we mostly program in high-level languages, and a single statement in a high-level language often maps to several CPU instructions. For example, count += 1 in the code above requires at least three CPU instructions:
Instruction 1: load the variable count from memory into a CPU register;
Instruction 2: perform the +1 operation in the register;
Instruction 3: write the result back to memory (because of the caching mechanism, it may actually be written to the CPU cache rather than to main memory).
The operating system can perform a task switch after any CPU instruction completes; yes, after a CPU instruction, not after a statement in a high-level language. For the three instructions above, assume count = 0. If thread A is switched out right after executing instruction 1, thread B then runs all three instructions starting from its own copy of count = 0 and writes 1 to memory, and thread A finally resumes and also writes 1 based on the stale value it read. Both threads have performed count += 1, yet the result is 1 rather than the 2 we expect.
We intuitively feel that count += 1 is an indivisible whole, like an atom: a thread switch may happen before or after count += 1, but never in the middle. The property that one or more operations cannot be interrupted during CPU execution is called atomicity.
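To make the lost update easier to reproduce, here is a minimal sketch, not from the original article, that spells out the read-modify-write steps hidden inside count += 1; the class and method names are made up for illustration, and Thread.yield() is used only to encourage a task switch between the read and the write.

// Illustrative sketch (not from the original article): count += 1 is really a
// read-modify-write sequence, and a task switch between the read and the write
// lets two threads overwrite each other's update.
public class LostUpdateDemo {
    private static long count = 0;

    private static void addOnceSlowly() {
        long tmp = count;      // instruction 1: read count into a "register"
        Thread.yield();        // encourage a task switch right here
        tmp = tmp + 1;         // instruction 2: add 1 in the "register"
        count = tmp;           // instruction 3: write the result back
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                addOnceSlowly();
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        // Usually prints a value well below 20000.
        System.out.println("count = " + count);
    }
}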
Source 3: the ordering problem caused by compilation optimization
Ordering means that the program executes in the order the code is written. To optimize performance, the compiler sometimes changes the order of statements in a program. For example, the statements "a = 6; b = 7;" may be reordered by the compiler into "b = 7; a = 6;". Here the compiler changes the order of the statements without affecting the final result of the program. Sometimes, however, optimizations by compilers and interpreters can lead to unexpected bugs.
A classic case in the Java world is using double-checked locking to create a singleton (the DCL singleton pattern), as in the code below. In the getInstance() method we first check whether instance is null; if it is, we lock on Singleton.class and check again whether instance is null; if it is still null, we create the Singleton instance.
public class Singleton {
    static Singleton instance;
    static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}
Everything looks perfect and unassailable, but in fact the getInstance() method is not. Where is the problem? It lies in the new operation. We assume that the new operation proceeds like this:
Allocate a block of memory M;
Initialize the Singleton object in memory M;
Assign the address of M to the instance variable.
But in fact, the optimized execution path looks like this:
Allocate a block of memory M;
Assign the address of M to the instance variable;
Initialize the Singleton object in memory M.
What can go wrong after this optimization? Suppose thread A calls getInstance() first and a thread switch happens right after step 2, when the address has been assigned to instance but the object has not yet been initialized. If thread B now calls getInstance(), its first check finds instance != null, so it returns instance directly and starts using a half-initialized object; accessing its members can then trigger a null pointer exception or other unexpected behaviour.
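A commonly cited remedy, shown here as a sketch of a modified version of the class above rather than as part of the original article, is to declare instance as volatile; on Java 5 and later this prevents other threads from observing the reference before the Singleton object has been fully initialized.

// Sketch of the commonly used fix (not part of the original article).
public class Singleton {
    private static volatile Singleton instance;

    static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, with lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}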
Thank you for reading. That concludes "the principles of Java visibility, atomicity and ordering in concurrent scenarios". After studying this article you should have a deeper understanding of these three concepts; how to apply them still needs to be verified in practice.