How to Master Java Concurrency and Thread Safety

This article takes a detailed look at how to master Java concurrency and thread safety. The content is organized step by step and the details are handled carefully; I hope it helps resolve your doubts. Follow the ideas below and work through them slowly to pick up the new material.
Why do we need multiple threads?
When it comes to multithreading, it is easy to equate it with high performance, but that is not necessarily the case. Take a simple example: summing the numbers from 1 to 100 with four threads is not necessarily faster than with one thread, because creating threads and switching contexts carries real overhead.

So what was multithreading originally designed for? Consider a practical example. Computers usually need to interact with people. Suppose the computer had only one thread, and that thread was waiting for input from the user. While it waited, the CPU could do nothing else, so CPU utilization would be low. With multiple threads, the CPU can switch to another thread while one thread waits for a resource, which improves CPU utilization.

Modern processors also contain multiple CPU cores, so a computation-heavy task can be broken down into smaller tasks and run on multiple threads to improve the efficiency of the computation.

To sum up, multithreading serves just two purposes: improving CPU utilization and improving computational efficiency.
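As a rough illustration of the second point, here is a minimal sketch (not from the original article; the class name, thread count, and range are all assumptions) that splits a large sum across four threads and then combines the partial results:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical example: split the sum 1..100_000_000 across 4 threads.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        final long n = 100_000_000L;
        final int threads = 4;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> parts = new ArrayList<>();
        long chunk = n / threads;
        for (int i = 0; i < threads; i++) {
            final long start = i * chunk + 1;
            final long end = (i == threads - 1) ? n : (i + 1) * chunk;
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (long v = start; v <= end; v++) {
                    sum += v;           // each thread sums its own sub-range
                }
                return sum;
            }));
        }
        long total = 0;
        for (Future<Long> part : parts) {
            total += part.get();        // combine the partial results
        }
        pool.shutdown();
        System.out.println(total);      // 5000000050000000
    }
}

Whether this actually runs faster than a single-threaded loop depends on the amount of work per element and the number of available cores, for the reasons discussed above.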
The essence of thread safety
Let's first look at an example:
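(The article's original code listing is not reproduced here; the following is a minimal reconstruction based on the description below. The class name Add, the field count, and the method doAdd are taken from the text; everything else is an assumption.)

import java.util.concurrent.CountDownLatch;

public class Add {
    private int count = 0;                  // shared counter, not yet thread-safe

    public void doAdd() {
        count++;                            // increment the shared counter
    }

    public int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Add add = new Add();
        CountDownLatch latch = new CountDownLatch(4);
        for (int i = 0; i < 4; i++) {
            new Thread(() -> {
                for (int j = 0; j < 25; j++) {
                    add.doAdd();            // each of the 4 threads increments 25 times
                }
                latch.countDown();          // signal that this thread has finished
            }).start();
        }
        latch.await();                      // wait for all four threads
        System.out.println(add.getCount()); // expected 100, but not always 100
    }
}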
The code above increments a variable 100 times, but it does so with 4 threads, each incrementing 25 times; a CountDownLatch is used to wait for all 4 threads to finish before the final result is printed. We expect the result to be 100, but the printed result is not always 100.
This brings us to the problem that thread safety describes, which we can first put in plain terms:

Thread safety means that the program produces the results we expect when it runs; in other words, the program executes the way we intend it to.
To unpack that summary: we first new an Add object and call the object's doAdd method. We expect each thread to increment the counter 25 times in an orderly way and to end up with the correct result. If the program runs as we intended, the object is thread-safe.
Now let's look at Brian Goetz's description of thread safety: when multiple threads access an object, if we do not need to consider how those threads are scheduled or interleaved by the runtime environment, do not need any additional synchronization, and do not need the caller to perform any other coordination, and calling the object still produces the correct result, then the object is thread-safe.
Let's analyze why this code doesn't always get the right results.
The Java memory model (JMM): data visibility, instruction reordering, and memory barriers
First, start from the hardware. The CPU computes several orders of magnitude faster than main memory can supply data. To bridge this gap, every CPU has a cache, often several levels of them (L1, L2, and L3). The caches and main memory are then kept consistent by a cache coherence protocol, which we will not go into here. In the end, the processor reads and writes through its cache, and the cache in turn exchanges data with main memory.

Java's memory model (Java Memory Model, JMM) defines an analogous relationship among threads, working memory, and main memory, which closely mirrors the hardware arrangement.
As an aside, the Java virtual machine's runtime memory is divided into the following regions:
Method area: stores class information, constants, static variables, and so on; shared by all threads.

Virtual machine stack: each method invocation creates a stack frame that holds local variables, the operand stack, dynamic links, and so on; thread-private.

Native method stack: serves the native methods (for example, C code) used by the virtual machine; thread-private.

Program counter: a counter that records which bytecode instruction the current thread is executing; thread-private.

Heap: instance objects created with new live here; it is the main battleground of GC and is shared by all threads.
For the main memory defined by the JMM, most of the time it corresponds to the thread-shared regions such as the heap and the method area, but only conceptually; in practice, the program counters, virtual machine stacks, and so on are also located in physical main memory, depending on how the virtual machine is designed.
OK, now that we understand the JMM, let's analyze why the program above doesn't produce the correct result. Threads A and B both read the initial value of count from main memory into their respective working memory, both perform the increment at the same time, and both write the result back to main memory, so one of the updates is lost and the final result is wrong.
Let's take a closer look at the essential cause of this error:
Visibility: a thread does not know when the latest value in another thread's working memory will be written back to main memory.

Ordering: threads must access the shared variable in an orderly way. Describing the process in terms of what each thread "sees": from thread B's point of view, once it sees that thread A has finished its operation and written the value back to main memory, it should immediately read that latest value before doing its own operation; likewise, thread A should read the latest value and operate on it only after it sees that B's operation has finished. Only then do we get the correct result.

Next, let's analyze in detail how visibility and ordering need to be constrained.
Add the volatile keyword to count to ensure visibility.
private volatile int count = 0;
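To make the visibility guarantee concrete, here is a minimal sketch (an assumption, not code from the article) that uses a separate flag field; without volatile, the reader thread may keep seeing a stale value and never exit the loop:

public class VisibilityDemo {
    // Try removing volatile: the reader thread may then spin forever on a stale value.
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait until the write to stop becomes visible to this thread
            }
            System.out.println("reader saw stop = true");
        });
        reader.start();
        Thread.sleep(1000);
        stop = true;    // volatile write: flushed to main memory and visible to the reader
    }
}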
The volatile keyword adds a lock prefix to the relevant compiled instruction, and that lock prefix does three things:

It prevents instruction reordering (we won't dwell on this point here; it will be discussed in more detail later).

It locks the bus or the cache to make the execution atomic. Early processors locked the bus so that no other processor could access memory through it, which is expensive; today's processors instead lock the cache and rely on the cache coherence protocol.

It writes the data in the processor's buffer back to main memory and invalidates copies of the variable cached by other processors.
Now that visibility is guaranteed by adding the volatile keyword, why do we still not get the correct result? The reason is that count++ is not an atomic operation; count++ is equivalent to the following steps:
Read count from main memory into a thread-local copy: temp = count

Add 1 to the thread-local copy: temp = temp + 1

Write the thread-local copy back to main memory: count = temp
Even if the bus is strictly locked so that only one processor at a time can access the count variable, another CPU can still read count while the first is carrying out step (2); at that moment, the latest result has not yet been flushed back to main memory, so the final result is wrong. Ordering must therefore be guaranteed as well.
The essence of guaranteeing ordering is to ensure that only one CPU executes the critical-section code at a time. The usual way to do this is to add a lock. Broadly, there are two approaches, pessimistic and optimistic: typical pessimistic locks are synchronized and ReentrantLock under the JUC package, while the optimistic approach relies on CAS, as in the atomic classes under JUC. A sketch of both appears below.
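Here is a minimal sketch (an assumption, not the article's original code) of the two approaches applied to the counter: a pessimistic version that makes doAdd a synchronized critical section, and an optimistic version that uses the CAS-based AtomicInteger:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounters {

    // Pessimistic: only one thread at a time may execute the critical section.
    static class SynchronizedCounter {
        private int count = 0;

        public synchronized void doAdd() {
            count++;                     // the whole read-modify-write now happens under the lock
        }

        public synchronized int getCount() {
            return count;
        }
    }

    // Optimistic: AtomicInteger retries a CAS (compare-and-swap) until it succeeds.
    static class AtomicCounter {
        private final AtomicInteger count = new AtomicInteger(0);

        public void doAdd() {
            count.incrementAndGet();     // atomic increment, no explicit lock
        }

        public int getCount() {
            return count.get();
        }
    }
}

With either version substituted into the earlier example, the printed result is always 100.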
That concludes this article on how to master Java concurrency and thread safety. To truly master these points, you still need to practice and apply them yourself. If you would like to learn more, feel free to read further related articles.