Today we will go over what the JMM defines. The content is detailed and the reasoning is laid out step by step; I hope you get something out of it. Let's take a look.
JMM stands for the Java Memory Model. Because memory access differs across hardware vendors and operating systems, the same code can behave differently on different systems. The Java Memory Model (JMM) shields these hardware and operating-system differences so that Java programs behave consistently under concurrency on every platform.
The Java Memory Model specifies that all variables are stored in main memory. This covers instance variables and static variables, but not local variables or method parameters. Each thread has its own working memory, which holds copies of the main-memory variables the thread uses, and the thread operates only on those copies; a thread cannot read or write variables in main memory directly.
Different threads cannot access the variables in each other's working memory. Passing a variable's value from one thread to another has to go through main memory.
If that sounds abstract, think of it this way:
Each thread's working memory is independent. A thread can only operate on the data in its working memory and then flush it back to main memory. This is the basic way threads work as defined by the Java Memory Model.
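To make this concrete, here is a minimal sketch (the class name, field name, and timings are made up for illustration, not from the original article) of what can go wrong when a thread keeps working with its cached copy of a shared variable: the reader thread may keep using a stale value from its working memory and never observe the writer's update.

import java.util.concurrent.TimeUnit;

public class StaleReadSketch {
    // plain (non-volatile) shared flag; the reader thread may cache it in working memory
    private static boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy loop; without visibility guarantees it may never see the update
            }
            System.out.println("reader observed stop = true");
        });
        reader.start();

        TimeUnit.SECONDS.sleep(1);
        stop = true; // written by the main thread; visibility to the reader is not guaranteed
        System.out.println("main thread set stop = true");
    }
}

Declaring the flag volatile (covered below) is one way to guarantee that the reader sees the update.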
As a reminder, some people misread "Java memory model" as "Java memory structure" and start answering with the heap, the stack, and GC garbage collection, which is far from what the interviewer wants to hear. Questions about the Java memory model are generally really questions about multithreading and Java concurrency.
Interviewer: what does JMM define?
This one is simple: the entire Java Memory Model is built around three properties: atomicity, visibility, and ordering. These three properties are the foundation of Java concurrency.
Atomicity
Atomicity means that an operation is indivisible and uninterruptible, and a thread will not be disturbed by other threads during execution.
The interviewer writes down a piece of code: can the following statements guarantee atomicity?

int i = 2;       // statement 1
int j = i;       // statement 2
i++;             // statement 3
i = i + 1;       // statement 4
The first statement is a basic-type assignment, which is an atomic operation.
The second statement first reads the value of i and then assigns it to j; a two-step operation cannot guarantee atomicity.
The third and fourth statements are actually equivalent: first read the value of i, then add 1, and finally assign the result back to i. That is a three-step operation, so atomicity cannot be guaranteed either.
The JMM itself only guarantees atomicity for these basic operations. To make a whole code block atomic, the JVM provides the monitorenter and monitorexit bytecode instructions, i.e., the synchronized keyword. Operations inside a synchronized block are therefore atomic with respect to other threads using the same lock.
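As a minimal sketch (the class, method, and lock names are illustrative), wrapping the read-add-write sequence in a synchronized block makes the compound operation atomic, because only one thread at a time can hold the monitor:

public class AtomicIncrementSketch {
    private static int i = 2;
    private static final Object lock = new Object();

    // the three steps (read i, add 1, write back) now execute as one
    // indivisible unit with respect to other threads using the same lock
    public static void increment() {
        synchronized (lock) {
            i = i + 1;
        }
    }
}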
Visibility
Visibility means that when one thread modifies the value of a shared variable, other threads can immediately see that it has been modified. Java uses the volatile keyword to provide visibility: when a volatile variable is written, the new value is immediately flushed back to main memory, and when another thread needs the variable, it reads the fresh value from main memory. Ordinary variables offer no such guarantee.
In addition to the volatile keyword, final and synchronized also achieve visibility.
The principle behind synchronized is that before a thread performs unlock at the end of the block, it must flush the shared variables it modified back to main memory.
A final field, once its initialization is complete, is visible to other threads, provided the object reference does not escape during construction (that is, the object is not handed to other threads before its initialization finishes).
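Returning to the earlier stale-read sketch, declaring the shared flag volatile is enough to guarantee that the writer's update becomes visible to the reader (again a sketch with illustrative names):

public class VisibleFlagSketch {
    // volatile guarantees that a write by one thread is visible to
    // subsequent reads of the same variable by other threads
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // with volatile, this loop is guaranteed to terminate
                // once the main thread sets stop to true
            }
            System.out.println("reader observed stop = true");
        });
        reader.start();

        Thread.sleep(1000);
        stop = true;
    }
}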
Ordering
In Java, you can use synchronized or volatile to ensure the ordering of operations between multiple threads. Their implementation principles differ somewhat:
The volatile keyword uses memory barriers to forbid instruction reordering, which guarantees ordering.
The principle behind synchronized is that after a thread locks a monitor, other threads can acquire it only after that thread unlocks it, so the code blocks wrapped by synchronized execute serially across multiple threads.
Interviewer: tell me about the eight memory interaction operations.
OK. The JMM defines eight operations for moving a variable between main memory and a thread's working memory.
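Roughly speaking, the eight operations are: lock and unlock, which act on a main-memory variable to mark it as exclusively owned by one thread and then to release it; read and load, which transfer a variable from main memory into working memory; use and assign, which pass the value between working memory and the execution engine; and store and write, which transfer the value from working memory back into main memory.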
Can volatile guarantee thread safety?
To cut to the conclusion: volatile alone does not guarantee thread safety.
How to prove it? Let's look at the result of running the following code:
import java.util.Vector;

/**
 * @author Ye Hongzhi  official account: Java technology enthusiasts
 */
public class VolatileTest extends Thread {

    private static volatile int count = 0;

    public static void main(String[] args) throws Exception {
        Vector<VolatileTest> threads = new Vector<>();
        for (int i = 0; i < 100; i++) {
            VolatileTest thread = new VolatileTest();
            threads.add(thread);
            thread.start();
        }
        // wait for all child threads to finish
        for (Thread thread : threads) {
            thread.join();
        }
        // print the result; the correct answer should be 1000, but the actual output is 984
        System.out.println(count); // 984
    }

    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            try {
                // sleep for 500 milliseconds
                Thread.sleep(500);
            } catch (Exception e) {
                e.printStackTrace();
            }
            count++;
        }
    }
}

Why can't volatile guarantee thread safety?

Very simple: visibility does not make an operation atomic. As explained above, count++ is not an atomic operation; it is treated as three steps: read the value of count, add 1, and assign the result back to count. To make it thread-safe, you need the synchronized keyword or a Lock to guard the count++ code:

private static synchronized void add() {
    count++;
}

Disabling instruction reordering

First, a word about as-if-serial semantics: no matter how instructions are reordered, the execution result of a (single-threaded) program must not change.

To better match the execution characteristics of the CPU, exploit the machine's performance as fully as possible, and improve execution efficiency, instructions may be executed in an order different from the logical order of the code, as long as the program's final result equals the result of executing it sequentially. This process is called instruction reordering.

There are three kinds of reordering: compiler reordering, instruction-level parallel reordering, and memory-system reordering. Instruction reordering is not a problem in a single thread: it does not affect the execution result and it improves performance. In a multithreaded environment, however, there is no guarantee that the execution result will not be affected.
Therefore, in a multithreaded environment, instruction reordering needs to be disabled.
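As an illustration, here is a conceptual sketch (class and variable names are made up, and the anomalous outcome is timing-dependent, so it is not guaranteed to reproduce on any particular run) of how reordering can produce a result that is impossible under any sequential interleaving:

public class ReorderingSketch {
    private static int a = 0, b = 0;
    private static int x = 0, y = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            a = 1;   // may be reordered after the read of b
            x = b;
        });
        Thread t2 = new Thread(() -> {
            b = 1;   // may be reordered after the read of a
            y = a;
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Intuitively at least one of x and y should end up as 1,
        // but with reordering (or stale reads) x == 0 && y == 0 is possible.
        System.out.println("x = " + x + ", y = " + y);
    }
}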
Volatile's prohibition of instruction reordering has two aspects:
When a read or write of a volatile variable executes, all changes made by the operations before it must already be complete and their results visible to the operations after it; the operations after it must not yet have been carried out.
When instructions are optimized, statements that come before an access to a volatile variable cannot be moved after it, and statements that come after it cannot be moved before it.
Here's an example:
private static int a;          // not modified by volatile
private static int b;          // not modified by volatile
private static volatile int k; // modified by volatile

private void hello() {
    a = 1;  // statement 1
    b = 2;  // statement 2
    k = 3;  // statement 3
    a = 4;  // statement 4
    b = 5;  // statement 5
    // ... the rest is omitted
}
Variables a and b are not modified by volatile, while k is. Therefore statement 3 can be moved neither before statements 1 and 2 nor after statements 4 and 5. However, the relative order of statements 1 and 2 is not guaranteed, and likewise the relative order of statements 4 and 5 is not guaranteed.
Moreover, by the time statement 3 executes, statements 1 and 2 are guaranteed to have completed, and their results are visible to statements 3, 4, and 5.
So what is the principle by which volatile prohibits instruction reordering?
First of all, let's talk about memory barriers. Memory barriers can be divided into the following categories:
LoadLoad barrier: for a sequence Load1, LoadLoad, Load2, it ensures that the data read by Load1 has been loaded before Load2 and subsequent read operations access their data.
StoreStore barrier: for a sequence Store1, StoreStore, Store2, it ensures that the write of Store1 is visible to other processors before Store2 and subsequent write operations execute.
LoadStore barrier: for a sequence Load1, LoadStore, Store2, it ensures that the data read by Load1 has been loaded before Store2 and subsequent write operations are flushed out.
StoreLoad barrier: for a sequence Store1, StoreLoad, Load2, it ensures that the write of Store1 is visible to all processors before Load2 and all subsequent read operations execute.
A LoadLoad barrier and a LoadStore barrier are inserted after each volatile read operation.
A StoreStore barrier is inserted before each volatile write operation and a StoreLoad barrier after it.
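Putting the barrier rules together, here is a hedged sketch (class and field names are made up for illustration; the exact barriers actually emitted depend on the JIT compiler and the CPU) of how volatile ordering supports safe publication of a value:

public class PublishSketch {
    private int data;                 // plain field
    private volatile boolean ready;   // volatile flag

    void publish() {
        data = 42;
        // StoreStore barrier before the volatile write:
        // the write to data cannot be reordered after the write to ready
        ready = true;
        // StoreLoad barrier after the volatile write
    }

    Integer tryRead() {
        boolean r = ready;
        // LoadLoad + LoadStore barriers after the volatile read:
        // the read of data cannot float above the read of ready
        if (r) {
            return data;  // guaranteed to observe 42 once ready is true
        }
        return null;
    }
}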
That is all for "What does the JMM define?". Thank you for reading; I hope you gained something from this article.