SLTechnology News & Howtos (shulou.com) — updated 2025-04-04
This article introduces how to understand the JMM memory model: what the computer memory model is, what the JMM specifies, how it relates to Java's runtime memory areas, and what problems it solves. I hope you read carefully and learn something!
1. Computer memory model
When the CPU executes, it needs data, and that data lives in memory, meaning the computer's physical memory. This was fine at first, but as technology advanced, CPUs became faster and faster, and the gap between the speed of reading/writing main memory and the CPU's execution speed kept widening. So designers introduced a cache between physical memory and the CPU: when the CPU runs, it copies the data needed for an operation from main memory into its cache, so the CPU can read and write that data directly in the cache; after the operation finishes, the data in the cache is flushed back to main memory.
(At this point, you can imagine the problems with high concurrency and multithreading.)
A single-core CPU contains only one set of L1, L2, and L3 caches. A multi-core CPU gives each core its own L1 (and usually L2) cache, while the cores share the L3 (and sometimes L2) cache.
Now consider the possible configurations:
Single-core CPU, single thread. The core's cache is accessed by only one thread: the cache is exclusive, and no access conflicts arise.
Single-core CPU, multiple threads. Multiple threads in a process access the process's shared data concurrently. After the CPU loads a block of memory into the cache, different threads accessing the same physical address map to the same cache location, so the cache is not invalidated even when a thread switch occurs. And since only one thread can be executing at any moment, cache access conflicts do not occur.
Multi-core CPU, multiple threads. Each core has at least an L1 cache. When multiple threads access shared memory in a process and each thread executes on a different core, each core keeps its own copy of the shared memory in its cache. Since the cores run in parallel, multiple threads may write to their own caches at the same time, and the data in the respective caches may then differ.
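The multi-core case can be demonstrated with a short sketch (the class and method names here are my own, not from the article): two threads each increment a plain shared int field 100,000 times. Because count++ is a read-modify-write on a value each core may hold in its own cache, increments can be lost and the final count often falls short of 200,000.

```java
// Sketch (illustrative names): lost updates on a shared, unsynchronized field.
public class LostUpdateDemo {
    static int count;

    // Two threads each increment `count` 100_000 times with no synchronization.
    // Returns the final value, which may land anywhere at or below 200_000.
    static int raceTwoThreads() {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // read-modify-write: another core's increment can be overwritten
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("final count = " + raceTwoThreads() + " (expected 200000)");
    }
}
```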
2. JMM memory model
JMM stands for Java Memory Model. It is just a set of specifications and does not really exist as a concrete thing (note: just a specification, a definition); it describes a set of rules. This specification defines how variables in a program are accessed, and it addresses the problems caused by multithreading on multi-core CPUs (think about why it is specifically multi-core multithreading).
The JMM specifies a working memory and a main memory. Main memory is a shared area that all threads can access; working memory is private to each thread, and every operation a thread performs on a variable must happen in its own working memory. At runtime, a variable is copied from main memory into working memory, operated on there, and written back to main memory afterwards; variables cannot be operated on directly in main memory. To stress again: this is a specification, a set of runtime rules defined by Java. Reading this, doesn't it look remarkably like the computer memory model above?
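That copy-in/copy-back rule is why synchronization matters. A minimal sketch (names are illustrative, not from the article): wrapping the increment in a synchronized block forces each thread to re-read the latest value on entry and publish its write back to main memory on exit, so the same two-thread count always lands on exactly 200,000.

```java
// Sketch (illustrative names): synchronization forces reads from and
// writes back to main memory, making the two-thread count deterministic.
public class SyncCounterDemo {
    static int count;
    static final Object lock = new Object();

    static int countWithLock() {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) { // re-read on entry, publish on exit; ++ is now atomic
                    count++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countWithLock()); // always 200000
    }
}
```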
Java memory area
Here are the Java memory areas again: heap, stack, method area, native method stack, and program counter.
Method area: a thread-shared area, mainly used to store class information loaded by the virtual machine, constants, static variables, and other data.
Heap: a thread-shared area, created when the virtual machine starts, mainly used to store object instances; consequently it is also the area where Java garbage collection happens most frequently.
Stack: a thread-private area, created together with its thread, so the number of stacks equals the number of threads. It is organized in stack frames: each time a method executes, a stack frame is created to store that method's information (operand stack, dynamic linking, return value, return address, and so on). Each method's execution, from call to completion, corresponds to a frame being pushed onto and popped off the stack.
We will not cover the program counter and native method stack here; please look them up if you are interested.
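The "one stack per thread" point is easy to verify with a small sketch (class and method names are mine): a local variable lives in the current stack frame, so two threads running the same method each get their own copy and cannot interfere with each other.

```java
// Sketch (illustrative names): locals are per-thread because each thread
// has its own stack, and each method call gets its own stack frame.
public class PrivateStackDemo {
    // `sum` is a local variable: one copy per stack frame, per thread.
    static int localSum() {
        int sum = 0;
        for (int i = 0; i < 1000; i++) {
            sum += i;
        }
        return sum; // 0 + 1 + ... + 999 = 499500, regardless of other threads
    }

    public static void main(String[] args) throws InterruptedException {
        int[] results = new int[2];
        Thread t1 = new Thread(() -> results[0] = localSum());
        Thread t2 = new Thread(() -> results[1] = localSum());
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(results[0] + " " + results[1]); // 499500 499500
    }
}
```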
If you have read this far, you may be wondering why, in an article about the JMM, we have also described the computer memory model and the Java memory areas. You will see why shortly.
Let me emphasize again: the main memory and working memory of the JMM are not at the same level as the heap, stack, method area, etc. of the Java memory areas, and there is no direct analogy between them. One is a set of norms and rules, something like a school's regulations. If you insist on a mapping, it is roughly: stack → working memory; heap and method area → main memory.
And now I'm going to connect the dots:
First, the JMM is a specification proposed by Java, so Java's design must conform to it. I believe the Java memory areas were designed according to the JMM specification; that is why, as said above, they are not on the same level and cannot be compared directly. We can only say that the Java memory areas comply with the JMM specification.
We also know that a thread is the smallest unit of CPU scheduling; that is, a CPU core executes threads. As mentioned above, executing a method is the process of pushing and popping stack frames, so when a CPU core executes a thread, it is operating on the data in that thread's stack, and the core copies that data into its cache to run. This may sound a bit vague, but just remember: the data the CPU operates on at execution time is the data on the stack.
You might be wondering what happens with data in the heap. I think it works like this: the stack stores an address (a reference) to an object in the heap, and when a CPU core runs, it follows that address to find the data. To the CPU it is all just data; data from the stack and data reached through a heap reference are treated the same, and both end up in its cache.
If this was unclear, please read it a few more times. As the saying goes, read a book a hundred times and its meaning reveals itself: the first time through, you may only register the words without grasping the intent behind them. For example: if I say "chopsticks", your mind should picture the actual sticks you eat with, not merely the word "chopsticks".
Now, please review the two problems mentioned above. Here is a piece of code that demonstrates the problem caused by multi-core multithreading:
import java.util.concurrent.TimeUnit;

public class VolatileFaceThread {
    boolean isRunning = true;

    void m() {
        System.out.println("isRunning start");
        while (isRunning) {
        }
        System.out.println("isRunning end");
    }

    public static void main(String[] args) {
        VolatileFaceThread vft = new VolatileFaceThread();
        new Thread(vft::m).start();
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (Exception e) {
            e.printStackTrace();
        }
        vft.isRunning = false;
        System.out.println("update isRunning... ");
    }
}
Expected effect: the new thread will loop forever.
With this code, the newly started thread should loop on and on without stopping. If, when you test it, the new thread also stops after the main thread modifies the flag, your machine may effectively be running on a single core. (I was stuck here for days; when I had a colleague run it, it behaved as expected.)
This is because the two threads run on two cores, each of which reads the value into its own cache. The cache is private to each core, so the main thread's modification is invisible to the new thread, and the new thread loops forever.
How do we solve this? By making the change visible across threads. Java has a keyword for exactly this: volatile, which provides memory visibility and prohibits instruction reordering.
A variable modified by the volatile keyword is always visible to all threads: when one thread modifies the value of a volatile variable, the new value is immediately visible to the other threads.
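Applied to the example above, the fix is a single keyword (the class name here is my own): declare the flag volatile so the worker's read always sees the main thread's write, and the loop terminates.

```java
import java.util.concurrent.TimeUnit;

// Sketch: the earlier example with `volatile` added, so the new thread
// observes the main thread's write and exits its loop.
public class VolatileFixed {
    volatile boolean isRunning = true; // volatile: writes are visible to all threads

    void m() {
        System.out.println("isRunning start");
        while (isRunning) {
            // busy-wait; each iteration re-reads isRunning from main memory
        }
        System.out.println("isRunning end");
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileFixed vf = new VolatileFixed();
        Thread t = new Thread(vf::m);
        t.start();
        TimeUnit.SECONDS.sleep(1);
        vf.isRunning = false; // now visible to the worker thread
        System.out.println("update isRunning...");
        t.join(); // returns promptly once the worker sees the new value
    }
}
```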