Java Concurrent Programming Basics
This article introduces the basics of Java concurrent programming: how CPU caching and the Java Memory Model (JMM) work, and the advantages and risks of writing concurrent code.
CPU multi-level cache: why does the CPU need a cache?
The CPU clock rate is far higher than the speed of main memory, so the processor would otherwise spend many cycles waiting for memory accesses, wasting resources. Caches exist to alleviate this speed mismatch between the CPU and main memory.
Why does CPU caching work?
Temporal locality: if a piece of data has been accessed, it is likely to be accessed again in the near future.
Spatial locality: if a piece of data has been accessed, the data adjacent to it is likely to be accessed soon as well (illustrated in the sketch below).
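Spatial locality can be made visible even from Java. The sketch below is an illustrative micro-example, not a rigorous benchmark, and the class name and array size are my own choices: it traverses the same 2D array twice, once row by row (touching adjacent memory, cache-friendly) and once column by column (striding across memory), and the second pass typically runs noticeably slower.

// Illustrative sketch, not a rigorous benchmark: row-major traversal has good
// spatial locality, column-major traversal jumps between inner arrays and
// suffers far more cache misses.
public class LocalityDemo {
    private static final int N = 4096;
    private static final int[][] data = new int[N][N];

    public static void main(String[] args) {
        long sumRow = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < N; i++) {        // row-major: consecutive elements of one inner array
            for (int j = 0; j < N; j++) {
                sumRow += data[i][j];
            }
        }
        long t1 = System.nanoTime();
        long sumCol = 0;
        for (int j = 0; j < N; j++) {        // column-major: each access lands in a different inner array
            for (int i = 0; i < N; i++) {
                sumCol += data[i][j];
            }
        }
        long t2 = System.nanoTime();
        System.out.printf("row-major: %d ms, column-major: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}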
CPU multi-level cache-cache consistency (MESI)
The MESI protocol is used to keep shared data consistent across the caches of multiple CPU cores.
CPU multi-level cache-out-of-order execution optimization
This is an optimization in which the processor executes instructions in an order that differs from the order written in the code, in order to improve execution speed. On a single core this causes no problems, but in a multi-core environment it can, as the sketch below shows.
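To make the multi-core problem concrete, here is a minimal, hypothetical sketch (the class and field names are mine, not from the original article) of the classic pattern in which reordering matters: without synchronization, the reader thread may observe ready == true yet still print 0, or may never see ready change at all. Declaring ready as volatile would forbid both problems.

// A simplified sketch of how reordering can surface in multi-threaded Java code.
// Without synchronization, the two writes at the bottom may be reordered (or become
// visible out of order), so the reader may print 0, or may spin forever without
// ever seeing `ready` become true. Marking `ready` volatile prevents both.
public class ReorderingDemo {
    private static int value = 0;
    private static boolean ready = false;

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the writer signals */ }
            System.out.println(value);   // without synchronization, 0 is a legal output
        });
        reader.start();

        value = 42;    // write 1
        ready = true;  // write 2: may appear to the reader before write 1
    }
}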
JAVA memory Model (Java Memory Model,JMM)
The JMM is a specification that defines how the Java virtual machine works with the computer's memory. It specifies when and how a thread can see the values of shared variables modified by other threads, and how access to shared variables must be synchronized.
In the JMM's view of memory, each thread's call stack and local variables are stored on that thread's stack (the thread stack), while objects are stored in the heap. A local variable may be a reference to an object: the reference itself lives on the thread stack, but the object it points to lives on the heap. An object on the heap, and its member variables, can be accessed by any thread that holds a reference to it. When two threads call the same method on the same object at the same time, each thread reads the member variables used by that method into its own working memory, so each thread ends up working with its own private copy of those member variables.
Communication between threads must go through main memory. For example, for thread A to communicate with thread B, thread A must flush its working-memory copy of the shared variable back to main memory, and thread B must then read the updated value from main memory. It is this copy-and-flush mechanism that gives rise to synchronization problems.
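As a concrete illustration of this main-memory communication path, the sketch below uses a volatile flag; the class and method names are illustrative, not from the original article. The write in requestStop is flushed to main memory, and the loop in workLoop re-reads the flag instead of relying on a stale working-memory copy, so thread B eventually observes thread A's update.

// A minimal sketch of thread A communicating with thread B through main memory.
// The volatile keyword forces A's write to be flushed to main memory and forces B
// to re-read the flag on every check.
public class SharedFlag {
    private volatile boolean stopRequested = false;

    public void requestStop() {      // called by thread A
        stopRequested = true;        // flushed to main memory
    }

    public void workLoop() {         // executed by thread B
        while (!stopRequested) {
            // do some work; the flag is re-read from main memory on each check
        }
        System.out.println("stop request observed");
    }

    public static void main(String[] args) throws InterruptedException {
        SharedFlag flag = new SharedFlag();
        Thread b = new Thread(flag::workLoop);
        b.start();
        Thread.sleep(100);           // let B run for a moment
        flag.requestStop();          // A publishes the update via main memory
        b.join();
    }
}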
The JMM defines eight operations for this synchronization:
Lock: acts on a variable in main memory; marks the variable as exclusively owned by one thread.
Unlock: acts on a variable in main memory; releases a variable from its locked state so that other threads can lock it.
Read: acts on a variable in main memory; transfers the variable's value from main memory into the thread's working memory for the subsequent load.
Load: acts on a variable in working memory; puts the value obtained by read into the working-memory copy of the variable.
Use: acts on a variable in working memory; passes the value of the working-memory variable to the execution engine.
Assign: acts on a variable in working memory; assigns a value received from the execution engine to the working-memory variable.
Store: acts on a variable in working memory; transfers the value of the working-memory variable to main memory for the subsequent write.
Write: acts on a variable in main memory; puts the value obtained by store into the main-memory variable.
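These eight operations are not visible in Java source code, but a plain write and read of a shared field can be mapped onto them conceptually. The sketch below is only such a conceptual annotation, under the assumption that the comments describe the abstract JMM steps rather than anything the JVM actually exposes; the class and field names are mine.

// Conceptual sketch only: the JVM never exposes these operations, but a plain
// write and read of a shared field correspond roughly to the sequences below.
public class OperationTrace {
    private static int shared = 0;   // the master copy lives in main memory

    static void writer() {
        int local = 42;
        shared = local;   // assign (execution engine -> working-memory copy),
                          // then store + write (working memory -> main memory)
    }

    static void reader() {
        int seen = shared;   // read + load (main memory -> working-memory copy),
                             // then use (working-memory copy -> execution engine)
        System.out.println(seen);
    }
}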
Synchronization rules:
To copy a variable from main memory into working memory, read and load must be performed in that order; to synchronize a variable from working memory back to main memory, store and write must be performed in that order. The JMM only requires that each pair be performed in order, not that the two operations be performed consecutively.
read and load, and store and write, must occur in pairs; none of these operations may appear alone.
A thread may not discard its most recent assign operation: a variable that has been changed in working memory must be synchronized back to main memory.
A thread may not synchronize a variable from working memory back to main memory for no reason (that is, without any assign having occurred).
A new variable can only be created in main memory; working memory may not use a variable that has not been initialized (by load or assign). In other words, assign or load must be performed on a variable before use or store may be performed on it.
A variable may be locked by only one thread at a time, but the same thread may lock it repeatedly; after locking it several times, the variable is unlocked only when unlock has been performed the same number of times. lock and unlock must therefore appear in pairs.
Performing lock on a variable clears its value from working memory; before the execution engine can use the variable, its value must be re-initialized by a load or assign.
A variable that has not been locked by a lock operation may not be unlocked, and a thread may not unlock a variable that is locked by another thread.
Before unlocking a variable, the thread must first synchronize it back to main memory (perform store and write). The synchronized keyword behaves according to this lock/unlock pairing, as in the sketch below.
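The sketch below shows how these rules surface in everyday code, under the standard correspondence between synchronized blocks and the lock/unlock operations: entering the block acts like lock (the working-memory copy is discarded and re-read from main memory), and leaving it acts like unlock (changes are written back to main memory before the monitor is released). The class and field names are illustrative.

// A minimal sketch of how a synchronized block lines up with the rules above.
public class Counter {
    private final Object monitor = new Object();
    private int count = 0;

    public void increment() {
        synchronized (monitor) {   // lock: count is re-read from main memory before use
            count++;               // read-modify-write on the protected variable
        }                          // unlock: count is flushed to main memory before release
    }

    public int get() {
        synchronized (monitor) {   // same monitor, so the reader sees all prior increments
            return count;
        }
    }
}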
Advantages and risks of concurrency
Advantages:
Speed: multiple requests can be handled at the same time, so responses come back faster, and a complex computation can be split across several threads that run in parallel.
Design: in some cases concurrent code is actually simpler to write, and there are more design options available.
Resource utilization: the CPU can do other work while waiting for I/O.
Risk:
Safety: when multiple threads share data, the program may produce results that do not match expectations (see the sketch after this list).
Liveness: liveness problems occur when an operation cannot make progress, for example deadlock and starvation.
Performance: too many threads lead to frequent context switches, more time spent on scheduling and synchronization, and excessive memory consumption.
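The safety risk in particular is easy to demonstrate. In this small sketch (class name and iteration counts are mine), count++ is a read-modify-write sequence, so two threads running it concurrently without synchronization lose updates and the final total usually falls well below the expected 2,000,000; wrapping the increment in a synchronized block, or using AtomicInteger, removes the race.

// A small sketch of the safety risk: count++ is not atomic, so concurrent
// increments can overwrite each other and updates are lost.
public class LostUpdateDemo {
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                count++;             // not atomic: read, add, write back
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("expected 2000000, got " + count);
    }
}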
This concludes the introduction to the basics of Java concurrent programming. Pairing the theory above with hands-on practice is the best way to make it stick, so go and try it out.