This article walks through the core knowledge points of Java concurrent programming: the volatile keyword and its optimization, the Java object header, the lock-upgrade path of synchronized (biased, lightweight, and heavyweight locks), spin locks, and some practical lock optimizations.
To kill a developer, you only need to change the requirements three times.
2.1 The application of volatile
volatile guarantees the "visibility" of shared variables in multiprocessor development. Visibility means that when one thread modifies a shared variable, another thread can read the modified value. Used appropriately, volatile is cheaper than synchronized because it does not cause thread context switching and scheduling.
How does volatile ensure visibility? With the right tool we can inspect the assembly instructions that the JIT compiler generates on an X86 processor, and see what the CPU does when writing a volatile variable.
The Java code is as follows.
instance = new Singleton(); // instance is a volatile variable
Convert it to assembly code, as follows.
0x01a3de1d: movb $0x0,0x1104800(%esi)
0x01a3de24: lock addl $0x0,(%esp)
On multicore processors, an instruction with the Lock prefix does two things [1]:
1) It writes the data in the current processor's cache line back to system memory.
2) This write-back invalidates the data cached at that memory address in every other CPU.
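To see the visibility guarantee in action, here is a minimal, self-contained sketch (illustrative only; the behavior of the non-volatile variant is JVM- and hardware-dependent):

public class VisibilityDemo {
    // Try removing volatile: the worker may then spin forever, because the
    // main thread's write is not guaranteed to become visible to it.
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until the write from the main thread is visible
            }
            System.out.println("worker saw stop == true");
        });
        worker.start();
        Thread.sleep(100);
        stop = true; // volatile write: flushed to memory, other caches invalidated
        worker.join();
    }
}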
2. Optimizing the use of volatile
Doug Lea, the famous master of Java concurrent programming, added a queue collection class, LinkedTransferQueue, to the concurrent package in JDK 7. When using volatile variables, it appends bytes to optimize the performance of enqueueing and dequeueing. The relevant code of LinkedTransferQueue is as follows.
/** head node of the queue */
private transient final PaddedAtomicReference<QNode> head;
/** tail node of the queue */
private transient final PaddedAtomicReference<QNode> tail;

static final class PaddedAtomicReference<T> extends AtomicReference<T> {
    // pad out to 64 bytes with fifteen 4-byte object references
    Object p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, pa, pb, pc, pd, pe;
    PaddedAtomicReference(T r) {
        super(r);
    }
}

public class AtomicReference<V> implements java.io.Serializable {
    private volatile V value;
    // other code omitted
}
Appending bytes to optimize performance? This approach may look like magic, but with a deep understanding of processor architecture the mystery dissolves. The LinkedTransferQueue class uses an inner class type, PaddedAtomicReference, to define the head and tail of the queue. Relative to its parent class AtomicReference, the inner class does only one thing: it pads the shared variable out to 64 bytes. We can do the arithmetic: an object reference occupies 4 bytes, and the 15 appended reference fields total 60 bytes; together with the parent class's value field, that makes 64 bytes.
Why can padding to 64 bytes improve the efficiency of concurrent programming? For Intel Core i7, Core, Atom, and NetBurst processors, as well as Core Solo and Pentium M, the cache lines of the L1, L2, and L3 caches are 64 bytes wide, and partially filled cache lines are not supported. If the head node and tail node of the queue together occupy less than 64 bytes, the processor reads them into the same cache line, and on a multiprocessor machine each processor caches the same head and tail nodes. When one processor tries to modify the head node, it locks the entire cache line, so under the cache-coherence mechanism other processors cannot access the tail node in their own caches. Since enqueueing and dequeueing constantly modify the head node and tail node respectively, enqueue/dequeue efficiency suffers badly on multiprocessors. By padding to 64 bytes, Doug Lea fills the cache line so that the head node and the tail node can never be loaded into the same cache line; modifying one no longer locks the other.
So should every volatile variable be padded to 64 bytes? No. In the following two scenarios it should not be used.
Processors whose cache line is not 64 bytes wide. For example, the P6 family and Pentium processors have L1 and L2 cache lines that are 32 bytes wide.
Shared variables that are not written frequently. Appending bytes forces the processor to read more bytes into the cache, which itself costs some performance; and if the shared variable is rarely written, the probability of cache-line locking is small, so there is no need to pad against it.
However, this padding trick may not work on Java 7, because Java 7 became smarter about eliminating or rearranging unused fields; other padding techniques are needed there.
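As a hedged sketch of what manual padding looks like (the class and field names are illustrative, and, as just noted, a JVM is free to rearrange or eliminate fields, so this layout is not guaranteed):

public class PaddedCounter {
    // 7 leading and 7 trailing longs (56 bytes each side) try to keep the
    // 8-byte value on a cache line of its own, so a counter updated by one
    // thread does not falsely share a line with its neighbors.
    volatile long p1, p2, p3, p4, p5, p6, p7;
    volatile long value;
    volatile long q1, q2, q3, q4, q5, q6, q7;
}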
2.2.1 Java object header
The lock used by synchronized is stored in the Java object header. If the object is an array type, the virtual machine stores the object header in 3 word widths (Words); otherwise it uses 2 word widths. In a 32-bit virtual machine, 1 word width equals 4 bytes, i.e. 32 bits.
By default, the Mark Word in the Java object header stores the object's HashCode, generational GC age, and lock flag bits.
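If you want to look at the Mark Word yourself, OpenJDK's JOL tool can print an object's layout. A small sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath:

import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Prints the object's layout, including the Mark Word bits.
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
        synchronized (o) {
            // Inside the synchronized block the Mark Word reflects the lock state.
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}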
2.2.2 Lock upgrading and comparison
In Java SE 1.6 a lock has four states. From lowest to highest they are: the no-lock state, the biased-lock state, the lightweight-lock state, and the heavyweight-lock state.
Locks can be upgraded, but not downgraded.
1. Biased lock (Biased Locking)
Personal understanding: "biased" as in partial to one thread. If another thread preempts the lock while the program runs, the thread holding the biased lock is suspended and the bias is revoked.
In most cases, a lock is always acquired multiple times by the same thread with no multithreaded contention; biased locks exist for this case. The goal is to improve performance when only one thread executes the synchronized block.
Both revoking the biased lock and upgrading the lock incur overhead.
Applicable scenarios for biased locks
Only one thread ever executes the synchronized block, and no other thread executes it before that thread finishes and releases the lock. If the lock becomes contended, it is upgraded to a lightweight lock; that upgrade requires revoking the biased lock, and revoking a biased lock triggers a stop-the-world operation.
2. Lightweight lock
A biased lock is in effect while a single thread executes the synchronized block; when a second thread joins the lock contention, the biased lock is upgraded to a lightweight lock.
(1) Lightweight locking
Before a thread executes the synchronized block, the JVM creates space in the current thread's stack frame to store the lock record, and copies the Mark Word from the object header into that record; this copy is officially called the Displaced Mark Word. The thread then tries to use CAS to replace the Mark Word in the object header with a pointer to the lock record. If it succeeds, the current thread acquires the lock; if it fails, other threads are competing for the lock, and the current thread tries to acquire it by spinning.
(2) Lightweight unlocking
When unlocking a lightweight lock, an atomic CAS operation is used to replace the Displaced Mark Word back into the object header. If it succeeds, there was no contention; if it fails, the current lock is contended and will inflate into a heavyweight lock.
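The JVM's Mark Word manipulation is not accessible from Java code, but the "CAS to acquire, spin on failure, CAS to release" pattern can be sketched in userland. A simplified analogy, not the JVM's actual implementation:

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin until the CAS from null to the current thread succeeds.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // CPU hint, available since Java 9
        }
    }

    public void unlock() {
        // Only the owner can release: CAS the owner field back to null.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}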
Comparison of the advantages and disadvantages of the locks
Drawing the threads of this section together:
Biased lock: locking and unlocking cost almost nothing extra; but if the lock is contended, revoking the bias brings additional cost; suited to synchronized blocks only ever run by one thread.
Lightweight lock: competing threads do not block, keeping response time low; but a thread that never gets the lock spins and consumes CPU; suited to short synchronized blocks where response time matters.
Heavyweight lock: competing threads do not spin, so no CPU is wasted on spinning; but threads block and response time slows; suited to long synchronized blocks where throughput matters.
3. Heavyweight lock: synchronized
One special note: when synchronized acts on a static method, it locks the Class instance. Because the Class data lives in the permanent generation PermGen (Metaspace since JDK 1.8), which is globally shared, a static method lock is effectively a class-global lock: it blocks every thread that calls the method.
From the JVM specification we can see the implementation principle of synchronized in the JVM: the JVM implements both method synchronization and code-block synchronization based on entering and exiting Monitor objects, but the details differ. Code-block synchronization is implemented with the monitorenter and monitorexit instructions, while method synchronization is implemented implicitly, via a flag on the method rather than explicit instructions.
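You can verify this yourself: compile a class like the sketch below and inspect it with javap -c; the block form shows explicit monitorenter/monitorexit instructions, while the synchronized method carries the ACC_SYNCHRONIZED flag instead (visible with javap -v).

public class Sync {
    private final Object lock = new Object();

    public void block() {
        synchronized (lock) { // compiles to monitorenter ... monitorexit
            System.out.println("in block");
        }
    }

    public synchronized void method() { // flagged ACC_SYNCHRONIZED, no explicit instructions
        System.out.println("in method");
    }
}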
Summary
The following is excerpted from this blog:
https://blog.csdn.net/xiaobaixiongxiong/article/details/100933396
With all lock types enabled, a thread entering the critical section first tries to acquire the biased lock; if a bias already exists, it tries to acquire the lightweight lock; if both fail, spinning is used. If spinning does not acquire the lock, the heavyweight lock is used, and threads that fail to acquire it are blocked and suspended until the lock-holding thread finishes the synchronized block and wakes them.
The biased lock is used when there is no lock contention, that is, no other thread executes the synchronized block before the current thread finishes. Once a second thread contends, the biased lock is upgraded to a lightweight lock; with further contention it is upgraded to a heavyweight lock.
If thread contention is fierce, biased locking should be disabled (for example, with -XX:-UseBiasedLocking).
Lock optimization
The lock transitions described above are not controllable from our code, but we can borrow the same ideas to optimize our own locking.
Reduce lock time
Code that does not need to execute under the lock should be moved out of the synchronized block, so the lock can be released as soon as possible.
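A hedged before/after sketch of this idea (the names are illustrative): only the shared-state update stays inside the lock.

public class ReduceLockTime {
    private final Object lock = new Object();
    private long total;

    // Before: the expensive computation runs while holding the lock.
    public void slow(int input) {
        synchronized (lock) {
            total += expensiveComputation(input);
        }
    }

    // After: compute outside the lock; hold it only for the shared update.
    public void fast(int input) {
        long result = expensiveComputation(input);
        synchronized (lock) {
            total += result;
        }
    }

    private long expensiveComputation(int input) {
        long r = 0;
        for (int i = 0; i < input; i++) r += i;
        return r;
    }
}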
Reduce the granularity of the lock
The idea is to split one physical lock into multiple logical locks to increase parallelism and thus reduce lock contention: trading space for time. A sketch follows.
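A hedged sketch of lock striping, in the spirit of the segmented pre-JDK 8 ConcurrentHashMap (the stripe count and names are illustrative):

public class StripedCounter {
    private static final int STRIPES = 16;
    private final Object[] locks = new Object[STRIPES];
    private final long[] counts = new long[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    public void increment(int key) {
        int stripe = (key & 0x7fffffff) % STRIPES;
        synchronized (locks[stripe]) { // threads on different stripes do not contend
            counts[stripe]++;
        }
    }

    public long total() {
        long sum = 0;
        for (int i = 0; i < STRIPES; i++) {
            synchronized (locks[i]) {
                sum += counts[i];
            }
        }
        return sum;
    }
}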
(End of excerpt.)
Spin lock vs. adaptive spin lock
Blocking or waking a Java thread requires the operating system to switch CPU state, which costs processor time. If the content of the synchronized block is too simple, this state switching can take longer than executing the user code itself.
In many scenarios, synchronized resources are locked only for a very short time, and suspending a thread and restoring it later just for that moment may cost the system more than it saves. If the physical machine has multiple processors and two or more threads can execute in parallel, we can let the thread that requests the lock later keep its CPU time and see whether the lock-holding thread releases the lock soon.
To make the current thread "wait a moment", we let it spin. If the thread that held the synchronized resource releases the lock before the spin finishes, the current thread acquires the resource directly without blocking, avoiding the overhead of a thread switch. This is the spin lock.
Spin locks have their own flaw: they are not a full substitute for blocking. Spin waiting avoids the overhead of thread switching but occupies processor time. If the lock is held only briefly, spin waiting works very well; if it is held for a long time, the spinning thread just wastes processor resources. Therefore the spin wait must be bounded: if the spin exceeds the limit (10 attempts by default, adjustable via -XX:PreBlockSpin) without acquiring the lock, the thread should be suspended.
Spin locks are also implemented with CAS. In AtomicInteger's source code, the do-while loop that calls Unsafe to perform the increment is a spin operation: if the CAS fails to modify the value, the loop retries until it succeeds.
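A minimal sketch of that pattern (simplified; not the JDK's actual source):

import java.util.concurrent.atomic.AtomicInteger;

public class SpinIncrement {
    public static int getAndIncrement(AtomicInteger value) {
        int current;
        do {
            current = value.get(); // read the current value
            // compareAndSet fails if another thread changed it meanwhile,
            // in which case we loop (spin) and retry.
        } while (!value.compareAndSet(current, current + 1));
        return current;
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        System.out.println(getAndIncrement(counter)); // 0
        System.out.println(counter.get());            // 1
    }
}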
Spin locks were introduced in JDK 1.4.2, enabled with -XX:+UseSpinning. In JDK 6 they are on by default, and the adaptive spin lock was introduced.
Adaptive means the spin time (number of spins) is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock's owner. If, on the same lock object, a spin wait has just successfully acquired the lock and the holding thread is running, the JVM assumes the spin is likely to succeed again and allows the spin wait to last relatively longer. If spinning rarely succeeds for a given lock, the JVM may skip the spin entirely on later acquisitions and block the thread directly, avoiding wasted processor resources.
There are three other common spin-lock variants: TicketLock, CLHLock, and MCSLock. This article only mentions the names; interested readers can consult other materials on their own.
(The above section is adapted from Meituan's technical blog.)
Q: What value does biased locking have for synchronized and for ReentrantLock?
A: Biased locking is useful for synchronized.
The Java biased lock (Biased Locking) is a multithreading optimization introduced in Java 6.
As the name implies, a biased lock favors the first thread that accesses it. If the lock is only ever accessed by one thread, with no multithreaded contention, that thread never needs to trigger real synchronization; in this case, the lock is simply biased toward the thread.
If another thread preempts the lock while the program runs, the thread holding the biased lock is suspended, the JVM revokes the bias, and the lock is restored to a standard lightweight lock.
By eliminating synchronization primitives when there is no resource contention, it further improves the program's runtime performance.
ReentrantLock already achieves the equivalent benefit itself: the owning thread re-acquires the lock by simply incrementing a hold count. synchronized is also reentrant, but at the JVM level.
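A tiny sketch of that reentrancy: the same thread re-enters a monitor it already holds without deadlocking.

public class Reentrant {
    public synchronized void outer() {
        inner(); // re-acquires the monitor this thread already holds
    }

    public synchronized void inner() {
        System.out.println("reentered");
    }
}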
That concludes this tour of the knowledge points of Java concurrent programming. Theory works best when paired with practice, so go and try these things out!