Shulou (Shulou.com) — SLTechnology News & Howtos, Development. 06/03 Report, updated 2025-02-14.
This article introduces thread safety and lock optimization in the Java language: the five degrees of thread safety for shared data, how mutual exclusion and synchronization are implemented, and the lock optimizations the virtual machine applies, including spinning, lock elimination, lock coarsening, lightweight locks, and biased locks.
Contents
1. Thread safety in the Java language
1.1 Immutability
1.2 Absolute thread safety
1.3 Relative thread safety
1.4 Thread compatibility
1.5 Thread hostility
2. How thread safety is implemented
2.1 Mutual exclusion and synchronization
3. Lock optimization
3.1 Spin locks and adaptive spinning
3.2 Lock elimination
3.3 Lock coarsening
3.4 Lightweight locks
3.5 Biased locks
1. Thread safety in the Java language
Ordered by "degree" of thread safety, the data shared between operations in the Java language can be divided into five categories: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-hostile.
1.1 Immutability
Immutable objects are inherently thread-safe: neither the object's method implementations nor the callers of those methods need to take any thread-safety measures.
In the Java language, if the data shared by multiple threads is of a primitive type, declaring it final when it is defined is enough to guarantee immutability. If the shared data is of an object type (the Java language does not yet provide support for value types), then the object itself must guarantee that none of its behavior affects its state.
For example, an instance of the java.lang.String class is a typical immutable object: calling its substring(), replace(), or concat() methods never changes its original value, but only returns a newly constructed string object.
There are many ways to ensure that an object's behavior does not affect its own state. The simplest is to declare all fields that carry state as final, so that the object is immutable once its constructor completes. For example, the constructor of java.lang.Integer in the following code keeps its state unchanged by declaring the internal field value as final:
/**
 * The value of the Integer.
 * @serial
 */
private final int value;

/**
 * Constructs a newly allocated Integer object that
 * represents the specified int value.
 *
 * @param value the value to be represented by the Integer object.
 */
public Integer(int value) {
    this.value = value;
}
Among the types in the Java class library API that satisfy the immutability requirement, besides the String mentioned above, the commonly used ones are the enum types and some subclasses of java.lang.Number, such as the numeric wrapper types Long and Double and the big-number types BigInteger and BigDecimal. Note, however, that AtomicInteger and AtomicLong, which are also Number subtypes, are mutable.
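The contrast can be checked directly: String operations return new objects and leave the original untouched, while an AtomicInteger mutates in place. A minimal sketch (the class name ImmutabilityDemo is ours):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ImmutabilityDemo {
    public static void main(String[] args) {
        String s = "hello";
        String upper = s.concat(" world"); // returns a NEW String object
        System.out.println(s);             // original value unchanged: hello
        System.out.println(upper);         // hello world

        AtomicInteger counter = new AtomicInteger(0);
        counter.incrementAndGet();         // mutates the SAME object
        System.out.println(counter.get()); // 1
    }
}
```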
1.2 Absolute thread safety
Most classes in the Java API that mark themselves as thread-safe are not absolutely thread-safe. java.util.Vector is a thread-safe container: its add(), get(), size(), and other methods are all modified by synchronized, which guarantees atomicity, visibility, and ordering. However, even though all of its methods are synchronized, that does not mean callers never need synchronization of their own.
The following code tests the thread safety of Vector:
private static Vector vector = new Vector();

public static void main(String[] args) {
    while (true) {
        for (int i = 0; i < 10; i++) {
            vector.add(i);
        }

        Thread removeThread = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < vector.size(); i++) {
                    vector.remove(i);
                }
            }
        });

        Thread printThread = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < vector.size(); i++) {
                    System.out.println(vector.get(i));
                }
            }
        });

        removeThread.start();
        printThread.start();

        // Do not spawn too many threads at once, or the operating system may hang
        while (Thread.activeCount() > 20);
    }
}
The running results are as follows:
Exception in thread "Thread-132" java.lang.ArrayIndexOutOfBoundsException:
Array index out of range: 17
    at java.util.Vector.remove(Vector.java:777)
    at org.fenixsoft.mulithread.VectorTest$1.run(VectorTest.java:21)
    at java.lang.Thread.run(Thread.java:662)
Although the get(), remove(), and size() methods of Vector used here are all synchronized, this code is still unsafe in a multithreaded environment without additional synchronization on the caller's side. If another thread removes an element at just the wrong moment, the index i may no longer be valid, and accessing the array with i will throw an ArrayIndexOutOfBoundsException. To make this code execute correctly, the definitions of removeThread and printThread must be changed as follows:
Thread removeThread = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (vector) {
            for (int i = 0; i < vector.size(); i++) {
                vector.remove(i);
            }
        }
    }
});

Thread printThread = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (vector) {
            for (int i = 0; i < vector.size(); i++) {
                System.out.println(vector.get(i));
            }
        }
    }
});
If Vector had to be absolutely thread-safe, it would have to maintain a consistent set of snapshots internally and generate a new snapshot on every modification of its elements, which would cost a great deal of time and space.
1.3 Relative thread safety
Relative thread safety is what we usually mean by thread safety. It guarantees that individual operations on the object are thread-safe, so no extra safeguards are needed when calling them; however, for particular sequences of consecutive calls, additional synchronization on the caller's side may still be required to guarantee correctness.
In the Java language, most classes that claim to be thread-safe are of this type, such as Vector, Hashtable, the collections wrapped by the Collections.synchronizedCollection() method, and so on.
1.4 Thread compatibility
Thread compatibility means that the object itself is not thread-safe but can be used safely in a concurrent environment by applying synchronization correctly on the caller's side. This is usually what we mean when we say a class is not thread-safe. Most classes in the Java class library API are thread-compatible, such as ArrayList and HashMap, the counterparts of the earlier Vector and Hashtable.
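One standard way to make a thread-compatible collection safe on the caller's side is to wrap it with Collections.synchronizedList. A sketch (the class name SafeListDemo is ours; the wrapper API is standard java.util):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SafeListDemo {
    public static void main(String[] args) throws InterruptedException {
        // ArrayList itself is thread-compatible; the wrapper adds the locking
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());

        Runnable adder = () -> {
            for (int i = 0; i < 1000; i++) {
                list.add(i); // each individual call is internally synchronized
            }
        };
        Thread t1 = new Thread(adder);
        Thread t2 = new Thread(adder);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(list.size()); // 2000, no lost updates

        // Compound operations such as iteration still need client-side locking:
        synchronized (list) {
            for (Integer ignored : list) { /* safe traversal */ }
        }
    }
}
```

Note that this only makes single operations safe; as the Vector example above showed, sequences of calls still need the explicit synchronized block.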
1.5 Thread hostility
Thread hostility means code that cannot be used concurrently in a multithreaded environment, regardless of whether the caller takes synchronization measures. Since the Java language is inherently multithreaded, such code, which rules out multithreading altogether, is rare, usually harmful, and should be avoided whenever possible.
2. How thread safety is implemented
2.1 Mutual exclusion and synchronization
Mutually exclusive synchronization (Mutual Exclusion & Synchronization) is the most common and most important means of guaranteeing the correctness of concurrency. Synchronization means that when multiple threads access shared data concurrently, only one thread at a time may use it. Mutual exclusion is a means of achieving synchronization; critical sections (Critical Section), mutexes (Mutex), and semaphores (Semaphore) are common ways to implement it. Thus in the phrase "mutually exclusive synchronization", mutual exclusion is the cause and synchronization the effect; mutual exclusion is the means and synchronization the end.
In Java, the most basic means of mutually exclusive synchronization is the synchronized keyword, a block-structured (Block Structured) synchronization construct. After compilation by javac, the synchronized keyword produces two bytecode instructions, monitorenter and monitorexit, at the beginning and end of the synchronized block. Both instructions take a reference-type parameter indicating the object to lock and unlock.
If the synchronized in the Java source code explicitly specifies an object argument, the reference to that object is used as the lock. If not, then depending on whether synchronized modifies an instance method or a class method, either the object instance on which the method is invoked or the Class object corresponding to the type is taken as the lock the thread must hold.
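The three cases can be verified with Thread.holdsLock(), which reports whether the current thread holds the monitor of a given object. A sketch (the class name SyncForms and its method names are ours):

```java
public class SyncForms {
    // synchronized instance method: the lock is 'this'
    public synchronized boolean holdsThis() {
        return Thread.holdsLock(this);
    }

    // synchronized static method: the lock is the Class object
    public static synchronized boolean holdsClass() {
        return Thread.holdsLock(SyncForms.class);
    }

    // synchronized block with an explicit object argument: the lock is that object
    public boolean holdsExplicit(Object lock) {
        synchronized (lock) {
            return Thread.holdsLock(lock);
        }
    }
}
```

All three methods return true when called, confirming which monitor each form of synchronized acquires.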
When executing the monitorenter instruction, the thread first tries to acquire the lock on the object. If the object is not locked, or the current thread already holds the lock on it, the lock counter is incremented by one; when the monitorexit instruction is executed, the counter is decremented by one, and once it reaches zero the lock is released. If acquiring the object lock fails, the current thread blocks and waits until the object's lock is released by the thread that holds it.
From the above description, some of the characteristics of the synchronized lock can be obtained:
Blocks modified by synchronized are reentrant for the same thread: a thread can repeatedly enter a block synchronized on a lock it already holds without deadlocking itself. A synchronized block also unconditionally blocks all other threads from entering until the thread holding the lock has finished executing and released it. This means you cannot force a thread that has acquired the lock to release it (as locks in some databases allow), nor force a thread waiting for the lock to interrupt its wait or time out.
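Reentrancy is easy to demonstrate: a synchronized method that calls another synchronized method on the same object re-enters the lock on this without deadlocking. A minimal sketch (the class name ReentrantDemo is ours):

```java
public class ReentrantDemo {
    public synchronized int outer() {
        // Already holding the lock on 'this', we re-enter it via inner().
        // The monitor's counter goes 1 -> 2 -> 1 instead of deadlocking.
        return inner() + 1;
    }

    public synchronized int inner() {
        return 1;
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // prints 2
    }
}
```

If synchronized were not reentrant, outer() would block forever waiting on a lock its own thread holds.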
From the perspective of execution cost, holding a lock with synchronized is a heavyweight (Heavy-Weight) operation in the Java language. Mainstream Java virtual machine implementations map Java threads to native kernel threads of the operating system, so blocking or waking a thread requires the operating system's help, which entails a transition from user mode to kernel mode that consumes a great deal of processor time.
In fact, the virtual machine itself applies some optimizations to synchronized locking, such as adding a spin-wait phase before asking the operating system to block the thread, to avoid frequently switching into kernel mode (discussed in the next section).
3. Lock optimization
3.1 Spin locks and adaptive spinning
Spin lock: when a thread competes for a shared resource that is already held by another thread, it does not immediately enter a blocked state; instead it busy-waits ("spins"), repeatedly checking whether the thread holding the lock has released the resource. Of course, a thread cannot spin forever: there must be a limit, and if the lock still has not been acquired after the limit is exceeded, the thread should be suspended in the traditional way. The default spin count is 10, and it can be changed with the -XX:PreBlockSpin parameter.
JDK 6 introduced adaptive spinning as an optimization of spin locks. "Adaptive" means the spin duration is no longer fixed but is determined by the previous spin time on the same lock and the state of the lock's owner. If, on the same lock object, a spin wait has just succeeded in acquiring the lock and the thread holding the lock is running, the virtual machine assumes the spin is likely to succeed again and allows it to last relatively longer, say 100 busy loops. Conversely, if spinning rarely succeeds for a given lock, the spin phase may be omitted entirely in later acquisitions to avoid wasting processor resources.
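The basic idea can be sketched at the user level with an AtomicBoolean: spin a bounded number of times, then give up the processor. This is only an illustration under our own names (SpinLock, SPIN_LIMIT); HotSpot performs its spinning inside the virtual machine, not in Java code, and suspends the thread rather than merely yielding.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private static final int SPIN_LIMIT = 10; // mirrors the default spin count above

    public void lock() {
        int spins = 0;
        // Busy-wait: repeatedly try to flip the flag from false to true.
        while (!locked.compareAndSet(false, true)) {
            if (++spins >= SPIN_LIMIT) {
                // A real VM would suspend the thread here; yielding is a
                // simplified stand-in for giving up the processor.
                Thread.yield();
                spins = 0;
            }
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

The trade-off is visible in the loop: spinning avoids the kernel transition when the lock is released quickly, but burns processor time if the wait is long.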
3.2 Lock elimination
Lock elimination: when the virtual machine's just-in-time compiler runs, it removes locks on code that requires synchronization in source form but is detected to have no possible contention on shared data.
The main judgment for lock elimination comes from escape analysis: if it determines that, within a piece of code, no data on the heap escapes to be accessed by other threads, that data can be treated as if it were on the stack and considered thread-private, so synchronization is naturally unnecessary.
Examples are as follows: (example 2-1)
public String concatString(String s1, String s2, String s3) {
    return s1 + s2 + s3;
}
Because String is an immutable class, string concatenation always produces new String objects, so the javac compiler automatically optimizes string concatenation: before JDK 5, string addition was translated into successive append() calls on a StringBuffer object; in JDK 5 and later, into successive append() calls on a StringBuilder object.
Optimized as follows: (example 2-2)
public String concatString(String s1, String s2, String s3) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    sb.append(s3);
    return sb.toString();
}
Do you still think this code involves no synchronization? Every StringBuffer.append() call contains a synchronized block, and the lock is the sb object. The virtual machine observes the variable sb and, after escape analysis, finds that its dynamic scope is confined to the inside of the concatString() method: no reference to sb ever escapes the method, and no other thread can access it. So although locks are present here, they can be safely eliminated. During interpreted execution the locks are still taken, but after just-in-time compilation by the server compiler, this code ignores all synchronization and executes directly.
Strictly speaking, since lock elimination and escape analysis are involved, the virtual machine cannot be a pre-JDK 5 version, so the concatenation will actually be translated into operations on the non-thread-safe StringBuilder and no locking occurs at all. But that does not prevent the author from using this example to demonstrate the ubiquity of synchronization in Java objects.
3.3 Lock coarsening
In principle, when writing code it is always recommended to keep the scope of synchronized blocks as small as possible, synchronizing only over the actual scope of the shared data, so as to minimize the number of operations that must be synchronized. Then even under lock contention, a waiting thread can obtain the lock as quickly as possible.
In most cases this principle is correct. But if a series of consecutive operations repeatedly locks and unlocks the same object, or locking even appears inside a loop body, then frequent mutual-exclusion synchronization causes unnecessary performance loss even when there is no thread contention.
The consecutive append() calls shown in example 2-2 are such a case. If the virtual machine detects a series of piecemeal operations that all lock the same object, it extends (coarsens) the scope of the lock to cover the whole sequence of operations. In example 2-2, the lock is extended to before the first append() and after the last one, so it only needs to be taken once.
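The effect of coarsening can be imitated by hand: instead of taking the StringBuffer's lock once per append() as in example 2-2, hold one lock around the whole sequence. This sketch uses an explicit synchronized block on an unsynchronized StringBuilder to make the single coarsened lock visible; the class name CoarsenDemo is ours, and the real optimization is performed by the JIT compiler, not written by the programmer.

```java
public class CoarsenDemo {
    public static String concat(String s1, String s2, String s3) {
        StringBuilder sb = new StringBuilder(); // unsynchronized; we lock explicitly
        synchronized (sb) {                     // one lock around the whole sequence,
            sb.append(s1);                      // instead of one lock per append()
            sb.append(s2);
            sb.append(s3);
        }
        return sb.toString();
    }
}
```

Three lock/unlock pairs collapse into one, which is exactly the saving lock coarsening provides when contention is absent.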
3.4 Lightweight locks
The lightweight lock is a locking mechanism added in JDK 6. The "lightweight" in its name is relative to the traditional lock implemented with operating system mutexes, which is therefore called the "heavyweight" lock. Lightweight locks are not meant to replace heavyweight locks; their purpose is to reduce the performance cost of using operating system mutexes when there is no multithreaded contention.
Because the object header information is an additional storage cost independent of the data defined by the object itself, considering the space efficiency of the Java virtual machine, Mark Word is designed as a non-fixed dynamic data structure to store as much information as possible in a very small space. It reuses its own storage space according to the state of the object.
For example, in a 32-bit HotSpot virtual machine, when the object is not locked, 25 bits out of the 32 bit space of Mark Word will be used to store the object hash code, 4 bits will be used to store the object's generational age, 2 bits will be used to store lock flag bits, and 1 bit will be fixed to 0 (this indicates that it is not in biased mode).
Besides the normal unlocked state, an object can be in several other states: lightweight locked, heavyweight locked, GC marked, and biasable. In these states the lock flag bits of the Mark Word are, respectively, "00" (lightweight locked), "10" (heavyweight locked), "11" (GC mark), and "01" with the biased bit set to "1" (biasable), and the remaining bits are reused to store the content appropriate to each state.
When the code is about to enter a synchronized block, if the synchronization object is not locked (the lock flag bits are "01"), the virtual machine first creates a space called a lock record (Lock Record) in the current thread's stack frame, used to store a copy of the lock object's current Mark Word.
The virtual machine then uses a CAS operation to attempt to update the object's Mark Word to a pointer to the lock record. If this update succeeds, the thread owns the lock on the object, and the lock flag bits of the Mark Word (its last two bits) change to "00", indicating that the object is in the lightweight-locked state.
If this update fails, at least one other thread is competing with the current thread for the object's lock. The virtual machine first checks whether the object's Mark Word points to the current thread's stack frame: if so, the current thread already owns the lock and simply enters the synchronized block; otherwise, the lock has been taken by another thread. If two or more threads contend for the same lock, the lightweight lock is no longer effective and must be inflated into a heavyweight lock. The lock flag changes to "10", the Mark Word then stores a pointer to the heavyweight lock (mutex), and the threads waiting for the lock enter the blocked state.
The above describes the locking process of a lightweight lock; its unlocking process is likewise done with a CAS operation. If the object's Mark Word still points to the thread's lock record, a CAS is used to swap the object's current Mark Word with the Displaced Mark Word copied into the thread. If the swap succeeds, the whole synchronization completes successfully; if it fails, another thread has tried to acquire the lock in the meantime, so the lock must be released and the suspended threads woken up.
Lightweight locks improve synchronization performance based on the empirical rule that, for most locks, there is no contention during the entire synchronization cycle. If there is no contention, the lightweight lock uses CAS operations to avoid the overhead of a mutex entirely; but if there is contention, the CAS operations become extra overhead on top of the mutex itself. Therefore, under contention, lightweight locks are slower than traditional heavyweight locks.
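The uncontended fast path can be sketched with a single CAS on an owner field: one atomic instruction replaces a mutex when nobody competes. The names CasLock and owner are ours; a real lightweight lock operates on the object header's Mark Word and stack lock records, not on a Java field, and inflates to a mutex under contention rather than simply failing.

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasLock {
    // null means unlocked; otherwise holds the owning thread
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public boolean tryLock() {
        // One CAS succeeds when uncontended, like the lightweight-lock fast path
        return owner.compareAndSet(null, Thread.currentThread());
    }

    public void unlock() {
        // CAS back to null, loosely analogous to restoring the Displaced Mark Word;
        // silently does nothing if the caller is not the owner
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Note that unlike synchronized, this toy lock is not reentrant: a second tryLock() by the owner fails because the CAS from null cannot succeed.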
3.5 Biased locks
The biased lock is also a lock optimization introduced in JDK 6. Its purpose is to eliminate synchronization primitives when data is uncontended, further improving performance. If the lightweight lock uses a CAS operation to eliminate the mutex in uncontended synchronization, the biased lock eliminates the entire synchronization in the uncontended case, including even the CAS operation.
The "bias" means the lock is biased toward the first thread that acquires it: if the lock is never acquired by another thread during the rest of execution, the thread holding the biased lock never needs to synchronize again.
Assuming the virtual machine has biased locking enabled (with the parameter -XX:+UseBiasedLocking, the default for the HotSpot virtual machine since JDK 6), when a lock object is acquired by a thread for the first time, the virtual machine sets the flag bits in the object header to "01" and the biased bit to "1", indicating biased mode, and at the same time uses a CAS operation to record the ID of the acquiring thread in the object's Mark Word. If the CAS succeeds, then every time the thread holding the biased lock enters a synchronized block on that lock, the virtual machine performs no synchronization operations at all (such as locking, unlocking, or updating the Mark Word).
As soon as another thread attempts to acquire the lock, biased mode ends immediately. Depending on whether the lock object is currently locked, the bias is revoked (the biased bit is set back to "0") and the object reverts to the unlocked state (flag bits "01") or the lightweight-locked state (flag bits "00"); subsequent synchronization proceeds as described for lightweight locks above.
This concludes the study of thread safety and lock optimization in the Java language. Pairing the theory above with practice is the best way to learn, so go and try it out!