How to master Java concurrency


This article introduces "how to master Java concurrency". Many people have doubts about this topic in daily work, so the editor has consulted all kinds of materials and sorted them into simple, practical notes. I hope it helps answer your doubts about "how to master Java concurrency". Now, follow the editor and study!

1 、 HashMap

The first interview question is usually HashMap, which tests a Java developer's fundamentals. Don't ask why it appears here; it is important! HashMap has the following characteristics:

Iteration order over a HashMap is not guaranteed.

Null keys and null values are allowed (at most one null key).

The class is not thread-safe under multithreading; if you need thread safety, consider Hashtable or, better, ConcurrentHashMap.

The JDK8 implementation is array + linked list + red-black tree; JDK7 is array + linked list.

The initial capacity and load factor largely determine the performance of the whole class, so don't change them casually.

HashMap is lazily initialized: the internal table is only built on the first put.

When a singly linked list is converted to a red-black tree, it first becomes a doubly linked list and finally a red-black tree; keep in mind that the doubly linked list and the red-black tree coexist.

For two colliding keys, a forced tie-breaking comparison decides whether a node is placed to the left or the right of the tree.

After the linked list is converted to a red-black tree, the code works to keep the tree root, the head of the linked list, and table[i] pointing at the same node.

On deletion, the code first checks whether the tree has shrunk enough that it should degenerate back to a linked list; if it stays a tree, deletion proceeds as in a standard red-black tree, finding a suitable node to replace the deleted one.

The root of the red-black tree is not necessarily the same node as table[i] (the head of the linked list); the three are kept in sync by moveRootToFront. HashIterator.remove() calls removeNode with movable=false.
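
To make the first few characteristics concrete, here is a minimal sketch (class name and values are mine, not from the original):

import java.util.HashMap;
import java.util.Map;

public class HashMapDemo {
    public static void main(String[] args) {
        // The table is not allocated yet; it is built lazily on the first put.
        Map<String, Integer> map = new HashMap<>(16, 0.75f); // initial capacity and load factor

        map.put(null, 0);      // one null key is allowed
        map.put("a", null);    // null values are allowed
        map.put("b", 2);

        // Iteration order is not guaranteed to match insertion order.
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}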

Common HashMap interview points:

The HashMap principle and its internal data structure.

The general flow of put, get and remove.

How the hash function is implemented.

How HashMap resizes.

Why HashMap's important parameters are set the way they are.

Why HashMap is not thread-safe, and what to use instead.

Differences between HashMap in JDK7 and JDK8.

Switching between the linked list and the red-black tree.

How the linked-list ring (infinite loop) can form in JDK7.

2 、 ConcurrentHashMap

ConcurrentHashMap is a commonly used concurrent container, and its JDK7 and JDK8 implementations differ substantially.

2.1 JDK7

ConcurrentHashMap in JDK7 uses Segment + HashEntry segment locks to achieve concurrency. Its drawback is that the concurrency level is fixed by the size of the Segment array at initialization; after that, only the HashEntry arrays inside each segment can grow.

Segment inherits from ReentrantLock and plays the role of the lock here. Each Segment can be understood as a small HashMap with Lock semantics; multiple Segments form the Segment array, which is what enables concurrency.

The general put flow raises a few classic questions:

1. What is the underlying implementation of ConcurrentHashMap?

ConcurrentHashMap allows multiple modification operations to proceed concurrently; the key is its lock-striping (lock separation) technique. It uses multiple locks to control changes to different parts of the hash table. Internally, Segments represent these parts; each Segment is really a small hash table, and as long as modifications land on different Segments they can proceed in parallel.
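
A minimal lock-striping sketch in the spirit of the JDK7 design; StripedMap and its fields are invented for illustration and are not the real JDK source:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: a tiny striped-lock map, not the real ConcurrentHashMap.
public class StripedMap<K, V> {
    private static final int STRIPES = 16;              // like the default concurrency level
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    @SuppressWarnings("unchecked")
    private final Map<K, V>[] parts = new Map[STRIPES]; // each part plays the Segment role

    public StripedMap() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new ReentrantLock();
            parts[i] = new HashMap<>();
        }
    }

    private int stripe(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    public V put(K key, V value) {
        int i = stripe(key);
        locks[i].lock();           // writes to different stripes do not contend
        try {
            return parts[i].put(key, value);
        } finally {
            locks[i].unlock();
        }
    }
}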

2. How does ConcurrentHashMap ensure that reads see up-to-date elements under concurrency?

HashEntry, which stores the key-value pairs, declares its value and next fields volatile. This guarantees that when one thread modifies a value, the change is immediately visible to get in other threads, so get needs no lock.

3. The weak consistency of ConcurrentHashMap shows up in the clear and get methods, because they do not lock.

For example, the iterator traverses the table Segment by Segment; if another thread happens to insert data into a Segment that has already been traversed, the iterator will not reflect it, hence the inconsistency. clear behaves the same way. Both get and containsKey traverse all the nodes at the corresponding index and examine them without locking: if an existing node is modified, visibility guarantees the latest value is read, but newly inserted nodes may be missed.

4. The size count is not exact

The size method is interesting: it first counts all the data twice without locking and checks whether the two counts agree; if they do, it returns that count, and if not, it locks all the segments, counts, and unlocks. Even then, size simply returns a snapshot statistic.

2.2 JDK8

In JDK8, ConcurrentHashMap abandoned segment locks in favor of CAS + synchronized, changed HashEntry to Node, and added a red-black tree implementation. The key flow is put (if you can also explain the resize/transfer logic, you will really impress the interviewer).

How does ConcurrentHashMap achieve efficient concurrency safety?

1. Read operation

The get method uses no locking at all, so reads always proceed concurrently; visibility is guaranteed by the volatile fields of the table nodes.

2. Write operation

The basic idea is similar to HashMap's write path, except that CAS + synchronized is used for locking, and resizing may be involved. In JDK8 the lock granularity is refined to a single bucket, table[i]: writes to different array slots can proceed concurrently, and a thread that hits a slot being moved can help with the resize.

3. Synchronization relies mainly on synchronized plus the hardware-level atomicity exposed through Unsafe.

Writes to a given table[i] bucket are locked with synchronized on the bucket's head node.

Reads use Unsafe's volatile-style accesses to fetch the latest data directly from the specified memory location.
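
A simplified sketch of that bucket-level pattern (empty bucket: CAS the node in; occupied bucket: synchronize on the head node). It paraphrases the JDK8 idea using AtomicReferenceArray instead of Unsafe and is not the real JDK source:

import java.util.concurrent.atomic.AtomicReferenceArray;

// Illustrative bucket-level locking in the spirit of JDK8 ConcurrentHashMap.put.
public class BucketLockDemo<K, V> {
    static final class Node<K, V> {
        final K key; volatile V val; volatile Node<K, V> next;
        Node(K key, V val) { this.key = key; this.val = val; }
    }

    private final AtomicReferenceArray<Node<K, V>> table = new AtomicReferenceArray<>(16);

    public void put(K key, V val) {
        int i = (key.hashCode() & 0x7fffffff) % table.length();
        for (;;) {
            Node<K, V> head = table.get(i);
            if (head == null) {
                // Empty bucket: try to install the node lock-free with CAS.
                if (table.compareAndSet(i, null, new Node<>(key, val))) return;
                // Lost the CAS race; retry the loop.
            } else {
                synchronized (head) {                 // lock only this bucket
                    if (table.get(i) == head) {       // re-check after locking
                        for (Node<K, V> n = head; ; n = n.next) {
                            if (n.key.equals(key)) { n.val = val; return; }
                            if (n.next == null) { n.next = new Node<>(key, val); return; }
                        }
                    }
                    // Head changed under us; fall through and retry.
                }
            }
        }
    }
}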

3. Basic knowledge of concurrency

The starting point of concurrent programming: make full use of CPU computing resources. Multithreading is not necessarily faster than a single thread; otherwise, why does the core command-processing path of Redis 6.0 remain single-threaded?

Whether multiple threads outperform a single thread must be analyzed for the specific task at hand; talk is cheap, measure it.

3.1 processes and threads

Process:

A process is the smallest unit the operating system invokes and the independent unit of resource allocation and scheduling by the system.

Thread:

Because creating, destroying and switching processes costs a great deal of time and space, the number of processes should not be too large. A thread is a basic unit smaller than a process that can run independently; it is an entity of the process and the smallest unit of CPU scheduling. Threads reduce the time and space overhead of concurrent execution and give the operating system better concurrency.

Threads own essentially no system resources of their own, only what is essential at run time, such as a program counter, registers and a stack, while the process owns the heap and its address space. A Java program has two threads by default, main and GC. Java itself has no ability to start an OS thread directly and cannot operate the hardware; the native start0 method is implemented in C++.

3.2 parallel and concurrency

Concurrency:

Concurrency: multiple threads operate on the same resource; a single-core CPU switches between tasks extremely fast, running multiple tasks seemingly at once.

Parallel:

Parallelism: multiple CPUs (multiple cores) are used at the same time; with a multicore CPU, tasks truly execute simultaneously.

3.3 several states of the thread

There are six types of thread states in Java:

1. Initial (New):

A new thread object has been created, but the start() method has not yet been called.

2. Ready (Runnable):

After the thread's start() method is called, it enters the ready state. Ready only means the thread is qualified to run; if the scheduler never grants it CPU resources, it stays ready forever.

A thread also enters the ready state when its sleep() ends, when another thread it joined via join() finishes, when awaited user input completes, or when it obtains an object lock.

The current thread also enters the ready state when its time slice runs out or when its yield() method is called.

A thread in the lock pool that obtains the object lock enters the ready state.

3. Running (Running):

A thread in the ready state becomes running after obtaining a CPU time slice. This is also the only way a thread can enter the running state.

4. Blocked (Blocked):

The blocked state is when a thread blocks on entering a method or code block modified by the synchronized keyword (waiting to acquire the lock).

5. Waiting (Waiting) and timed waiting (Timed_Waiting):

Threads in the waiting state are allocated no CPU execution time; they must be explicitly woken (notified or interrupted), otherwise they wait indefinitely.

Threads in the timed-waiting state are also allocated no CPU execution time, but they need not wait indefinitely to be explicitly woken by another thread: they wake automatically once the set time elapses.

6. Terminated (Terminated):

A thread terminates when it finishes running normally or is aborted by an exception. Once terminated, it cannot be revived.

PS:

A thread calling obj.wait() must first hold obj's monitor; wait() releases obj's monitor and enters the waiting state. That is why wait()/notify() are used together with synchronized.

In fact, moving a thread from the blocked/waiting state back to runnable involves the synchronization queue and the wait queue, which come up again under AQS.
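
A small sketch that observes some of these states via Thread.getState(); the exact states printed can vary with scheduling:

public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait();            // releases the monitor, thread goes WAITING
                } catch (InterruptedException ignored) { }
            }
        });

        System.out.println(t.getState());   // NEW
        t.start();
        Thread.sleep(100);                  // give t time to reach wait()
        System.out.println(t.getState());   // WAITING (typically)

        synchronized (lock) {
            lock.notify();                  // moves t back toward RUNNABLE
        }
        t.join();
        System.out.println(t.getState());   // TERMINATED
    }
}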

3.4. The difference between blocking and waiting

Blocking:

When a thread tries to acquire an object lock (an intrinsic lock, i.e. synchronized, not a lock from the JUC library) held by another thread, the thread enters the blocked state. Its characteristics: it is simple to use, the JVM scheduler decides when the thread wakes, no other thread needs to wake it explicitly, and it does not respond to interrupts.

Waiting:

When a thread waits for another thread to notify it of some condition, it enters the waiting state. This requires another thread to wake it explicitly, is more flexible and more semantic, and responds to interrupts. Examples: Object.wait(), Thread.join(), and waiting on a JUC Lock or Condition.

Although synchronized and JUC's Lock both implement locking, the waiting thread enters different states: synchronized puts the thread into the blocked state, while JUC's Lock blocks/wakes with park()/unpark(), which puts the thread into the waiting state. Although the states differ while waiting for the lock, once woken both return to the Runnable state, so behaviorally the effect is the same.

3.5 the difference between yield and sleep

Both yield and sleep pause the current thread without releasing any lock resources. sleep lets you specify a concrete sleep time, while yield depends on the CPU's time slicing.

The sleep method gives other threads a chance to run regardless of priority, so low-priority threads also get a chance; the yield method only yields to threads of the same or higher priority.

Calling sleep puts the thread into the timed-waiting state until the sleep time elapses, while calling yield moves the thread straight to the ready state: sleep becomes ready only after the set time, yield immediately.

The sleep method declares that it throws InterruptedException, while yield declares no exceptions.

yield cannot be interrupted, while sleep responds to interrupts.

The sleep method is more portable than yield (which depends on the operating system's CPU scheduling).

3.6 the difference between wait and sleep

1. Different origins

wait comes from Object, sleep from Thread.

2. Whether the lock is released

wait releases the lock; sleep does not.

3. Scope of use

wait must be called inside a synchronized block; sleep can be used anywhere.

4. Exception handling

Both declare InterruptedException, so both must be caught or propagated.
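
A small sketch contrasting the two (both calls must handle InterruptedException; wait must hold the monitor it waits on):

public class WaitVsSleepDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();

        Thread waiter = new Thread(() -> {
            synchronized (lock) {           // wait() requires owning the monitor
                try {
                    lock.wait();            // releases the lock while waiting
                } catch (InterruptedException ignored) { }
                System.out.println("waiter woke up");
            }
        });
        waiter.start();
        Thread.sleep(100);                  // sleep() keeps any locks it owns

        synchronized (lock) {               // we can take the lock: wait released it
            lock.notify();
        }
        waiter.join();
    }
}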

3.7 implementation of multithreading

Extend Thread and override the run method.

Implement the Runnable interface's run method, then wrap it in a Thread. Thread is the thread object, Runnable is the task; to start, there must ultimately be a Thread object.

Implement the Callable interface, wrap it in a FutureTask, and wrap the FutureTask in a Thread. The difference from Runnable is that Callable's call method has a return value and can throw exceptions, and the wrapping FutureTask caches the result.

Create threads through a thread pool.

Use Spring's @Async annotation.
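
A compact sketch of the first three approaches (the pool and @Async variants are omitted for brevity):

import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CreateThreadsDemo {
    public static void main(String[] args) throws Exception {
        // 1. Extend Thread.
        Thread t1 = new Thread() {
            @Override public void run() { System.out.println("from Thread subclass"); }
        };

        // 2. A Runnable task wrapped in a Thread.
        Thread t2 = new Thread(() -> System.out.println("from Runnable"));

        // 3. Callable: returns a value, may throw; FutureTask caches the result.
        Callable<Integer> task = () -> 42;
        FutureTask<Integer> future = new FutureTask<>(task);
        Thread t3 = new Thread(future);

        t1.start(); t2.start(); t3.start();
        System.out.println("callable returned " + future.get()); // blocks until done
    }
}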

3.8 deadlock

Deadlock means that two or more threads each hold resources the other needs. Because of how locks such as synchronized work, once a thread holds a resource, that is, has acquired a lock, other threads cannot get that lock until it is released and will wait forever; when each side waits on the other, the result is deadlock.

Interviewer: explain to me what deadlock is, and then I'll hire you.

Interviewee: send the offer first, and then I'll explain what deadlock is.

Necessary conditions:

Mutual exclusion: a resource, i.e. a lock, can be held by only one thread at a time; once a thread acquires the lock, no other thread can acquire it until that thread releases it.

Hold and wait: a thread that already holds one lock, while acquiring another, does not release the lock it holds even if the new one cannot be obtained.

No preemption: no thread can forcibly take a lock already held by another thread.

Circular wait: thread A holds the lock thread B needs, while thread B holds the lock thread A needs.

Detection:

1. jps -l to locate the process id.

2. jstack <pid> to see the deadlock report.

Avoidance:

Lock ordering: all threads acquire locks in the same order.

Timed locking: give the thread a time limit when acquiring a lock; if it cannot get the lock within that time, give up. This requires the Lock API (for example tryLock), as sketched below.
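
A sketch of the timed-locking idea with ReentrantLock.tryLock; class and method names here are illustrative. If the second lock cannot be obtained in time, the thread backs off instead of deadlocking:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock A = new ReentrantLock();
    private static final ReentrantLock B = new ReentrantLock();

    // Acquire both locks or neither: back off and report failure on timeout.
    static boolean doWork(ReentrantLock first, ReentrantLock second) throws InterruptedException {
        if (!first.tryLock(100, TimeUnit.MILLISECONDS)) return false;
        try {
            if (!second.tryLock(100, TimeUnit.MILLISECONDS)) return false;
            try {
                return true;                 // both locks held: do the real work here
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) {
        new Thread(() -> { try { System.out.println("t1: " + doWork(A, B)); } catch (InterruptedException ignored) { } }).start();
        new Thread(() -> { try { System.out.println("t2: " + doWork(B, A)); } catch (InterruptedException ignored) { } }).start();
    }
}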

4 、 JMM

4.1 Origin of JMM

CPU, memory and disk speeds differ enormously. To speed things up, the L1, L2 and L3 caches were introduced, so a running program now obtains its data through this cache hierarchy.

This increases speed but creates cache-coherence and memory-visibility problems. Meanwhile, the compiler and CPU also introduced instruction reordering for speed. Roughly speaking, your code will still produce results consistent with the logic you wrote, but the JVM and the hardware are smart enough to reorder instructions to go faster.

1. Compiler reordering: the compiler can rearrange the execution order of statements without changing the semantics of a single-threaded program.

2. Instruction-level parallel reordering: modern processors use instruction-level parallelism to reorder instructions as long as data dependencies are not violated.

3. Memory-system reordering: the processor's caches and read/write buffers make loads and stores appear reordered.

Instruction reordering leads to ordering problems, and concurrent programming constantly involves communication and synchronization between threads, generally summarized as visibility, atomicity and ordering. Underneath these three problems lie cache coherence, memory visibility and ordering.

Atomicity: an operation is indivisible. Whether on multi-core or single-core, an atomic quantity can be operated on by only one thread at a time; an operation that cannot be interrupted by the thread scheduler can be considered atomic, for example a = 1.

Visibility: when multiple threads access the same variable and one thread modifies its value, the other threads can see the modified value immediately. In Java, visibility can be considered to be provided by volatile, synchronized and final.

Ordering: the program executes in the order of the code; Java guarantees it with volatile and synchronized.

To guarantee the correctness of shared memory (visibility, ordering, atomicity), the memory model defines a specification for the read/write behavior of multithreaded programs on shared memory; this is the JMM. Note that the JMM is only a convention, a mechanism and specification for guaranteeing consistent results. It governs the synchronization of data between working memory and main memory, specifying how and when that synchronization happens.

In JMM, there are two rules:

All of a thread's operations on shared variables must happen in its own working memory; it cannot read or write main memory directly.

Different threads cannot access each other's working memory; variable values are passed between threads through main memory.

To make a shared variable's update visible, two steps must happen:

Flush the updated shared variable from local memory 1 to main memory.

Read the latest value of the shared variable from main memory into local memory 2.

At the same time, three concepts were put forward to guarantee the system's visibility, atomicity and ordering: memory barriers, happens-before and as-if-serial.

4.2 memory barrier

A memory barrier (Memory Barrier) is a CPU instruction that controls reordering and memory visibility under certain conditions. The Java compiler also prohibits certain reorderings according to memory-barrier rules: it inserts barrier instructions at the appropriate points in the generated instruction sequence to forbid particular kinds of processor reordering, letting the program execute as expected. Barriers do two things:

They ensure the order in which specific operations execute.

They affect the memory visibility of certain data (or of the result of executing an instruction).

Memory barriers are used by volatile, described in detail in the volatile section.

4.3 happens-before

Because instruction reordering makes the CPU's internal execution rules hard to follow, the JDK uses the happens-before concept to describe memory visibility between operations. In the JMM, if the result of one operation must be visible to another operation, there must be a happens-before relationship between the two. These happens-before guarantees hold without any extra synchronization measures.

Program order rule: each operation in a thread happens-before every subsequent operation in that thread.

Monitor lock rule: unlocking a lock happens-before every subsequent locking of that same lock.

Volatile variable rule: a write to a volatile field happens-before every subsequent read of that field.

Transitivity: if A happens-before B and B happens-before C, then A happens-before C.

start() rule: if thread A executes ThreadB.start() (starting thread B), then the ThreadB.start() call happens-before every operation in thread B.

join() rule: if thread A executes ThreadB.join() and it returns successfully, then every operation in thread B happens-before thread A's return from ThreadB.join().

Thread interrupt rule: a call to a thread's interrupt() method happens-before the interrupted thread's code detects the interrupt.

4.4 as-if-serial

as-if-serial means that no matter how much reordering occurs (by compilers and processors, to improve parallelism), the execution result of a program in a single-threaded environment cannot change and must stay correct. This semantics frees programmers in single-threaded code from having to worry about reordering and memory-visibility problems.

5 、 volatile

The volatile keyword guarantees the visibility of a variable but not the atomicity of compound operations on it, such as i++. This ties back to the JMM: Java threads interact through shared memory. Reads and writes of volatile variables follow these rules:

When a volatile variable is written, the JMM flushes the thread's local copy of the shared variable to main memory.

When a volatile variable is read, the JMM invalidates the thread's local memory for it, and the thread then reads the shared variable from main memory.
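
A classic visibility sketch: without volatile on the flag, the worker may never observe the update and can spin forever; with volatile, it stops promptly:

public class VolatileVisibilityDemo {
    private static volatile boolean running = true; // try removing volatile: may spin forever

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy loop; each read of running must see the main-memory value
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(200);
        running = false;   // volatile write: flushed to main memory, visible to the worker
        worker.join();
    }
}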

volatile uses the memory barriers mentioned above; there are currently four kinds:

StoreStore barrier: ensures that normal writes before a volatile write are not reordered with it.

StoreLoad barrier: ensures that a volatile write is not reordered with subsequent volatile reads and writes.

LoadLoad barrier: forbids reordering a volatile read with subsequent normal reads.

LoadStore barrier: forbids reordering a volatile read with subsequent normal writes.

The volatile principle: a write to a shared variable modified by volatile uses the CPU's Lock-prefixed instruction, which at the CPU level does two things:

It writes the current processor's cache line back to system memory.

That write-back invalidates the corresponding cache line held by other CPUs, so the next time they use the variable they must reload it from shared memory.

6. Singleton mode DCL + volatile

6.1 Standard singleton mode

A high-frequency interview topic, the singleton pattern: make the class constructor private, and expose only a static getInstance function for external callers. The general standard form of the singleton is DCL + volatile:

public class SingleDcl {
    // volatile guarantees visibility and forbids reordering of the assignment
    private volatile static SingleDcl singleDcl;

    private SingleDcl() {
    }

    public static SingleDcl getInstance() {
        if (singleDcl == null) { // first check, without the lock
            // Threads A and B may both reach here; A gets the class lock, B waits for it.
            synchronized (SingleDcl.class) {
                if (singleDcl == null) { // second check after acquiring the lock
                    // If thread A already initialized it, the volatile write is visible
                    // to thread B here, so B skips the new and does not re-create it.
                    singleDcl = new SingleDcl();
                }
            }
        }
        return singleDcl;
    }
}

6.2 Why is it decorated with Volatile

Without volatile, instruction reordering can occur at run time. Creating an object is roughly three steps: (1) allocate memory, (2) run the constructor, (3) assign the reference. Reordering can make the execution order 1 -> 3 -> 2, assigning the instance reference before the constructor has run. The problem: if thread 2 arrives before the constructor finishes and finds instance != null, it returns a half-built object to the caller. With volatile added, the underlying memory barrier forces execution in the order you expect.

The singleton pattern is almost mandatory in interviews and generally comes in these flavors:

Lazy style: instantiate the object only when it is needed. The correct implementation is Double Check + Lock + volatile, which solves both the concurrency-safety and the performance problems; when memory requirements are strict, use the lazy style.

Eager (hungry) style: the singleton is created when the class is loaded, and getting the singleton simply returns it. Use it when memory is not tight: it is simple and hard to get wrong, with no concurrency-safety or performance problems.

Enumeration: Effective Java also recommends enums, which are concise, have no thread-safety issues, and the Enum class prevents the singleton from being broken by reflection or deserialization.

7. Thread pool

7.1 five minutes to understand thread pool

Lao Wang is a front-line programmer grinding away in the capital. After a hard year he has saved some money and wants to deposit it on a bank card. Taking the money to the bank, he has the following experiences.

Lao Wang takes a number at the door and finds an open counter with nobody being served, so his business is handled directly.

Lao Wang takes a number and finds every counter busy, but the waiting seats still have space, so he sits down and waits his turn.

Lao Wang takes a number and finds all counters busy and the waiting seats full; the bank manager, seeing that Lao Wang is an honest man and wanting to take care of honest people, opens a new temporary window for him.

Lao Wang takes a number and finds the counters full, the waiting seats full, and the temporary windows all busy too. Now the manager offers several ways to deal with it:

Too many people: tell him directly that his business will not be handled.

Use the cold shoulder: neither handle his business nor send him away.

Kick out the person who has been waiting longest in the seats and let Lao Wang take that spot.

Tell Lao Wang: whoever sent you here, go back and have them handle it.

The above process maps almost one-to-one onto the JDK thread pool and its seven parameters:

The regular service windows correspond to the number of core threads: corePoolSize.

The bank's total number of windows corresponds to: maximumPoolSize.

How long an idle temporary window stays open before closing: keepAliveTime.

The time unit for keepAliveTime: TimeUnit.

The bank's waiting seats are the wait queue: BlockingQueue.

ThreadFactory: the JDK's thread factory, used to create thread objects; usually left unchanged.

The bank's solutions when it cannot serve you: RejectedExecutionHandler.

When the thread pool's task queue is full and the number of threads has reached maximumPoolSize, any newly arriving task is handled by the rejection policy. There are generally four rejection policies:

ThreadPoolExecutor.AbortPolicy: discards the task and throws a RejectedExecutionException exception.

ThreadPoolExecutor.CallerRunsPolicy: this task is rejected by the thread pool and is executed by the thread that calls the execute method.

ThreadPoolExecutor.DiscardOldestPolicy: discard the task at the head of the queue, then try to submit the new task again.

ThreadPoolExecutor.DiscardPolicy: discards the task and does not throw an exception.

7.2 correct creation method

Creating a thread pool with Executors may lead to OOM. The root cause is the BlockingQueue used inside the pool, which has two main implementations: ArrayBlockingQueue and LinkedBlockingQueue.

ArrayBlockingQueue is a bounded blocking queue backed by an array; its capacity must be set.

LinkedBlockingQueue is a blocking queue backed by a linked list whose capacity may be set; if it is not, the queue is effectively unbounded with a maximum length of Integer.MAX_VALUE, which easily leads to OOM in the thread pool.

The correct way to create a thread pool is to call the ThreadPoolExecutor constructor directly and build it yourself, specifying the BlockingQueue capacity at creation time.

private static ExecutorService executor = new ThreadPoolExecutor(10, 10, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(10));

7.3 Common thread pools

List several common thread pool creation methods.

1.Executors.newFixedThreadPool

A fixed-size pool: it has core threads, and the core count equals the maximum thread count; there are no non-core threads. Its wait queue is the unbounded LinkedBlockingQueue, so be careful about the queue filling up without bound.

2.Executors.newSingleThreadExecutor

Creates a pool with a single thread, guaranteeing first-in-first-out execution order.

3.Executors.newCachedThreadPool

Creates a cacheable pool: if the pool grows beyond what the workload needs, idle threads can be flexibly reclaimed; if no idle thread is available, a new one is created.

4.Executors.newScheduledThreadPool

Creates a fixed-size pool that supports scheduled and periodic task execution.

5.ThreadPoolExecutor

The most primitive and most common way to create a pool, with the 7 parameters and the 4 rejection policies all available.

7.4 thread pool core point

Thread pools are used constantly at work and are mandatory in interviews. The details mirror the earlier bank-queue example. Broadly, thread-pool questions come down to the following points:

Why use thread pools.

The role of the thread pool.

7 important parameters.

4 major refusal strategies.

Common thread pool task queues, how to understand bounded and unbounded.

Commonly used thread pool templates.

How to allocate the number of thread pools, IO-intensive or CPU-intensive.

Building a priority thread pool: make the task class comparable and use a priority queue as the task queue, as sketched below.
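
A sketch of that last point, assuming a task type implementing Comparable and a PriorityBlockingQueue as the work queue; names are illustrative:

import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityPoolDemo {
    // A runnable task ordered by priority (smaller number runs first).
    static class PriorityTask implements Runnable, Comparable<PriorityTask> {
        final int priority; final String name;
        PriorityTask(int priority, String name) { this.priority = priority; this.name = name; }
        @Override public void run() { System.out.println("running " + name); }
        @Override public int compareTo(PriorityTask o) { return Integer.compare(priority, o.priority); }
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new PriorityBlockingQueue<>());      // orders queued tasks by compareTo

        for (int i = 5; i >= 1; i--) {
            pool.execute(new PriorityTask(i, "task-" + i)); // queued tasks run lowest number first
        }
        pool.shutdown();
    }
}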

8 、 ThreadLocal

ThreadLocal can be understood simply as a thread-local variable. In contrast with synchronized, which trades time for safety, ThreadLocal trades space for time: it creates a copy per thread, and threads are isolated from one another because each accesses only its own internal copy. Weak-reference knowledge comes into play here:

If an object has only weak references, the GC collector reclaims the object's memory when it scans it, regardless of whether it has enough memory or not.

8.1 Core Point

Each Thread maintains a ThreadLocalMap, a dictionary whose keys are ThreadLocal objects. So when a ThreadLocal object is no longer used (no other references remain), how does each thread that stored a value under it clean up that entry in its ThreadLocalMap? The JDK's ThreadLocalMap does not extend java.util.Map; it is a purpose-built dictionary designed to clean up stale resources, and its internal storage entry, Entry, holds its ThreadLocal key through a weak reference.

Looking at the underlying code, stale Entry objects are reclaimed opportunistically whenever ThreadLocal.get() or ThreadLocal.set() is called.
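
A usage sketch; calling remove() when done is the reliable way to avoid leaks in pooled threads, rather than relying on the opportunistic cleanup above:

public class ThreadLocalDemo {
    // Each thread sees its own independent counter value.
    private static final ThreadLocal<Integer> COUNTER = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            try {
                COUNTER.set(COUNTER.get() + 1); // mutates only this thread's copy
                System.out.println(Thread.currentThread().getName() + " -> " + COUNTER.get());
            } finally {
                COUNTER.remove();               // clean up: essential in thread pools
            }
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}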

9 、 CAS

Compare And Swap: compare and swap, relying on processor instructions to guarantee the atomicity of the operation. It involves three operands:

V: the memory address of the variable

A: the expected old value

B: the new value to set

When the CAS instruction executes, V is updated to B only if the value at V equals A; otherwise no update is performed. CAS brings the ABA problem, excessive spin overhead, and the limitation that it can operate atomically on only one shared variable. How to address these has been written about before, so it is not repeated here.
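
A small sketch of CAS through AtomicInteger.compareAndSet, which delegates to the processor's atomic instruction:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(5);

        // Succeeds: the current value matches the expected value 5.
        System.out.println(v.compareAndSet(5, 6)); // true, v is now 6

        // Fails: expected 5, but the current value is 6; v is unchanged.
        System.out.println(v.compareAndSet(5, 7)); // false, v stays 6

        // Typical CAS retry loop (conceptually how getAndIncrement works).
        int old;
        do {
            old = v.get();
        } while (!v.compareAndSet(old, old + 1));
        System.out.println(v.get());               // 7
    }
}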

10 、 Synchronized

10.1 Synchronized explanation

synchronized is the JDK's built-in thread-safety keyword; it can modify three things: instance methods, static methods and code blocks. It guarantees mutual exclusion, visibility and ordering (ordering is guaranteed even though reordering is not forbidden; see 10.4).

The underlying implementation of synchronized is C++ code. Before JDK6 it was a pure heavyweight lock: every acquisition involved switching between user mode and kernel mode, which is very time-consuming. Before JDK6, Doug Lea wrote the JUC package, which conveniently implements locks in user space; inspired by it, the synchronized developers made extensive performance upgrades after JDK6.

10.2 Synchronized underlying layer

synchronized involves the object header. An object in memory consists of the object header, instance data and padding. Here is a Meituan interview question:

Question 1: how many bytes does new Object() occupy?

With compressed class pointers (the default): markword 8 bytes + classpointer 4 bytes + padding 4 bytes = 16 bytes.

Without classpointer compression: markword 8 bytes + classpointer 8 bytes = 16 bytes.

Question 2: given User(int id, String name), how much does User u = new User(1, "Li Si") occupy?

With compressed class pointers and compressed ordinary object pointers: markword 8 bytes + classpointer 4 bytes + instance data int 4 bytes + String reference 4 bytes + padding 4 bytes = 24 bytes.

10.3 Synchronized lock upgrade

Since JDK6, synchronized locks have four states: no lock, biased lock, lightweight lock and heavyweight lock. These states escalate as contention grows; a lock can be upgraded but not downgraded, although a biased lock can be reset to the no-lock state. The general upgrade flow runs from no lock through biased and lightweight to heavyweight as contention increases.

Lock comparison:

Biased lock: locking and unlocking need no extra cost, within nanoseconds of an ordinary method call; but if other threads contend, revoking the bias brings extra consumption. Suited to synchronized scenarios where essentially only one thread ever competes.

Lightweight lock: competing threads do not block but spin, which improves program response speed; but a thread that never gets the lock burns CPU spinning. Suited to response-speed-sensitive scenarios where locks are held briefly.

Heavyweight lock: thread contention involves no spinning and wastes no CPU on it; but threads block and response time is long. Suited to throughput-oriented scenarios with many competing threads and long lock hold times.

10.4 Synchronized cannot prohibit instruction rearrangement, but it can guarantee order.

Instruction reordering is an acceleration technique of the runtime interpreter and the CPU. It may cause statements to execute in an order different from the code, but the reordering must always obey as-if-serial.

The crudest way to avoid reordering is to disable processor optimization and instruction reordering, which is what the memory barriers in volatile do. synchronized takes another route: it is a keyword-level exclusive, reentrant lock. While one thread executes a synchronized block, it locks on entry and unlocks after execution.

Between that lock and unlock, no other thread can acquire the lock; only the holding thread can reacquire it. So the code inside executes effectively single-threaded, which satisfies as-if-serial semantics, and it is precisely the as-if-serial guarantee that makes single-threaded ordering hold naturally.

10.5 Spurious wakeups with wait

Definition of spurious wakeup:

When a condition is met, many threads are woken, but only some of the wakeups are useful; the others are incorrect.

For example, buying and selling goods: when there is no stock, all consumer threads are parked in wait. Suddenly a producer puts in one item and wakes all the suspended consumers; all of them may then continue executing the code after wait, causing erroneous calls.

Cause of spurious wakeups:

An if checks the condition only once, and after waking, execution simply continues below the if. A while, by contrast, re-checks the condition and does not continue past it until the condition holds.

Solution to spurious wakeups:

Use while instead of if around wait, as the sketch below shows.
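
A minimal producer/consumer sketch of the rule; the class name is illustrative, and the while re-checks the condition on every wakeup:

public class GuardedBuffer {
    private int items = 0;

    public synchronized void put() {
        items++;
        notifyAll();                 // wake all waiting consumers
    }

    public synchronized void take() throws InterruptedException {
        while (items == 0) {         // while, not if: re-check after every wakeup
            wait();                  // releases the monitor while waiting
        }
        items--;                     // safe: condition re-verified under the lock
    }
}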

10.6 The bottom layer of notify()

1. Why must wait and notify be called under the synchronized lock?

The bytecode javap generates for a synchronized block contains the monitorenter and monitorexit instructions; a thread executing monitorenter acquires the object's monitor. The wait method is implemented by calling the native wait(0), whose documentation says: The current thread must own this object's monitor.

2. Does a waiting thread wake up and run immediately after notify executes?

Calling notify/notifyAll does not actually release the object lock; it wakes the waiting threads and moves them into the object's lock pool, but none of the threads in the lock pool can run immediately. Only when the lock-holding thread finishes its code block and releases the lock can the others run.

public void test() {
    Object object = new Object();
    synchronized (object) {
        object.notifyAll();
        while (true) {
            // TODO: this infinite loop means the lock is never released
        }
    }
}

11 、 AQS

11.1 High-frequency interview question: threads printing alternately

The goal is for two threads to print alternately, letters before numbers. You can implement it with semaphores, the synchronized keyword, or Lock; here is a simple ReentrantLock implementation:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Main {
    private static Lock lock = new ReentrantLock();
    private static Condition c1 = lock.newCondition();
    private static Condition c2 = lock.newCondition();
    private static CountDownLatch count = new CountDownLatch(1);

    public static void main(String[] args) {
        String c = "ABCDEFGHI";
        char[] ca = c.toCharArray();
        String n = "123456789";
        char[] na = n.toCharArray();

        Thread t1 = new Thread(() -> {
            try {
                lock.lock();
                count.countDown();
                for (char caa : ca) {
                    c1.signal();
                    System.out.print(caa);
                    c2.await();
                }
                c1.signal();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        });

        Thread t2 = new Thread(() -> {
            try {
                count.await();
                lock.lock();
                for (char naa : na) {
                    c2.signal();
                    System.out.print(naa);
                    c1.await();
                }
                c2.signal();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        });

        t1.start();
        t2.start();
    }
}

11.2 AQS bottom layer

We used ReentrantLock and Condition in the previous question, but how are they implemented underneath? They are built on AQS's synchronization queue and wait queues!

11.2.1 AQS synchronization queue

Before studying AQS, you should have CAS + spinning + LockSupport + the template-method pattern down; that makes the source easy to follow, arguably simpler than synchronized, because it is plain Java code. My personal understanding is that AQS has the following characteristics:


In the AQS synchronization queue, a waitStatus of -1 indicates that a thread is sleeping.

The current node sets the previous node's waitStatus to -1. You can understand it like this: can you know that you are asleep? No; only others can look at you and see that you are asleep!

The thread that holds the lock is never in the queue.

The second node in the AQS queue holds the first thread that actually queued (the head is a placeholder).

For alternating tasks or single-threaded use, even with a Lock, the AQS queue is never involved.

Do not park threads unless it is a last resort: parking is very time-consuming! That is why the thread queued at the head spins several times trying to acquire the lock first.

This is not to say CAS necessarily beats synchronized. Under high concurrency with long hold times, synchronized is the better choice, because its underlying blocking does not consume CPU. If lock contention is mild, spinning stays cheap, so use CAS.

You should also avoid entering the CLH queue inside AQS when possible, because queueing may end in park, which is relatively time-consuming.

ReentrantLock underlying layer:

11.2.2 AQS waiting queue

When we call await and signal on a Condition, the underlying mechanics move nodes between the lock's synchronization queue and the condition's wait queue: await places the current thread's node on the wait queue and releases the lock, and signal transfers the node back to the synchronization queue, where it recontends for the lock.

12. Thread thinking

12.1 Prefer stack confinement for variables

Variables declared inside a method are stack-confined. Each call gets its own stack frame, a private space, so creating and using variables there is absolutely safe; just be careful not to return (or otherwise publish) such a variable!
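
A sketch of stack confinement; names are illustrative. The local builder is confined to the calling thread's stack frame, and only the immutable result escapes:

public class StackConfinementDemo {
    // Safe: sb exists only in this invocation's stack frame; no other thread can see it.
    public static String buildGreeting(String name) {
        StringBuilder sb = new StringBuilder(); // thread-confined, no synchronization needed
        sb.append("hello, ").append(name);
        return sb.toString();                   // return an immutable String, not sb itself
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println(buildGreeting(Thread.currentThread().getName()));
        Thread a = new Thread(task, "a");
        Thread b = new Thread(task, "b");
        a.start(); b.start();
        a.join(); b.join();
    }
}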

12.2 Prevent thread starvation

Starvation is when low-priority threads never get a chance to execute. In general: ensure resources are sufficient, allocate them fairly, and prevent the thread holding a lock from running too long.

12.3 Development steps

Do not use multithreading just for the sake of it; introducing multithreading brings extra overhead. Application performance is measured by service time, latency, throughput and scalability. When building an application, you can generally follow these steps:

First ensure the program is correct and robust; only when it genuinely cannot meet the performance requirements should you think about how to speed it up.

Always take measurement as the benchmark.

There is always a serial part of a program.

Armed with a sharp weapon, Amdahl's law: S = 1 / ((1 - a) + a / n)

In Amdahl's law, a is the proportion of the computation that can be parallelized, and n is the number of parallel processing nodes.

When 1 - a = 0 (no serial part, fully parallel), the maximum speedup is S = n.

When a = 0 (fully serial, no parallel part), the minimum speedup is S = 1.

As n approaches infinity, the limiting speedup is S -> 1 / (1 - a), which is the upper bound on speedup. For example, if serial code makes up 25% of the total, the overall speedup from parallel processing cannot exceed 4.

12.4 factors affecting performance

Narrow the lock scope: lock the code block, not the whole method if you can avoid it.

Reduce lock granularity with lock striping, as in the implementation of ConcurrentHashMap.

Use read-write locks when reads far outnumber writes; this can improve performance as much as tenfold.

Replace heavyweight locks with CAS operations.

Prefer the common concurrent containers that ship with the JDK; their internals are thoroughly optimized.

At this point, the study of "how to master Java concurrency" is over. I hope it helps resolve your doubts. Pairing theory with practice is the best way to learn, so go try it out! If you want to keep learning more related knowledge, please keep following the site; the editor will keep working hard to bring you more practical articles!
