
The Principles and Applications of Java Concurrent Programming


This article introduces the principles and applications of Java concurrent programming. In real-world work, many developers run into the dilemmas discussed here, so let the editor walk you through how to handle these situations. I hope you read carefully and come away with something!

1. Java Concurrent Programming Fundamentals

1.1 Threads

The smallest unit of scheduling in a modern operating system is the thread, also known as a lightweight process. Multiple threads can be created within a single process; each has its own program counter, stack, and local variables, and all of them can access shared memory variables. The processor switches among these threads at high speed, giving users the impression that the threads are executing simultaneously.

The main reasons for using multiple threads are: more processor cores, faster response times, and a better programming model (Java provides a well-designed, mature, and consistent programming model for multithreading, letting developers focus more on solving the problem at hand). However, multithreaded programming still has to deal with issues such as thread safety, thread liveness, context switching, and reliability.

The time slice allocated to a thread determines how much processor time the thread gets, and thread priority is the thread attribute that indicates whether a thread should be given more or fewer processor resources. However, program correctness must never depend on thread priority, because the operating system is free to ignore the priority set on a Java thread.

The main thread states are: NEW (created but not started); RUNNABLE (subdivided into READY, meaning eligible to be chosen by the thread scheduler, and RUNNING, meaning currently executing); BLOCKED (blocked on blocking I/O or on an exclusive resource such as a lock, consuming no processor time); WAITING (entered via the wait/join/park methods, resumed via notify/notifyAll/unpark); TIMED_WAITING (entered via the timed variants of wait/join/sleep, similar to WAITING but with a finite wait); and TERMINATED (the end state, reached either by a normal return or by early termination through a thrown exception).

Figure: States of a Java thread

How do you monitor threads? The main approach is to obtain and inspect the program's thread dump (Thread Dump), as follows:

Figure: How to obtain a thread dump

A daemon thread is a supporting thread, used mainly for background scheduling and supporting work within a program. When no non-daemon threads remain in a Java virtual machine, the virtual machine exits. When writing daemon threads, you cannot rely on the contents of a finally block to guarantee that shutdown or resource-cleanup logic runs.

1.2 Starting and Terminating Threads

An interrupt can be understood as an identity attribute of a thread. It does not forcibly terminate the thread; it is a cooperative mechanism that sends a cancellation signal, leaving the thread itself to decide how and when to exit. A program module that runs as a thread should encapsulate its cancel/close operations and expose dedicated cancel/close methods to callers (alternatively, a boolean flag can control whether to stop the task and terminate the thread); external callers should invoke those methods rather than calling interrupt directly, as sketched below.
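A minimal sketch of this cooperative cancellation pattern (the Worker class and its field names are illustrative, not from the original):

    public class Worker implements Runnable {
        private volatile boolean on = true; // cooperative stop flag
        private Thread runner;

        public void start() {
            runner = new Thread(this, "worker");
            runner.start();
        }

        // Encapsulated cancel method exposed to callers instead of raw interrupt().
        public void cancel() {
            on = false;
            if (runner != null) {
                runner.interrupt(); // wakes the thread if it is blocked in sleep/wait
            }
        }

        @Override
        public void run() {
            while (on && !Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(100); // stands in for one unit of work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // restore status; loop condition exits
                }
            }
            // shutdown/cleanup logic runs here, after the loop exits cooperatively
        }
    }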

The thread operations suspend (pause), resume, and stop have been deprecated.

1.3 Communication between threads

Java's built-in wait/notify mechanism involves the following methods:

Figure: Methods related to wait/notify

The wait/notify mechanism works like this: one thread A calls object O's wait method and enters the waiting state, while another thread B calls object O's notify or notifyAll method; thread A receives the notification, returns from O's wait method, and carries on with subsequent operations. The two threads interact through object O, and the pairing of wait with notify/notifyAll on that object acts like a switch signal completing the interaction between the waiting side and the notifying side.
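A minimal runnable sketch of this interaction (the class, lock object, and flag names are illustrative):

    public class WaitNotifyDemo {
        private static final Object lock = new Object();
        private static boolean ready = false; // the condition being waited on

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> {
                synchronized (lock) {
                    while (!ready) { // always re-check the condition in a loop
                        try {
                            lock.wait(); // releases the lock on O and waits
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                    System.out.println("A: notified, condition met");
                }
            }, "A");
            a.start();

            Thread.sleep(100); // give A time to start waiting
            synchronized (lock) { // thread B's side: change state, then notify
                ready = true;
                lock.notifyAll();
            }
        }
    }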

When thread A executes the statement threadB.join(), it means the current thread A waits until threadB terminates before returning from threadB.join(). A timeout variant is also supported: if threadB does not terminate within the given timeout, the timed method returns anyway. join is built on the wait/notify mechanism: when a thread terminates, it invokes notifyAll on its own thread object, notifying all threads waiting on that object.
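The logic inside Thread.join can be pictured with this simplified sketch (mirroring the JDK pattern; the real code also handles the timeout case):

    // inside java.lang.Thread (simplified)
    public final synchronized void join() throws InterruptedException {
        while (isAlive()) {
            wait(); // wait on this thread object; the terminating thread notifies waiters
        }
    }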

1.4 Thread Application Examples

1.4.1 The Wait-Timeout Pattern

In practice, developers often encounter this kind of call scenario: when a method is invoked, it waits for a certain period; if it can obtain the result within the given time it returns immediately, otherwise it times out and returns a default result.

Define the following variables: the remaining wait duration remaining = T, and the timeout moment future = now + T. The pseudocode is as follows:

    private Object result; // assumed to be filled in by another thread via a matching setter

    public synchronized Object get(long mills) throws InterruptedException {
        long future = System.currentTimeMillis() + mills;
        long remaining = mills;
        // keep waiting while the result is not ready and wait time remains
        while (result == null && remaining > 0) {
            wait(remaining);
            remaining = future - System.currentTimeMillis();
        }
        return result;
    }

2. The Java Memory Model (JMM)

2.1 Java Memory Model Basics

Concurrent programming has to solve two key problems: how threads communicate and how threads synchronize. Communication refers to the mechanism by which threads exchange information, and there are generally two kinds: shared memory and message passing. In the shared-memory concurrency model, threads share the program's common state and communicate implicitly by reading and writing that state in memory. In the message-passing concurrency model, threads have no common state and must communicate explicitly by sending messages.

Synchronization refers to the mechanism a program uses to control the relative order of operations across threads. In the shared-memory model, synchronization is explicit: the programmer must explicitly specify that a method or piece of code is to be mutually exclusive between threads. In the message-passing model, synchronization is implicit, because a message must be sent before it can be received.

Java's concurrency uses the shared-memory model: communication between threads is always implicit, and the entire communication process is completely transparent to the programmer.

All instance fields, static fields, and array elements in Java (the shared variables) are stored in heap memory and are shared among threads; local variables, method parameters, and exception-handler parameters are not shared between threads, so they have no memory-visibility problems and are unaffected by the memory model. Communication between Java threads is governed by the Java Memory Model (JMM), which determines when one thread's writes to shared variables become visible to another thread. As shown in the figure below, for threads A and B to communicate, the following two steps must occur:

Thread A flushes the updated shared variables from its local memory A to main memory.

Thread B reads from main memory the shared variables that thread A has already updated.

Taken together, these two steps amount to thread A sending a message to thread B, with the communication necessarily passing through main memory. JMM provides programs with memory-visibility guarantees by controlling the interaction between main memory and each thread's local memory.

Figure: Abstract view of the Java memory model

To improve program performance, compilers and processors often reorder instructions. Reordering can cause memory-visibility problems in multithreaded programs. JMM forbids particular kinds of compiler and processor reordering by inserting specific types of memory barriers.

Figure: The instruction sequence from source code to final execution

Figure: Memory barrier types

2.2 Reordering

In JMM, if the result of one operation must be visible to another operation, there must be a happens-before relationship between the two, and the happens-before rules correspond to one or more compiler and processor reordering rules. A happens-before relationship between two operations does not mean the first must execute before the second; it only guarantees that the first operation's result is visible to the second, and that the first is ordered before the second. Common rules include: the monitor lock rule (an unlock of a lock happens-before every subsequent lock of that same lock); the volatile variable rule (a write to a volatile field happens-before every subsequent read of that field); and the transitivity of happens-before.

"A happens-before B" does not require JMM to actually execute A before B; JMM only requires that A's result be visible to B and that A be ordered before B. Where A's result does not in fact need to be visible to B, and executing A and B reordered produces the same result as executing them in happens-before order, JMM considers the reordering legal and allows it. The aim: provide as much parallelism as possible without changing the program's execution results.

As-if-serial semantics means that no matter how much reordering is done (by compilers and processors, to gain parallelism), the execution result of a single-threaded program must not change. Compilers and processors will not reorder operations that carry data dependences (read-after-write, write-after-write, write-after-read), because such reordering would change the result; operations without data dependences may be reordered, as in the sketch below.
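A small illustration of which operations may be reordered under as-if-serial semantics (a textbook-style example, not from the original):

    public class AsIfSerial {
        public static void main(String[] args) {
            double pi = 3.14;         // A
            double r = 1.0;           // B: no dependence on A, so A and B may be reordered
            double area = pi * r * r; // C: reads A and B, so C can never be moved before them
            System.out.println(area); // the single-threaded result is unchanged either way
        }
    }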

In a single-threaded program, reordering operations that have a control dependence does not change the execution result; in a multithreaded program, reordering operations with control dependences may change the program's results.

The sequentially consistent memory model stipulates that all operations in a thread execute in program order, and that, whether or not the program is synchronized, all threads see a single order of operation execution; in this model every operation executes atomically and is immediately visible to all threads.

For a worked reordering case, see: Taro source code - [Java concurrency] - reordering in the Java memory model

For more on JMM, see: Hollis - "If someone asks you what the Java memory model is, send them this article", and Hollis - "JVM memory structure vs. Java memory model vs. Java object model".

3. Locks in Java

Taro source code - a summary of the various kinds of locks in Java

Ingenuity Zero degree - interviewer asks: what locks are there in Java? I'm floored.

3.1 Lock interface

The Lock interface provides locking functionality similar to synchronized, while supporting synchronization features that synchronized keyword locks lack, such as non-blocking attempts to acquire the lock, interruptible lock acquisition, and timed lock acquisition, along with explicit control over acquiring and releasing the lock.

Figure: Main features of the Lock interface

Figure: The Lock API

3.2 The Queue Synchronizer (AQS)

The queue synchronizer AQS is the basic framework used to build locks and other synchronization components. It uses an int member variable to represent synchronization state and a built-in FIFO queue to manage the queuing of threads acquiring the resource. The synchronizer is the key to implementing a lock (or any synchronization component): the lock is user-facing, defining the interface between user and lock and hiding the implementation details, while the synchronizer faces the lock implementer, simplifying lock implementation and shielding low-level concerns such as synchronization-state management, thread queuing, waiting, and wake-up.

For the implementation details, see: plain-language Java concurrency interview questions - talk about your understanding of AQS [Huperzhun architecture notes]

3.3 Reentrant Locks

Reentrancy is the ability of a thread to repeatedly lock a resource it already holds. The synchronized keyword implicitly supports reentrancy: in a recursive method modified by synchronized, for example, the executing thread keeps reacquiring the lock it already obtained as execution recurses. As its name implies, ReentrantLock also supports reentrancy: any thread that has acquired the lock can acquire it again without blocking. Two issues have to be addressed (a sketch follows the list):

Reacquisition: when a thread acquires the lock again, the lock must identify whether the acquiring thread is the thread currently holding it; if so, the acquisition succeeds again.

Final release: if a thread acquires the lock n times, other threads can acquire it only after the nth release. The lock therefore keeps an acquisition count, recording how many times it has been acquired; each release decrements the count, and when the count reaches 0 the lock has been fully released.
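A minimal sketch of reentrant acquisition and the matching release count (illustrative class and variable names):

    import java.util.concurrent.locks.ReentrantLock;

    public class ReentrantDemo {
        private static final ReentrantLock lock = new ReentrantLock();

        public static void main(String[] args) {
            lock.lock();                 // hold count = 1
            try {
                lock.lock();             // same thread reacquires: count = 2, no blocking
                try {
                    System.out.println("hold count: " + lock.getHoldCount()); // prints 2
                } finally {
                    lock.unlock();       // count = 1, lock still held
                }
            } finally {
                lock.unlock();           // count = 0, other threads may now acquire
            }
        }
    }

The fair variant discussed next is selected with the ReentrantLock(boolean fair) constructor, e.g. new ReentrantLock(true).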

The difference between fair and unfair (the default) acquisition is whether the lock must be granted in the order it was requested, i.e. whether threads that asked earlier acquire it first. Unfair locking can cause "starvation"; the cost of fair locking is a large number of thread switches to honor the FIFO principle, so fairness must be traded off against throughput.

3.4 Read-Write Locks

Separating read locks from write locks greatly improves concurrency compared with an ordinary exclusive lock, and is especially suitable for scenarios with many reads and few writes (see, e.g., Nuggets - read-write lock optimization in a microservice registry [Huperzhun architecture notes]).

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class Cache {
        // a thread-unsafe map made thread-safe via a read-write lock
        static Map<String, Object> map = new HashMap<>();
        static ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        static Lock r = rwl.readLock();
        static Lock w = rwl.writeLock();

        public static final Object get(String key) {
            r.lock(); // shared read lock: concurrent readers do not block each other
            try {
                return map.get(key);
            } finally {
                r.unlock();
            }
        }

        public static final Object put(String key, Object value) {
            w.lock(); // exclusive write lock: blocks all readers and writers
            try {
                return map.put(key, value);
            } finally {
                w.unlock();
            }
        }
    }

For the internals of read-write locks, see: Java concurrent programming lock mechanisms - ReentrantReadWriteLock (read-write lock)

3.5 The LockSupport Tool

LockSupport is a basic tool for building synchronization components; it provides the most fundamental thread-blocking and wake-up capabilities.

Figure: Blocking and wake-up methods provided by LockSupport
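A minimal park/unpark sketch (illustrative):

    import java.util.concurrent.locks.LockSupport;

    public class ParkDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> {
                System.out.println("parking...");
                LockSupport.park();    // blocks until a permit is available
                System.out.println("unparked");
            });
            t.start();
            Thread.sleep(100);
            LockSupport.unpark(t);     // grants t's permit, waking it
            t.join();
        }
    }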

3.6 The Condition Interface

Condition is a condition queue in the broad sense, providing threads with a more flexible wait/notify pattern. A thread suspends after calling the await method and is not woken until some condition it is waiting on becomes true. Condition must be used together with a lock, because access to the shared state variables happens in a multithreaded environment; a Condition instance must be bound to a Lock, so a Condition implementation is generally an inner class of its Lock. The most typical applications are the producer/consumer pattern, ArrayBlockingQueue, and so on.
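A minimal bounded-buffer sketch in the producer/consumer style mentioned above (illustrative, loosely following the classic ArrayBlockingQueue pattern):

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class BoundedBuffer<T> {
        private final Object[] items = new Object[16];
        private int putIdx, takeIdx, count;
        private final Lock lock = new ReentrantLock();
        private final Condition notFull = lock.newCondition();
        private final Condition notEmpty = lock.newCondition();

        public void put(T x) throws InterruptedException {
            lock.lock();
            try {
                while (count == items.length) {
                    notFull.await();           // suspend until there is space
                }
                items[putIdx] = x;
                putIdx = (putIdx + 1) % items.length;
                count++;
                notEmpty.signal();             // wake one waiting consumer
            } finally {
                lock.unlock();
            }
        }

        @SuppressWarnings("unchecked")
        public T take() throws InterruptedException {
            lock.lock();
            try {
                while (count == 0) {
                    notEmpty.await();          // suspend until there is an element
                }
                T x = (T) items[takeIdx];
                items[takeIdx] = null;
                takeIdx = (takeIdx + 1) % items.length;
                count--;
                notFull.signal();              // wake one waiting producer
                return x;
            } finally {
                lock.unlock();
            }
        }
    }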

For the implementation details, see: Ingenuity Zero degree - [Java concurrency] - J.U.C Condition

3.7 synchronized

For synchronized methods, the JVM uses the ACC_SYNCHRONIZED flag to implement synchronization; for synchronized blocks, it uses the monitorenter and monitorexit instructions. synchronized guarantees atomicity, visibility, and ordering.

synchronized was originally a heavyweight lock. JDK 1.6 introduced many optimizations to the lock implementation, such as spin locks, adaptive spinning, lock elimination, lock coarsening, biased locking, and lightweight locks, to reduce the cost of lock operations.

For in-depth analysis, please refer to:

In-depth understanding of multithreading (1) - the implementation principle of Synchronized

In-depth understanding of multithreading (2) - the Java object model

In-depth understanding of multithreading (3) - the Java object header

In-depth understanding of multithreading (4) - the implementation principle of Monitor

In-depth understanding of multithreading (5) - lock optimization techniques of the Java virtual machine

InfoQ - Talking concurrency (2) - Synchronized in Java SE 1.6 (lock upgrade optimization)

If anyone asks you what synchronized is, send him this article.

3.8 volatile

volatile is a lightweight synchronized that ensures the visibility of shared variables in multiprocessor development. Visibility means that when one thread modifies a shared variable, another thread can read the modified value; JMM ensures that all threads see a consistent value for a volatile variable, as in the sketch below.
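A minimal visibility sketch (illustrative; without volatile on the flag, the reader thread might never observe the update):

    public class VolatileFlag {
        private static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> {
                while (running) {
                    // spin until another thread clears the flag
                }
                System.out.println("stopped");
            });
            t.start();
            Thread.sleep(100);
            running = false; // this write becomes visible to the reader thread
        }
    }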

For in-depth analysis, please refer to:

In-depth understanding of the volatile keyword in Java

If anyone asks you what volatile is, send him this article as well.

4. Java Concurrency Containers and Frameworks

4.1 ConcurrentHashMap

Classic Java interview question: why do ConcurrentHashMap read operations not need locks?

Pure smile: a must-ask interview question: how ConcurrentHashMap implements thread safety

Hollis: a detailed explanation of ConcurrentHashMap and its JDK 8 optimizations

Taro source code: not only JDK 7's HashMap, JDK 8's ConcurrentHashMap can also drive the CPU to 100%! Causes and solutions

Programmer DD: interpreting ConcurrentHashMap, built for concurrency in Java 8

CSDN - why did ConcurrentHashMap (JDK 1.8) abandon Segment?

4.2 ConcurrentLinkedQueue

ConcurrentLinkedQueue is an unbounded thread-safe queue based on linked nodes; it orders elements first-in-first-out and is implemented with the CAS algorithm. How does it achieve thread safety? The enqueue and dequeue operations manipulate two volatile variables, head and tail, so keeping the queue thread-safe only requires guaranteeing visibility and atomicity for operations on these two nodes; volatile itself guarantees visibility, so only the atomicity of multithreaded updates to the two variables needs attention. For the offer operation, an element is appended after tail by calling the casNext method on the tail node; this is a CAS operation in which only one thread succeeds, and each losing thread loops, re-reads tail, and retries casNext. poll works analogously.

For in-depth analysis, please refer to:

Concurrent programming network - concurrent queues - a study of the principles of ConcurrentLinkedQueue, an unbounded non-blocking queue

CSDN - a detailed explanation of ConcurrentLinkedQueue in Java concurrent programming

4.3 Blocking Queues in Java

A blocking queue is a queue that supports blocking insertion and removal of elements, and it is often used in producer/consumer scenarios.

The JDK provides seven blocking queues (a producer/consumer sketch follows the list):

ArrayBlockingQueue: array-based, bounded, FIFO, unfair by default
LinkedBlockingQueue: linked-list-based, bounded, default and maximum length Integer.MAX_VALUE
PriorityBlockingQueue: supports priorities (an ordering rule), unbounded
DelayQueue: supports delayed elements, unbounded
SynchronousQueue: stores no elements, hand-off scenarios, high throughput
LinkedTransferQueue: linked-list-based, unbounded, preemptive transfer mode; a superset of ConcurrentLinkedQueue, SynchronousQueue (in fair mode), unbounded LinkedBlockingQueue, and so on
LinkedBlockingDeque: linked-list-based, double-ended, optional capacity
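A minimal producer/consumer sketch using the bounded ArrayBlockingQueue (illustrative sizes and counts):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProducerConsumer {
        public static void main(String[] args) {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4); // bounded, FIFO

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) {
                        queue.put(i);       // blocks while the queue is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) {
                        System.out.println("took " + queue.take()); // blocks while empty
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
        }
    }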

For in-depth use and analysis, please refer to:

Programmer DD - grinding through Java concurrency: J.U.C blocking queues: LinkedTransferQueue

Programmer DD - grinding through Java concurrency: J.U.C blocking queues: LinkedBlockingDeque

InfoQ - Talking concurrency (7) - blocking queues in Java

Blocking queues are implemented with the notification pattern; ArrayBlockingQueue, for example, is built on notEmpty and notFull Condition objects. When a thread is blocked by a blocking queue, it enters the WAITING (parking) state.

4.4 The Fork/Join Framework

Fork/Join is a framework for splitting tasks and merging the results of subtasks. The main steps: split a task into sufficiently small subtasks, execute them, and merge the results. Subtasks are placed in double-ended queues (enabling work stealing), and worker threads take tasks from these deques to execute; subtask results are placed in a queue, from which a thread takes them and merges the data, as sketched below.
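A minimal divide-and-merge sketch using RecursiveTask (the task class, threshold, and data are illustrative):

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 1_000; // the "small enough" cutoff
        private final long[] data;
        private final int lo, hi;

        public SumTask(long[] data, int lo, int hi) {
            this.data = data;
            this.lo = lo;
            this.hi = hi;
        }

        @Override
        protected Long compute() {
            if (hi - lo <= THRESHOLD) {      // small enough: compute directly
                long sum = 0;
                for (int i = lo; i < hi; i++) {
                    sum += data[i];
                }
                return sum;
            }
            int mid = (lo + hi) / 2;         // otherwise split in two
            SumTask left = new SumTask(data, lo, mid);
            SumTask right = new SumTask(data, mid, hi);
            left.fork();                     // push one half onto the work deque
            return right.compute() + left.join(); // compute the other half, then merge
        }

        public static void main(String[] args) {
            long[] data = new long[1_000_000];
            Arrays.fill(data, 1L);
            long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
            System.out.println(sum); // 1000000
        }
    }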

In-depth analysis and use reference:

InfoQ - Talking concurrency (8) - an introduction to the Fork/Join framework

5. Java Atomic Operation Classes

These include: the atomic basic types AtomicBoolean, AtomicInteger, AtomicLong; the atomic array types AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray; the atomic reference types AtomicReference, AtomicReferenceFieldUpdater, AtomicMarkableReference; and the atomic field-update types AtomicIntegerFieldUpdater, AtomicLongFieldUpdater, AtomicStampedReference (which solves the ABA problem of CAS).

Atomic operation classes provide a lighter-weight atomic scheme than direct locking. They adopt the optimistic-lock idea, i.e. conflict detection plus data update, and are built on CAS (optimistic locking is an idea; CAS is one implementation of that idea). The CAS approach is very simple: compare and swap, with three parameters: the current memory value V, the expected old value A, and the new value B. If and only if the expected value A equals the memory value V, the memory value is set to B and true is returned; otherwise nothing is done and false is returned. At the lowest level, Unsafe is the core class behind CAS: Java cannot access the underlying operating system directly and must go through native methods, but the JVM leaves a back door, Unsafe, which provides hardware-level atomic operations. A retry-loop sketch follows.
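A minimal CAS retry-loop sketch using AtomicInteger (AtomicInteger already provides incrementAndGet; the explicit loop is written out only to make the compare-and-swap pattern visible):

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasCounter {
        private final AtomicInteger value = new AtomicInteger(0);

        public int increment() {
            for (;;) {
                int current = value.get();     // read the expected value A
                int next = current + 1;        // compute the new value B
                if (value.compareAndSet(current, next)) { // succeeds only if V == A
                    return next;
                }
                // another thread won the race: spin and retry
            }
        }

        public static void main(String[] args) {
            CasCounter c = new CasCounter();
            System.out.println(c.increment()); // 1
        }
    }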

Compared with locks, CAS does not trap into kernel mode, which brings some performance gains. But it introduces spinning: when contention is heavy, the number of spins grows, the loops run too long, and CPU consumption becomes very high. In other words, CAS plus spinning suits low-concurrency scenarios where contention is mild. Java 8 therefore introduced four new counter types: LongAdder, LongAccumulator, DoubleAdder, and DoubleAccumulator. The main idea: when contention is light, all threads modify the same variable (the base) via CAS; when contention is heavy, each thread is hashed to its own Cell to modify (a multi-segment-lock idea). Atomicity is still guaranteed by CAS optimistic locking, spinning ensures the current modification eventually succeeds, and concurrency improves because the lock granularity shrinks. A small usage sketch follows.
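A minimal LongAdder usage sketch (illustrative):

    import java.util.concurrent.atomic.LongAdder;

    public class AdderDemo {
        public static void main(String[] args) {
            LongAdder counter = new LongAdder();
            counter.increment();  // under contention, updates are striped across cells
            counter.add(5);
            System.out.println(counter.sum()); // 6: sums the base plus all cells
        }
    }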

Meanwhile, CAS can only guarantee atomic operations on a single shared variable; for multiple shared variables you can only use locks, unless you find a way to fold several variables into one, in which case CAS works well too. For example, the read-write lock keeps the read state in the high bits of its state variable.

CAS compares only the value. In ordinary scenarios this causes no logic errors (an account balance, say), but in special cases, even though the value is the same, this "A" may no longer be the original "A" (a stack under concurrency, for example). So CAS must not only compare the value but also ensure it is the original datum being modified. One approach is to upgrade the value comparison to a version-number comparison: one datum, one version, with the version changing on every modification, so that even an identical value must not be modified successfully. (See: the architect's way - consistency optimization for concurrent debits; the ABA problem under CAS is a topic of its own.) Java provides AtomicStampedReference to solve this: it wraps an [E, Integer] pair to stamp the object with a version, thereby avoiding the ABA problem, as in the sketch below.
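A minimal AtomicStampedReference sketch (illustrative values):

    import java.util.concurrent.atomic.AtomicStampedReference;

    public class StampedDemo {
        public static void main(String[] args) {
            // pair the value with a version stamp so that A -> B -> A is detectable
            AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

            int[] stampHolder = new int[1];
            Integer current = ref.get(stampHolder); // reads value and stamp together
            int stamp = stampHolder[0];

            // succeeds only if BOTH the reference and the stamp are unchanged
            boolean ok = ref.compareAndSet(current, 101, stamp, stamp + 1);
            System.out.println(ok + ", value = " + ref.getReference()); // true, value = 101
        }
    }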

6. Concurrent Utility Classes in Java

See: Java concurrency containers, frameworks, and utility classes

This concludes the introduction to the principles and applications of Java concurrent programming. Thank you for reading. If you want to learn more, keep following this site; the editor will keep producing high-quality, practical articles for you!
