What is the core knowledge of Java multithreading?

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article explains the core knowledge of Java multithreading. The approach is simple, fast, and practical, so interested readers may wish to follow along as the editor takes you through it.

Compared with other Java topics, multithreading has a real learning threshold and is harder to understand. In everyday work, improper use leads to problems such as corrupted data, poor efficiency (sometimes worse than running single-threaded), or deadlocked programs, so mastering and understanding multithreading is very important.

This article explains threads from shallow to deep, starting from the basic concepts and working up to the concurrency model.

Basic concepts

In this section I will take you through some of the basic concepts of multithreading.

Concurrency and parallelism

Parallelism means that two threads are doing work at the same instant, on different cores.

Concurrency means that tasks take turns: do a bit of one thing, then a bit of another, under the control of a scheduler. A single-core CPU can run concurrent tasks, but microscopically there is no parallelism.


Critical section

A critical section protects a common resource or shared data that multiple threads may use. Only one thread can be inside it at a time; once the critical-section resource is occupied, other threads that want it must wait.
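A minimal sketch of a critical section in Java, using a synchronized block; the class and variable names are illustrative:

```java
// Only one thread at a time may enter the synchronized block,
// so the shared counter is updated safely.
public class CriticalSectionDemo {
    private static int counter = 0;                 // shared data (the critical resource)
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                synchronized (lock) {               // entering the critical section
                    counter++;                      // only one thread here at a time
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter);                // always 20000 with the lock held
    }
}
```

Without the synchronized block, `counter++` (a read-modify-write) could interleave between the two threads and the final value would usually be less than 20000.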


Livelock

Suppose there are two threads, 1 and 2, and both need resources A and B to do their work. Suppose thread 1 holds the lock on A and thread 2 holds the lock on B. Since each thread needs both resources, thread 1 releases its lock on A and thread 2 releases its lock on B to avoid deadlock. Now both resources are free, both threads grab the locks again at the same time, the situation above repeats, and neither thread makes progress: that is a livelock.

A simple analogy: two people meet in a corridor, one going in and one going out, blocking each other's way. Both step aside in the same direction at the same time, repeat this back and forth, and still block the way.

If an online application runs into a livelock problem, congratulations on winning the lottery: this kind of problem is quite difficult to troubleshoot.

Starvation

Starvation means that one or more threads cannot obtain the resources they need, for a variety of reasons, and as a result can never execute.

Life cycle of a thread

During its life cycle, a thread passes through several states: created, runnable, and non-runnable.

Created state

When a new thread object is created with the new operator, the thread is in the created state.

A thread in the created state is just an empty thread object; the system has not yet allocated resources to it.

Runnable state

Calling the thread's start() method allocates the necessary system resources, schedules the thread to run, and arranges for the thread body, the run() method, to be called, which puts the thread in the runnable (Runnable) state.

This state is not Running, because the thread may not actually be running.

Non-runnable state

A running thread enters a non-runnable state when one of the following events occurs:

The sleep () method is called

The thread calls the wait () method to wait for a specific condition to be met

The thread blocks on input/output

Return to runnable state

A thread in the sleeping state returns to the runnable state after the specified sleep time has elapsed.

If a thread is waiting for a condition, another object must notify the waiting thread that the condition has changed, by calling notify() or notifyAll().

If the thread is blocked on input/output, it returns to the runnable state when the I/O completes.
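The states above can be observed directly through Thread.getState(); a minimal sketch (class name and timings are illustrative, and the middle state assumes the scheduler has let the thread reach its sleep() call):

```java
// Observing a thread's lifecycle states with Thread.getState().
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200);          // non-runnable while sleeping
            } catch (InterruptedException ignored) { }
        });
        System.out.println(t.getState());  // NEW: created, no resources allocated yet
        t.start();
        Thread.sleep(50);                  // give it time to enter sleep()
        System.out.println(t.getState());  // typically TIMED_WAITING while it sleeps
        t.join();                          // wait for the thread to finish
        System.out.println(t.getState());  // TERMINATED
    }
}
```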

Thread priority and setting

Thread priority exists to help schedule threads in a multithreaded environment: threads with higher priority are executed first. Setting a thread's priority follows these principles:

When a thread is created, the child thread inherits the priority of its parent thread

After the thread is created, the priority can be changed by calling the setPriority () method

The priority of the thread is a positive integer between 1 and 10.
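A minimal sketch of these rules, using getPriority() and setPriority(); the class name is illustrative:

```java
// A new thread inherits its creator's priority (the main thread's default
// is 5, Thread.NORM_PRIORITY); setPriority() changes it within 1..10.
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> { });
        System.out.println(t.getPriority());   // inherited from main: 5
        t.setPriority(Thread.MAX_PRIORITY);    // MAX_PRIORITY is 10
        System.out.println(t.getPriority());   // 10
    }
}
```

Note that priority is only a hint to the scheduler; the operating system is free to map the ten Java levels onto fewer native levels.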

Scheduling strategy of threads

The thread scheduler selects the highest-priority runnable thread to run. However, a running thread gives up the CPU if one of the following occurs:

The yield () method is called in the thread body, giving up the right to occupy CPU.

The sleep () method is called in the thread body to put the thread to sleep.

The thread is blocked due to an I/O operation.

Another higher priority thread appears.

In systems that support time slices, the thread runs out of time slices.

Ways to create a single thread

Creating a single thread is relatively simple. Generally there are only two ways: inheriting the Thread class and implementing the Runnable interface. A few points that beginners need to pay attention to:

Whether you inherit the Thread class or implement the Runnable interface, the business logic goes in the run() method, and the thread is launched by calling the start() method

Starting a new thread does not affect the code execution order of the main thread and does not block the execution of the main thread.

The code execution order of the new thread and the main thread cannot be guaranteed.

For a multithreaded program on a single core, microscopically only one thread is working at any instant; multithreading exists to keep the CPU busy.

Looking at the source code of Thread, you can see that the Thread class itself implements the Runnable interface, so the two approaches are essentially the same

PS: this kind of code structure is also worth borrowing in daily work: as a service provider, keep the core business in one place and offer multiple entry points for upper-layer callers.
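To make the two creation styles concrete, a minimal sketch; the class name and message strings are illustrative, and the join() calls exist only to make the output order deterministic:

```java
// The two basic ways to create a thread: extending Thread and
// implementing Runnable. Business logic lives in run(); the thread
// is launched with start(), never by calling run() directly.
public class CreateThreadDemo {
    static class MyThread extends Thread {
        @Override public void run() {
            System.out.println("from Thread subclass");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new MyThread();                                   // way 1: subclass
        Thread t2 = new Thread(() -> System.out.println("from Runnable")); // way 2: Runnable
        t1.start();
        t1.join();   // wait, so the two messages print in a fixed order
        t2.start();
        t2.join();
    }
}
```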

Why use thread pools?

With the above, you could already develop a multithreaded program, so why introduce a thread pool? Mainly because the direct-creation approach above has the following problems:

Consider a thread's working cycle: creation takes time T1, task execution takes T2, and destruction takes T3. Often T1 + T3 is greater than T2, so creating threads frequently wastes a large share of the total time.

When a task arrives, creating a fresh thread for it is inefficient; taking an available thread directly from a pool is faster. The thread pool removes the thread-creation step from the path of each task, which saves time and improves efficiency.

A thread pool can manage and control threads. Threads are a scarce resource; created without limit, they not only consume system resources but also reduce the stability of the system. A pool allows unified allocation, tuning, and monitoring.

The thread pool provides queues for buffering tasks waiting to be executed.

Summarizing the reasons above, the conclusion is: in normal work, if you develop multithreaded programs, try to use a thread pool to create and manage threads.

From the API point of view, there are two ways to create threads through a pool: the native thread pool, and the simpler factory methods provided by Java's concurrent package. The latter is really a simplified wrapper around native thread pool creation that makes it more convenient for callers, but the principle is the same, so it is important to understand how the native thread pool works.

ThreadPoolExecutor

Create a thread pool through ThreadPoolExecutor, whose constructor looks like this:

```java
/*
 * public ThreadPoolExecutor(int corePoolSize,
 *                           int maximumPoolSize,
 *                           long keepAliveTime,
 *                           TimeUnit unit,
 *                           BlockingQueue<Runnable> workQueue)
 *
 * corePoolSize        - the number of core threads
 * maximumPoolSize     - the maximum number of threads
 * keepAliveTime/unit  - the maximum survival time of a thread after it goes idle
 * workQueue           - the buffer queue of the pool; tasks not yet executed wait here
 *
 * Monitor the queue length and make sure the queue is bounded.
 * An improperly sized pool can slow processing, reduce stability, and leak memory:
 * with too few threads, the queue keeps growing and consumes too much memory;
 * with too many threads, frequent context switching slows the whole system.
 * The queue length is critical: it must be bounded so that an overwhelmed pool
 * can temporarily reject new requests.
 * The default ExecutorService implementations use an unbounded LinkedBlockingQueue.
 */
private ThreadPoolExecutor executor = new ThreadPoolExecutor(
        corePoolSize, corePoolSize + 1,
        10L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(1000));
```

Let's first explain the meaning of each parameter (if it seems vague on a first read, a rough impression is enough; the workflow summary below ties them together).

corePoolSize

The size of the core pool.

After the thread pool is created, by default it contains no threads; a thread is created only when a task arrives. The exception is calling prestartAllCoreThreads() or prestartCoreThread(), which pre-creates corePoolSize threads, or one thread, before any task arrives. So by default the pool starts with 0 threads, each arriving task gets a new thread, and once the thread count reaches corePoolSize, further arriving tasks are placed in the buffer queue.

maximumPoolSize

The maximum number of threads in the pool. This is also a very important parameter: it indicates the most threads the pool will ever create.

keepAliveTime

Indicates how long an idle thread survives before it terminates. By default, keepAliveTime only takes effect while the pool holds more than corePoolSize threads: a thread that has been idle for keepAliveTime is terminated, until the thread count drops back down to corePoolSize.

However, if the allowCoreThreadTimeOut(boolean) method has been called, keepAliveTime also applies when the thread count is at or below corePoolSize, so the pool can shrink all the way to 0 threads.

unit

The time unit of the parameter keepAliveTime.

workQueue

A blocking queue is used to store tasks waiting to be executed. The choice of this parameter is also important and will have a significant impact on the running process of the thread pool. Generally speaking, the blocking queue here has the following options: ArrayBlockingQueue, LinkedBlockingQueue, SynchronousQueue.

threadFactory

Thread factory, mainly used to create threads.

handler

The policy applied when a task is rejected. It has the following four standard values:

ThreadPoolExecutor.AbortPolicy: discards the task and throws a RejectedExecutionException

ThreadPoolExecutor.DiscardPolicy: also discards the task, but throws no exception

ThreadPoolExecutor.DiscardOldestPolicy: discards the oldest task at the head of the queue, then tries to submit the current task again (repeating this process)

ThreadPoolExecutor.CallerRunsPolicy: the task is executed by the calling thread itself.

How do these parameters work together? When a task is submitted, the pool proceeds roughly as follows: if fewer than corePoolSize threads are running, a new thread is created to run the task; otherwise the task is placed in workQueue; if the queue is full and fewer than maximumPoolSize threads exist, a new non-core thread is created; and if the queue is full and the thread count has reached maximumPoolSize, the task is rejected according to the handler policy.
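This flow can be wired up in a small, runnable sketch; the pool sizes, queue capacity, and task count here are illustrative values, not recommendations:

```java
import java.util.concurrent.*;

// A bounded ThreadPoolExecutor built with explicit parameters.
public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4,                                  // core and maximum pool size
                10L, TimeUnit.SECONDS,                 // idle timeout for non-core threads
                new LinkedBlockingQueue<>(100),        // bounded buffer queue
                new ThreadPoolExecutor.AbortPolicy()); // reject with an exception when saturated

        for (int i = 0; i < 10; i++) {
            final int id = i;
            executor.execute(() ->
                System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        executor.shutdown();                           // stop accepting new tasks
        executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(executor.getCompletedTaskCount()); // all 10 tasks ran
    }
}
```

With these numbers, the first two tasks get core threads and the rest wait in the queue; no non-core thread is ever needed because the queue never fills.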

These are the core principles of native thread pool creation. Besides the native pool, the concurrent package also provides simple factory methods; as mentioned above, they wrap the native thread pool so that developers can easily and quickly create the pool they need.

Executors.newSingleThreadExecutor

Creates a thread pool in which there is always exactly one thread. If that thread exits because of an exception, a new thread replaces it. This pool guarantees that all tasks execute in the order in which they are submitted.

newFixedThreadPool

Creates a fixed-size thread pool. Each submitted task creates a thread, until the pool reaches its maximum size. Once the maximum is reached, the size stays constant; if a thread ends because of an exception during execution, the pool adds a new one.

newCachedThreadPool

The number of threads in this pool is not fixed and adjusts to the actual workload. If there are idle threads, they are reused first; if there are none when a task is submitted, a new thread is created. This pool is not recommended in normal development, because in extreme cases newCachedThreadPool creates so many threads that CPU and memory resources are exhausted.

newScheduledThreadPool

This pool runs tasks periodically with a fixed number of threads. The period is specified through scheduleAtFixedRate or scheduleWithFixedDelay.

PS: when writing timed tasks (if you are not using the Quartz framework), it is best to use this pool, because it ensures there are always live threads in it.
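A minimal sketch of a fixed-rate periodic task; the period, sleep time, and counter are illustrative, and the final check only asserts that several periods elapsed:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// scheduleAtFixedRate fires the task at a fixed period until cancelled.
public class ScheduledDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        AtomicInteger ticks = new AtomicInteger();
        ScheduledFuture<?> future = scheduler.scheduleAtFixedRate(
                ticks::incrementAndGet,
                0, 50, TimeUnit.MILLISECONDS);   // start now, then every 50 ms
        Thread.sleep(220);                       // let a few periods elapse
        future.cancel(false);                    // stop further runs
        scheduler.shutdown();
        System.out.println(ticks.get() >= 3);    // several ticks should have fired
    }
}
```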

ThreadPoolExecutor is recommended.

One item in Alibaba's Java development manual recommends not creating thread pools through Executors, but through ThreadPoolExecutor instead.

The main reason is that creating a pool through Executors does not pass the core parameters explicitly but uses default values, so we tend to ignore what the parameters mean. If the business scenario is demanding, there is a risk of resource exhaustion. In addition, using ThreadPoolExecutor gives us a clearer understanding of the pool's running rules, which benefits both interviews and technical growth.

Visibility

Visibility means that when one thread changes a shared variable, other threads can see the change immediately. There are several ways to ensure visibility:

Volatile

A variable declared with the volatile keyword gets an extra lock-prefixed instruction in the generated machine code, which acts as a memory barrier that enforces the ordering of memory operations. When a volatile variable is written, the value must be written back to main memory.

Because the processor implements a cache-coherence protocol, the write to main memory invalidates the copies in other processors' caches; that is, the other threads' working memory becomes invalid and they must re-read the data from main memory.
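A minimal sketch of the visibility guarantee; the class and field names are illustrative. Without volatile on the flag, the worker thread could legally spin forever on a stale cached value:

```java
// volatile guarantees that the main thread's write to `stop`
// becomes visible to the spinning worker thread.
public class VolatileDemo {
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) { }                   // re-reads stop on every iteration
            System.out.println("worker saw the change");
        });
        worker.start();
        Thread.sleep(100);
        stop = true;                            // write is flushed to main memory
        worker.join();                          // returns because the worker exited
    }
}
```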

At this point, I believe you have a deeper understanding of the core knowledge of Java multithreading. You might as well try it out in practice. Follow us and continue to learn!
