

What is the implementation principle of system threads?

2025-03-29 Update From: SLTechnology News&Howtos


This article introduces the implementation principle of system threads: how operating systems and the JVM implement threads, the three threading models, thread scheduling and priorities, the Java thread states, and the main ways to create threads in Java.

Operating systems provide thread implementations (kernel threads), and the thread is the basic unit of CPU scheduling. A thread is a lighter scheduling and execution unit than a process. Introducing threads separates a process's resource allocation from its execution scheduling: each thread shares the process's resources (address space, file I/O, etc.) yet is scheduled independently. Programming languages generally provide an API for operating on kernel threads, and Java is no exception. There are three main models for mapping language threads to kernel threads:

Using kernel threads (1:1 model)

Using user threads (1:N model)

Using user threads + lightweight processes (LWP) (N:M model)

Review of basic concepts

Let's review a few key concepts in the operating system:

Kernel thread (Kernel-Level Thread, KLT): a thread supported directly by the operating system kernel; creating, destroying, and switching kernel threads is relatively expensive because each operation involves the kernel.

User thread (User Thread, UT): a thread built entirely in user space; the system kernel is unaware of its existence, so creation, destruction, and switching are cheap.

Lightweight process (Light Weight Process, LWP): the intermediate layer between user-level threads and kernel-level threads; it is the interface the operating system provides for user code to operate on kernel threads.

Process (P): a user process.

Three threading models of operating system

Here are three threading models in turn:

Kernel threading model:

The kernel threading model relies entirely on kernel threads (Kernel-Level Thread, KLT) provided by the operating system kernel to implement multithreading. In this model, thread switching and scheduling are handled by the system kernel, which maps the tasks executed by the threads onto the available processors.

Programs generally do not use kernel threads directly, but instead use a higher-level interface to them: the lightweight process (Light Weight Process, LWP). Lightweight processes are what we usually call threads. Because each lightweight process is backed by one kernel thread, lightweight processes can exist only where kernel threads are supported. This 1:1 relationship between lightweight processes and kernel threads is called the one-to-one threading model.

User threading model:

In a broad sense, any thread that is not a kernel thread can be regarded as a user thread (User Thread, UT). By this definition, a lightweight process is also a user thread, but the implementation of lightweight processes is always built on the kernel, and many of their operations require system calls, so their efficiency is limited.

The advantage of using user thread is that it does not need the support of the system kernel, and the disadvantage is that without the support of the system kernel, all thread operations need to be handled by the user program itself. Thread creation, switching, and scheduling are all issues that need to be considered, and because the operating system only allocates processor resources to processes, problems such as "how to handle blocking" and "how to map threads to other processors in multiprocessor systems" will be extremely difficult or even impossible to solve.

Therefore, programs implemented with user threads are generally more complex. ("Complex" here does not mean every such program must hand-write its thread operations; many rely on a specific thread library for the basic operations, and the complexity is encapsulated inside the library.) Apart from multithreaded programs on operating systems that do not support multithreading (such as DOS) and a small number of programs with special needs, fewer and fewer programs use user threads. Languages such as Java and Ruby once used user threads and eventually abandoned them.

Mixed threading model:

Besides the model that relies on kernel threads and the model implemented entirely by user programs, there is a third implementation that uses kernel threads together with user threads. In this hybrid implementation, both user threads and lightweight processes exist.

User threads are still built entirely in user space, so creating, switching, and destroying them remains cheap, and large-scale concurrency of user threads can be supported. The lightweight processes supported by the operating system act as a bridge between user threads and kernel threads, so the thread scheduling and processor mapping provided by the kernel can still be used. Moreover, user threads' system calls are completed through lightweight processes, which greatly reduces the risk that the whole process becomes blocked.

In this hybrid model, the ratio of user threads to lightweight processes is variable, i.e., an N:M relationship. Many UNIX operating systems, such as Solaris and HP-UX, provide implementations of this N:M threading model.

For the Sun JDK, both the Windows and Linux versions are implemented with the one-to-one threading model: one Java thread is mapped to one lightweight process, because the threading model provided by Windows and Linux is one-to-one. On the Solaris platform, the operating system's threading features can support both the one-to-one model (through Bound Threads or Alternate Libthread) and the many-to-many model (through LWP/Thread Based Synchronization), so the Solaris version of the JDK provides two platform-specific virtual machine parameters, -XX:+UseLWPSynchronization (the default) and -XX:+UseBoundThreads, to specify which threading model the virtual machine uses.

Thread scheduling mode of operating system

Thread scheduling refers to the process that the system allocates the right to use the processor to the thread.

There are two main thread scheduling methods, which are collaborative thread scheduling (Cooperative Threads-Scheduling) and preemptive thread scheduling (Preemptive Threads-Scheduling).

Collaborative scheduling

In a multithreaded system with cooperative scheduling, the execution time of a thread is controlled by the thread itself: after a thread finishes its work, it must actively notify the system to switch to another thread. The greatest advantage of cooperative multithreading is that it is easy to implement, and because a thread only switches after finishing its own work, the switch point is known to the thread itself, so there are no thread-synchronization problems. The coroutines in the Lua language are such an implementation. The disadvantages are just as obvious: thread execution time is uncontrollable, and if a buggy thread never tells the system to switch, the program will block there forever. Long ago, the Windows 3.x system used cooperative scheduling for multitasking; it was so unstable that a single process refusing to give up CPU time could crash the whole system.

Preemptive scheduling

In a multithreaded system with preemptive scheduling, each thread is allocated execution time by the system, and thread switching is not decided by the thread itself (in Java, Thread.yield() can give up execution time, but a thread has no way to actively obtain more of it).

In this way of thread scheduling, the execution time of the thread can be controlled by the system, and there is no problem that a thread will block the whole process.
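As a minimal sketch of this in Java (the class name YieldDemo is invented for illustration): the worker thread below hints to the scheduler with Thread.yield(), but the system remains free to ignore the hint and decides all time slicing itself.

```java
public class YieldDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                System.out.println("worker: " + i);
                Thread.yield(); // a hint to the scheduler; it may be ignored
            }
        });
        worker.start();
        worker.join(); // the main thread waits for the worker to finish
        System.out.println("done");
    }
}
```

The joins make the printed order deterministic even though the scheduler, not the threads, decides the actual time slicing.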

Java uses preemptive thread scheduling. Future versions of the JDK may provide coroutines for multitasking.

In contrast to the Windows 3.x example above, the Windows 9x/NT kernels implement multiple processes preemptively; when a process misbehaves, we can use the Task Manager to kill it without bringing down the whole system.

Thread priority

Although Java thread scheduling is done automatically by the system, we can still "advise" the system to give some threads a little more execution time and others a little less: this is done by setting thread priorities.

The Java language defines 10 thread priority levels (Thread.MIN_PRIORITY = 1 up to Thread.MAX_PRIORITY = 10). When two threads are in the Ready state at the same time, the thread with the higher priority is more likely to be chosen by the system to run.
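A small sketch of the priority API (the class name PriorityDemo is illustrative; the constants are defined by java.lang.Thread):

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread low = new Thread(() -> {});
        Thread high = new Thread(() -> {});
        low.setPriority(Thread.MIN_PRIORITY);  // 1, lowest Java priority
        high.setPriority(Thread.MAX_PRIORITY); // 10, highest Java priority
        System.out.println(low.getPriority() + " " + high.getPriority());
        System.out.println(Thread.NORM_PRIORITY); // the default priority, 5
    }
}
```

Note that setPriority must be called on a thread you own; these values are only hints to the scheduler.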

However, thread priority is not very reliable: Java threads are mapped to the system's native threads, so thread scheduling ultimately depends on the operating system. Although many operating systems provide the concept of thread priority, it does not necessarily correspond one-to-one with Java thread priorities.

For example, Solaris has 2147483648 (2^31) priority levels, while Windows has only 7. A system with more priorities than Java can leave some space between the levels it uses, but a system with fewer priorities must map several Java priorities to the same native priority.

Windows thread priorities correspond to Java thread priorities as follows: of the seven Windows priority levels, the JDK on the Windows platform uses the six other than THREAD_PRIORITY_IDLE. (The original article showed the full mapping in a figure.)

Java thread states

The Java language defines six thread states. At any point in time, a thread is in one and only one of these states, which are as follows:

New: a thread that has been created but not yet started is in this state.

Runnable: includes both Running and Ready from the operating-system thread states; a thread in this state may be executing, or waiting for the CPU to allocate execution time to it.

Waiting: threads in this state are not allocated CPU execution time; they wait to be explicitly woken up by other threads.

The following methods put a thread into indefinite waiting: Object.wait() without a Timeout parameter; Thread.join() without a Timeout parameter; LockSupport.park().

Timed Waiting: threads in this state are not allocated CPU execution time either, but they need not wait to be explicitly woken by other threads; the system wakes them automatically after a certain time.

The following methods put a thread into timed waiting: Thread.sleep(); Object.wait() with a Timeout parameter; Thread.join() with a Timeout parameter; LockSupport.parkNanos(); LockSupport.parkUntil().

Blocked: the thread is blocked. The difference from the waiting states is that a blocked thread is waiting to acquire an exclusive lock, an event that happens when another thread releases that lock, whereas a waiting thread is waiting for a period of time to pass or for a wake-up action. A thread enters this state while the program is waiting to enter a synchronized region.

Terminated: the state of a terminated thread; the thread has finished execution.
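Several of these states can be observed directly with Thread.getState(). A hedged sketch (the class name StateDemo is invented here; the timings assume the started thread reaches its sleep() within 50 ms, which holds in practice but is not guaranteed by the specification):

```java
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200); // puts t into Timed Waiting
            } catch (InterruptedException ignored) {
            }
        });
        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(50);                 // give t time to enter sleep()
        System.out.println(t.getState()); // TIMED_WAITING: inside sleep()
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() has returned
    }
}
```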

Java multithreading implementation

Implementation 1: inherit the Thread class

// Implementation 1: subclass Thread
public class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("MyThread run...");
    }
}

Implementation 2: implement the Runnable interface

public class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("MyRunnable run...");
    }
}
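Both implementations are started the same way: construct a Thread and call start(), never run() directly. A self-contained sketch (the class name StartDemo is illustrative; the joins keep the output order deterministic):

```java
public class StartDemo {
    static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("MyThread run...");
        }
    }

    static class MyRunnable implements Runnable {
        @Override
        public void run() {
            System.out.println("MyRunnable run...");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new MyThread();
        t1.start();
        t1.join(); // wait, so the two lines print in a fixed order
        Thread t2 = new Thread(new MyRunnable());
        t2.start();
        t2.join();
    }
}
```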

Implementation 3: implement the Callable interface and get the asynchronous return value using FutureTask

public static void main(String[] args) throws ExecutionException, InterruptedException {
    class MyCallable implements Callable<String> {
        @Override
        public String call() throws Exception {
            return "MyCallable";
        }
    }
    FutureTask<String> task = new FutureTask<>(new MyCallable());
    Thread c = new Thread(task);
    c.start();
    System.out.println(task.get()); // blocks until call() has completed
}

Implementation 4: use CompletableFuture (JDK 8 and above) for asynchronous computation.

Java 8 provides CompletableFuture, a powerful extension of Future that simplifies asynchronous programming: computation results can be processed through callbacks in a functional style, and CompletableFuture instances can be transformed and combined.

public class CompletableFutureTest {
    public static void main(String[] args) {
        ExecutorService threadPool = Executors.newFixedThreadPool(2);
        CompletableFuture<String> futureTask = CompletableFuture.supplyAsync(new Supplier<String>() {
            @Override
            public String get() {
                System.out.println("task start");
                try {
                    Thread.sleep(10000);
                } catch (Exception e) {
                    e.printStackTrace();
                    return "execute failure";
                }
                System.out.println("task end");
                return "execute success";
            }
        }, threadPool);
        // Consume the result asynchronously; this line can sit alongside other code.
        futureTask.thenAccept(e -> System.out.println("future task result:" + e));
        System.out.println("main thread end");
    }
}

Output:

task start
main thread end
task end
future task result:execute success
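The transformation and combination mentioned above can be sketched with thenCombine and thenApply (the class name ComposeDemo is made up; join() blocks for the final result):

```java
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 2);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 3);
        int result = a.thenCombine(b, Integer::sum) // combine two results: 2 + 3
                      .thenApply(n -> n * 10)       // transform: 5 -> 50
                      .join();                      // block for the final value
        System.out.println(result);
    }
}
```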

Implementation 5: use a thread pool via the ThreadPoolExecutor class.
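A minimal usage sketch under assumed parameters (the class name PoolDemo and the pool sizes are illustrative, not prescriptive):

```java
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                           // core and maximum pool size
                60L, TimeUnit.SECONDS,          // keep-alive for idle non-core threads
                new LinkedBlockingQueue<>(10)); // bounded work queue
        Future<Integer> f = pool.submit(() -> 6 * 7); // submit a Callable<Integer>
        System.out.println(f.get());                  // blocks until the task completes
        pool.shutdown();                              // stop accepting new tasks
    }
}
```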

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, defaultHandler);
}

<T> Future<T> submit(Callable<T> task);
Future<?> submit(Runnable task);

This concludes the study of "what is the implementation principle of system threads". Pairing the theory above with the code examples is the best way to learn, so try them out.
