
How to use Go to develop concurrent programs


This article introduces how to use Go to develop concurrent programs. It goes into some detail and should be a useful reference; interested readers are encouraged to read on.

The CPU is the computing and control core of a computer and carries all of its computing tasks. Over the last half century, rapid advances in semiconductor technology have greatly increased the number of transistors in integrated circuits, which in turn has greatly improved CPU performance. The famous Moore's Law, "the number of transistors that can be integrated on a chip doubles roughly every 18 months", describes this trend.

Although densely packed transistors improve CPU processing performance, they also bring high heat output and high cost to a single chip. At the same time, limited by materials technology, the growth rate of transistor density on a chip has slowed. In other words, programs can no longer rely simply on hardware improvements to run faster. The emergence of multi-core CPUs points to another way to speed up a program: split its execution into several steps that can run in parallel or concurrently, execute them on different CPU cores at the same time, and finally merge the results of each part to obtain the final result.

Parallelism and concurrency are common concepts in program execution. The difference between them is as follows:

Parallelism means that two or more programs are executing at the same instant.

Concurrency means that two or more programs are in progress within the same period of time.

For programs executed in parallel, multiple programs run on the CPU at the same time from both a macro and a micro point of view. This requires the CPU to provide multi-core computing power; the programs are assigned to different CPU cores and executed simultaneously.

For programs executed concurrently, multiple programs only appear, from a macro point of view, to run on the CPU at the same time. Even a single-core CPU can allocate execution time slices to multiple programs through time-division multiplexing, rotating them on the CPU quickly enough to create the macroscopic effect of simultaneous execution. From a micro point of view, however, these programs actually execute serially on the CPU.
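Jumping ahead to Go's goroutines (introduced below), the following is a minimal sketch of the distinction; the CPU-bound helper work and its loop bound are illustrative assumptions, not taken from this article. With runtime.GOMAXPROCS(1) the four tasks are concurrent but not parallel, because only one of them occupies a core at any instant; raising the setting on a multi-core CPU lets them also run in parallel.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

// work is an illustrative CPU-bound task.
func work(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    sum := 0
    for i := 0; i < 10_000_000; i++ {
        sum += i
    }
    fmt.Printf("task %d done (sum=%d)\n", id, sum)
}

func main() {
    runtime.GOMAXPROCS(1) // concurrency only: the tasks interleave on one logical processor
    // runtime.GOMAXPROCS(runtime.NumCPU()) // parallelism: the tasks may run on several cores at once

    var wg sync.WaitGroup
    for id := 1; id <= 4; id++ {
        wg.Add(1)
        go work(id, &wg)
    }
    wg.Wait()
}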

MPG Thread Model of Go

Go is considered a high-performance concurrency language because it supports concurrency natively. Let us first look at the connections and differences among processes, threads, and coroutines.

In a multiprogramming system, a process is the dynamic execution of a program with independent functionality over a data set. It is the operating system's basic unit of resource allocation and scheduling, and the carrier of an application program.

A thread is a single sequential control flow within a running program and the basic unit of CPU scheduling and dispatch. It is a smaller unit of independent execution than a process. A process can contain one or more threads; these threads share the resources held by the process, are scheduled onto the CPU, and together complete the process's work.

In Linux, based on different rights of access to resources, the operating system divides memory into kernel space and user space. Code in kernel space can directly access the underlying resources of the computer, such as CPU and I/O resources, and provides user-space code with access to those resources. User space is where applications run; it cannot access the underlying resources directly and must request them from kernel space through system calls, library functions, and so on.

Similarly, threads can be divided into kernel threads and user threads. A kernel thread is managed and scheduled by the operating system. It is a kernel scheduling entity that can directly operate on the underlying resources of the computer and can take full advantage of multi-core parallel computing. However, switching kernel threads requires the CPU to switch into kernel mode, which carries a certain overhead, and the number of threads that can be created is limited by the operating system. User threads are created, managed, and scheduled by user-space code and are invisible to the operating system. A user thread's data lives in user space and switching does not require entering kernel mode, so switches are cheap and efficient, and in theory the number of threads that can be created is limited only by available memory.

A coroutine is a kind of user thread, a lightweight thread. Coroutine scheduling is controlled entirely by user-space code; a coroutine has its own register context and stack, both stored in user space; and switching between coroutines does not require entering kernel mode, so it is very fast. Coroutines also bring significant technical challenges to developers, however: saving and restoring context during switches in user space, managing stack sizes, and so on.

Go is one of the few languages that implements coroutine-style concurrency at the language level. It adopts a special two-level threading model: the MPG thread model (see the figure below).

MPG thread model

M (machine) is the mapping of a kernel thread inside a Go process. It corresponds one-to-one with a kernel thread and represents the resource that actually performs computation. During its lifetime, an M is associated with only one kernel thread.

P (processor) represents the context required to execute Go code. An M combined with a P provides an effective running environment for G's, and the pairing between M and P is not fixed. The maximum number of P's determines the concurrency scale of a Go program and is set with runtime.GOMAXPROCS, as the short sketch after these definitions shows.

G (goroutine) is a lightweight user thread. It encapsulates a code fragment and carries information such as its stack, its state, and the code to execute.
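As a small, hedged illustration of the P setting mentioned above: calling runtime.GOMAXPROCS with 0 reports the current maximum number of P's without changing it, a positive argument changes it and returns the previous value, and runtime.NumCPU reports the number of logical cores available to the process.

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Logical CPU cores visible to the process.
    fmt.Println("NumCPU:", runtime.NumCPU())

    // GOMAXPROCS(0) only queries the current setting, i.e. the maximum number of P's.
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

    // A positive argument changes the setting and returns the previous value.
    prev := runtime.GOMAXPROCS(2)
    fmt.Println("previous GOMAXPROCS:", prev)
}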

In actual execution, M and P combine to provide an effective running environment for G (as shown in the figure below). Runnable G's are queued, in order, on P's runnable G queue, waiting to be scheduled and executed. When an I/O system call in a G blocks its M, P detaches from that M and combines with an M taken from the scheduler's idle M list, or with a newly created M, so that the other G's in P's runnable queue keep executing. Because the number of M's executing in parallel remains unchanged, the program maintains high CPU utilization.

Schematic diagram of M and P combination

When the system call in a G returns, its M tries to acquire a P context for the G; if that fails, the G is placed in the global runnable G queue to wait for some other P to pick it up. Newly created G's are also placed in the global runnable G queue, waiting for the scheduler to distribute them to a suitable P's runnable queue. When an M is combined with a P, it takes G's from that P's runnable queue without locking. When P's runnable queue is empty, P takes G's from the global runnable queue under a lock. When the global runnable queue is also empty, P tries to "steal" G's from the runnable queues of other P's.

Goroutine and channel

Multiple threads of a concurrent program execute on the CPU at the same time. Because of shared resources and race conditions, a concurrency model is needed to coordinate the work of different threads. Go advocates the CSP concurrency model to coordinate tasks between threads, and CSP advocates sharing memory between threads by communicating.

Go implements the CSP concurrency model through goroutines and channels:

A goroutine, the concurrent entity in Go, is a lightweight user thread that sends and receives messages.

A channel is the conduit through which goroutines send and receive those messages.

The CSP concurrency model is similar to a familiar synchronous queue, but it pays more attention to how messages are transmitted: it decouples the goroutine that sends a message from the goroutine that receives it. A channel can be created and accessed independently, and passed around and used by different goroutines.
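A minimal sketch of that decoupling; the names producer, consumer, messages, and done are illustrative. The channel is created on its own in main and then handed to a sending goroutine and a receiving goroutine that know nothing about each other.

package main

import "fmt"

// producer only sees a send-only view of the channel.
func producer(out chan<- string) {
    out <- "hello from the producer"
    close(out)
}

// consumer only sees a receive-only view of the channel.
func consumer(in <-chan string, done chan<- struct{}) {
    for msg := range in {
        fmt.Println("received:", msg)
    }
    done <- struct{}{}
}

func main() {
    messages := make(chan string) // created independently of any goroutine
    done := make(chan struct{})

    go producer(messages)
    go consumer(messages, done)

    <-done // wait until the consumer has drained the channel
}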

Using the keyword go, you can start a goroutine to execute a code snippet concurrently, in the following form:

go expression
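A short sketch of the go statement; the function doWork and the use of sync.WaitGroup are illustrative. The expression after go is started in a new goroutine, and the program has to wait for the goroutines somehow, otherwise main may return before they run.

package main

import (
    "fmt"
    "sync"
)

func doWork(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("goroutine", id, "running")
}

func main() {
    var wg sync.WaitGroup

    wg.Add(1)
    go doWork(1, &wg) // go + named function call

    wg.Add(1)
    go func() { // go + anonymous function call
        defer wg.Done()
        fmt.Println("anonymous goroutine running")
    }()

    wg.Wait() // without this, main could exit before the goroutines are scheduled
}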

A channel is a reference type, and the type of the data it transmits must be specified when it is declared. The declaration forms are as follows:

var name chan T   // bidirectional channel
var name chan<- T // send-only channel
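A hedged sketch of declaring, creating, and using channels; the variable names are illustrative. A declared channel is nil until it is created with make, a send on an unbuffered channel waits for a matching receive, and a buffered channel can hold a limited number of values without a receiver.

package main

import "fmt"

func main() {
    var ch chan int     // declared but nil; sending or receiving on it would block forever
    ch = make(chan int) // unbuffered: a send waits for a matching receive

    go func(out chan<- int) { // the goroutine gets a send-only view
        out <- 42
    }(ch)
    fmt.Println(<-ch) // receive in main

    buffered := make(chan string, 2) // buffered: up to 2 sends complete without a receiver
    buffered <- "a"
    buffered <- "b"
    close(buffered)
    for s := range buffered { // range drains the buffer until the closed channel is empty
        fmt.Println(s)
    }
}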
