

What Is the Use of Goroutines and Coroutines in Go?

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces what goroutines and coroutines are good for in the Go language. Many people run into these questions in real work, so let's walk through them together. I hope you read carefully and come away with something useful!

1. A Brief Introduction to Coroutines

Everyone is familiar with processes and threads, but coroutines are less well known. They are not a mechanism unique to Go: Lua, Ruby, Python, Kotlin, C++ and others support them too. The difference is that some languages support coroutines at the language level, while others do so through libraries. Go supports them natively at the language level, and this article looks at coroutines from Go's perspective.

1.1 Basic concepts and origin of coroutines

The English term is coroutine, i.e. a "cooperative routine".

A coroutine is a computer program component that allows execution to be suspended and resumed, generalizing the subroutine for non-preemptive (cooperative) multitasking.

Coroutines are well suited to implementing familiar program components such as cooperative tasks, exceptions, event loops, iterators, and pipes.

According to Donald Knuth, Melvin Conway applied the term coroutine to the construction of assembly programs in 1958, though he did not publish the first paper on coroutines until 1963.

1.2 Coroutines compared with processes and threads

Let's review the basic characteristics of processes, threads, and coroutines:

A process is the smallest unit of system resource allocation, comprising a text segment, data segment, stack segment, and so on. Creating and destroying a process happens at the system-resource level, so it is a relatively expensive operation. Processes are scheduled preemptively and have three states: waiting, ready, and running. Processes are isolated from one another, each with its own system resources, which is safer but makes inter-process communication inconvenient.

A process is the carrier container for threads. The threads of a process share its resources while also holding a small amount of independent state, so a thread is lighter than a process, and communication between threads within a process is easier than between processes. This, however, brings problems of synchronization, mutual exclusion, and thread safety, although multi-threaded programming remains the mainstream of server programming today. The thread is also the smallest unit of CPU scheduling; thread switches occur while multiple threads run, cycling through the same waiting, ready, and running states as processes.

In some references, coroutines are called micro-threads or user-mode lightweight threads. Coroutine scheduling needs no kernel involvement: it is decided entirely by the user-mode program, so coroutines are invisible to the operating system. Because control stays in user mode, there is no forced loss of the CPU as in preemptive scheduling. Multiple coroutines schedule themselves cooperatively, and another coroutine runs only after the current one voluntarily gives up control, which avoids the overhead of kernel context switches and improves CPU utilization.

A simple comparison between preemptive and cooperative scheduling: under preemptive scheduling the kernel decides when to switch, which guarantees fairness but costs a kernel-mode context switch; under cooperative scheduling the running coroutine decides when to yield, which is cheap but depends on every coroutine behaving well.

Seeing this, we can't help but wonder: if cooperative scheduling has such advantages, why does preemptive scheduling dominate? If we keep digging, we may be able to answer that question.

1.3 What we face in everyday work

A factor we constantly weigh when writing programs is machine utilization, which is easy to understand. Of course, there is usually a trade-off between machine utilization on one side and development efficiency and maintenance cost on the other. In plain terms: waste people or waste machines, pick one.

The most expensive machine resource is the CPU. Programs are broadly divided into CPU-bound and I/O-bound. For CPU-bound programs there may not be much room for optimization, but for I/O-bound programs there is plenty. Imagine how wasteful it is for a program to leave the CPU idle while it waits on I/O.

To improve the CPU utilization of I/O-bound programs, we turn to multi-process and multi-threaded programming, letting multiple tasks share the machine under preemptive scheduling, which raises CPU utilization. However, scheduling and switching between threads consumes resources of its own, so the number of threads cannot grow without bound.

Most of the programs we write today use synchronous I/O, which is not efficient enough, so asynchronous I/O frameworks have appeared. Asynchronous frameworks are harder to program against than synchronous ones, but async is undeniably a good direction for optimization. Let's compare synchronous and asynchronous I/O.

Synchronous means that after the application issues an I/O request, it must wait for, or poll, the kernel I/O operation to complete before it can continue executing. Asynchronous means that the application continues executing after issuing the I/O request, and is notified, or has a registered callback invoked, when the kernel I/O operation completes.

Take server-side programs developed in C++ on Linux as an example. Linux's asynchronous I/O arrived relatively late, so I/O-multiplexing technologies such as epoll still hold a lot of territory, yet synchronous I/O is not as efficient as asynchronous I/O. Current optimization directions therefore include asynchronous I/O frameworks (such as boost.asio) and coroutine solutions (such as Tencent's libco).

2. Go and Coroutines

Go natively supports coroutines at the language level and calls them goroutines; they are an important pillar of Go's strong concurrency support. Go's CSP concurrency model is implemented through goroutines and channels; we will cover the CSP model in a later article.

2.1 Cooperative scheduling and the scheduler

In cooperative scheduling, user-mode coroutines voluntarily give up the CPU for other coroutines to use, which does improve CPU utilization. But one can't help asking: what if a user-mode coroutine is not smart enough, and doesn't know when to relinquish control or when to resume execution?

Reading this far, the advantage of preemptive scheduling suddenly becomes clear: preemption is handled by the system kernel, the user program need not participate, and kernel involvement also makes programs portable across platforms. In the end, each approach has its strengths!

To solve this problem, we need an intermediate layer to schedule the coroutines, so that tens of thousands of user-mode coroutines can run stably and in an orderly way. Let's call this middle layer a user-mode coroutine scheduler.

2.2 Goroutines and Go's scheduler model

The Go language has been in development for some 12 years since the end of 2007, and Go's scheduler was not built overnight. In its first few versions the scheduler was crude and could not support large-scale concurrency.

After several versions of iteration and optimization it now performs excellently, but let's briefly review the scheduler's development.

Go's scheduler is very complex; limited space means this article covers only basic concepts and principles, with an in-depth look at the scheduler to follow.

Recent versions of the Go scheduler use the GPM model, built from a few concepts to look at first: G is a goroutine, with its own stack and instruction pointer; P is a logical processor holding a local run queue of G's, with the number of P's set by GOMAXPROCS; and M is a machine, i.e. an OS thread, which must acquire a P in order to execute G's.

The GPM model uses an M:N scheduler to run any number of coroutines on any number of system threads, which keeps context switches fast and exploits multiple cores, at the cost of a more complex scheduler.

A simplified account of GPM scheduling goes as follows:

In current Go versions, a newly created goroutine is placed in the local run queue of the logical processor P on which it was created, falling back to the global queue when the local queue is full, and there it waits for the scheduler to execute it on a P.

Once an M is bound to a P, the M keeps taking G's from P's local queue without locking and switches to each G's stack to execute it. When P's local queue is empty, it takes G's from the global queue; when the global queue is also empty, it tries to steal G's from other P's, balancing load across the P's.

That's all for "what goroutines and coroutines are good for in the Go language". Thank you for reading!

© 2024 shulou.com SLNews company. All rights reserved.
