Example Analysis of libuv event polling in node.js



When Node.js comes up, most front-end engineers think of it as a way to build servers: master only JavaScript and you can become a full-stack engineer. But the significance of Node.js goes beyond that.

Many high-level languages can reach down to the operating system. JavaScript running in the browser is an exception: the browser's sandbox walls front-end engineers off in an ivory tower of the programming world. Node.js makes up for this deficiency, letting front-end engineers reach the lower levels of the computer as well.

So for front-end engineers, Node.js is not only full-stack development capability; it also opens a door to the underlying world of the computer. This article opens that door by analyzing how Node.js is implemented.

Node.js source code structure

The /deps directory of the Node.js source repository contains more than a dozen dependencies, including modules written in C/C++ (such as libuv and V8) and modules written in JavaScript (such as acorn and acorn-plugins). The main ones are listed below.

acorn-plugins: extension modules for acorn that let acorn parse ES6 features such as class declarations.

brotli: the Brotli compression algorithm, written in C.

cares: properly "c-ares", a library written in C for handling asynchronous DNS requests.

icu-small: the ICU (International Components for Unicode) library, written in C and customized for Node.js, containing functions for working with Unicode.

llhttp: a lightweight HTTP parser written in C.

nghttp2/nghttp3/ngtcp2: handle the HTTP/2, HTTP/3, and QUIC protocols.

npm: the Node.js package manager, written in JavaScript.

openssl: an encryption library written in C, used by the tls and crypto modules.

uv: libuv, written in C, uses non-blocking I/O and gives Node.js access to system resources.

uvwasi: written in C, implements the WASI system call API.

zlib: used for fast compression; Node.js uses zlib to create synchronous, asynchronous, and streaming compression and decompression interfaces.

The most important of these are the V8 and uv directories. V8 itself cannot run code asynchronously; in the browser, asynchrony is achieved with the help of other browser threads. That is why we often say JavaScript is single-threaded: its engine only parses and executes code synchronously. In Node.js, asynchrony is implemented mainly by libuv, so let's focus on how libuv works.

What is libuv?

libuv is a cross-platform asynchronous I/O library written in C. It mainly solves the problem of I/O operations blocking the main thread. It was originally developed specifically for Node.js, but is also used by other projects such as Luvit, Julia, and pyuv. The following figure is libuv's structure diagram.
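To make this concrete, here is a minimal illustrative sketch (my own example, not Node.js source) that drives libuv directly from C. It assumes libuv is installed and the file is compiled with something like cc demo.c -luv. With no active handles or requests registered, uv_run() returns right away:

/* minimal libuv program; error handling omitted for brevity */
#include <stdio.h>
#include <uv.h>

int main(void) {
  uv_loop_t loop;
  uv_loop_init(&loop);

  /* the same uv_run() analysed below; nothing is registered, so it exits at once */
  int r = uv_run(&loop, UV_RUN_DEFAULT);
  printf("uv_run returned %d\n", r);

  uv_loop_close(&loop);
  return 0;
}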

libuv implements asynchrony in two ways, corresponding to the left and right parts highlighted by the yellow boxes in the figure above.

The left part is the network I/O module, which has different implementations on different platforms: Linux uses epoll, macOS and other BSD systems use kqueue, SunOS uses event ports, and Windows uses IOCP. Because this involves the operating system's low-level APIs and is relatively complex to understand, it is not covered in detail here.

The right part includes the file I/O module, the DNS module, and user code, which implement asynchronous operations through a thread pool. For these, libuv does not depend on low-level system APIs; instead it executes blocking file I/O operations in a global thread pool.
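The thread-pool half can be observed through libuv's public API. The illustrative sketch below (error handling omitted) uses uv_queue_work() to run a blocking task on the global thread pool; asynchronous file operations such as uv_fs_read() are dispatched to the same pool, so the event loop itself never blocks on them:

#include <stdio.h>
#include <unistd.h>
#include <uv.h>

static void blocking_work(uv_work_t* req) {
  sleep(1);   /* runs on a thread-pool thread, not on the loop thread */
}

static void after_work(uv_work_t* req, int status) {
  printf("work finished, status = %d\n", status);   /* runs back on the loop thread */
}

int main(void) {
  uv_work_t req;
  uv_queue_work(uv_default_loop(), &req, blocking_work, after_work);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}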

Event polling in libuv

The following figure is the event-polling flow chart given on the official libuv website; let's analyze it together with the code.

The core of the libuv event loop is implemented in the uv_run() function; below is part of its core code on Unix systems. Although it is written in C, it is a high-level language like JavaScript, so it is not too hard to follow. The biggest differences may be the asterisks and arrows, which we can largely ignore: a function parameter such as uv_loop_t* loop can be read as a variable loop of type uv_loop_t, and the arrow "->" can be read as a period ".", so loop->stop_flag can be read as loop.stop_flag.

int uv_run(uv_loop_t* loop, uv_run_mode mode) {
  ...
  r = uv__loop_alive(loop);
  if (!r)
    uv__update_time(loop);

  while (r != 0 && loop->stop_flag == 0) {
    uv__update_time(loop);
    uv__run_timers(loop);
    ran_pending = uv__run_pending(loop);
    uv__run_idle(loop);
    uv__run_prepare(loop);
    ...
    uv__io_poll(loop, timeout);
    uv__run_check(loop);
    uv__run_closing_handles(loop);
    ...
  }
  ...
}

uv__loop_alive

This function determines whether event polling should continue; it returns 0, and the loop exits, when there are no active tasks left in the loop object.

In the C code this kind of "task" has a formal name, a handle, which can be understood as a variable pointing to the task. These tasks fall into two categories, request and handle, representing short-lived and long-lived tasks respectively. The specific code is as follows:

static int uv__loop_alive(const uv_loop_t* loop) {
  return uv__has_active_handles(loop) ||
         uv__has_active_reqs(loop) ||
         loop->closing_handles != NULL;
}
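The public counterpart of this check is uv_loop_alive(). The illustrative sketch below registers a uv_timer_t (a long-lived handle) and a uv_fs_t (a short-lived request); while either is active the loop reports itself as alive, and uv_run() returns once both have finished:

#include <stdio.h>
#include <uv.h>

static void on_timer(uv_timer_t* t) {
  printf("timer handle fired\n");
}

static void on_stat(uv_fs_t* req) {
  printf("fs request finished, result = %zd\n", (ssize_t)req->result);
  uv_fs_req_cleanup(req);
}

int main(void) {
  uv_loop_t* loop = uv_default_loop();

  uv_timer_t timer;                         /* handle: long-lived task */
  uv_timer_init(loop, &timer);
  uv_timer_start(&timer, on_timer, 50, 0);  /* fire once after 50 ms */

  uv_fs_t stat_req;                         /* request: short-lived task */
  uv_fs_stat(loop, &stat_req, ".", on_stat);

  printf("alive before run: %d\n", uv_loop_alive(loop));  /* non-zero */
  uv_run(loop, UV_RUN_DEFAULT);
  printf("alive after run:  %d\n", uv_loop_alive(loop));  /* 0 */
  return 0;
}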

uv__update_time

To reduce the number of time-related system calls, this function caches the current system time. The precision is high, down to the nanosecond level, but the unit is still milliseconds.

The specific source code is as follows:

UV_UNUSED(static void uv__update_time(uv_loop_t* loop)) {
  loop->time = uv__hrtime(UV_CLOCK_FAST) / 1000000;
}
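The cached value is what uv_now() returns, and uv_update_time() is the public wrapper that refreshes it. A small illustrative sketch (assuming a POSIX sleep()):

#include <stdio.h>
#include <unistd.h>
#include <uv.h>

int main(void) {
  uv_loop_t* loop = uv_default_loop();

  uint64_t before = uv_now(loop);   /* cached millisecond value from loop->time */
  sleep(1);
  /* still prints 0: the cache is only refreshed by the loop or by uv_update_time() */
  printf("without update: %llu ms\n", (unsigned long long)(uv_now(loop) - before));

  uv_update_time(loop);
  /* now roughly 1000 ms */
  printf("after update:   %llu ms\n", (unsigned long long)(uv_now(loop) - before));
  return 0;
}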

uv__run_timers

Executes the callbacks of setTimeout() and setInterval() whose time threshold has been reached. This is done by traversing in a for loop; as the code below shows, timer callbacks are stored in a minimum-heap data structure, and the loop exits when the heap is empty or when the earliest timer's threshold has not yet been reached.

Before a timer callback executes, the timer is removed; if repeat is set, it is added back into the minimum heap, and then the callback is executed.

The specific code is as follows:

void uv__run_timers(uv_loop_t* loop) {
  struct heap_node* heap_node;
  uv_timer_t* handle;

  for (;;) {
    heap_node = heap_min(timer_heap(loop));
    if (heap_node == NULL)
      break;

    handle = container_of(heap_node, uv_timer_t, heap_node);
    if (handle->timeout > loop->time)
      break;

    uv_timer_stop(handle);
    uv_timer_again(handle);
    handle->timer_cb(handle);
  }
}
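The repeat behaviour can be observed through the public timer API. In the illustrative sketch below, uv_timer_again() keeps putting the timer back into the heap after every callback until uv_timer_stop() is called:

#include <stdio.h>
#include <uv.h>

static int fired = 0;

static void on_tick(uv_timer_t* timer) {
  printf("tick %d\n", ++fired);
  if (fired == 3)
    uv_timer_stop(timer);   /* removes the timer from the heap, so the loop can exit */
}

int main(void) {
  uv_loop_t* loop = uv_default_loop();
  uv_timer_t timer;
  uv_timer_init(loop, &timer);
  /* first callback after 100 ms, then repeated every 100 ms (re-added to the heap) */
  uv_timer_start(&timer, on_tick, 100, 100);
  return uv_run(loop, UV_RUN_DEFAULT);
}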

uv__run_pending

Iterates through all the callback functions stored in pending_queue. It returns 0 when pending_queue is empty; otherwise it returns 1 after executing the callbacks in pending_queue.

The code is as follows:

static int uv__run_pending(uv_loop_t* loop) {
  QUEUE* q;
  QUEUE pq;
  uv__io_t* w;

  if (QUEUE_EMPTY(&loop->pending_queue))
    return 0;

  QUEUE_MOVE(&loop->pending_queue, &pq);

  while (!QUEUE_EMPTY(&pq)) {
    q = QUEUE_HEAD(&pq);
    QUEUE_REMOVE(q);
    QUEUE_INIT(q);
    w = QUEUE_DATA(q, uv__io_t, pending_queue);
    w->cb(loop, w, POLLOUT);
  }

  return 1;
}

uv__run_idle / uv__run_prepare / uv__run_check

All three functions are defined through a single macro, UV_LOOP_WATCHER_DEFINE, which can be thought of as a code template, i.e., a function used to define functions. The macro is invoked three times with the name parameter set to prepare, check, and idle, which defines the uv__run_prepare, uv__run_check, and uv__run_idle functions.

Their execution logic is therefore the same: each loops over the queue loop->name##_handles in first-in-first-out order, takes out each object, and executes the corresponding callback.

#define UV_LOOP_WATCHER_DEFINE(name, type)            \
  void uv__run_##name(uv_loop_t* loop) {              \
    uv_##name##_t* h;                                 \
    QUEUE queue;                                      \
    QUEUE* q;                                         \
    QUEUE_MOVE(&loop->name##_handles, &queue);        \
    while (!QUEUE_EMPTY(&queue)) {                    \
      q = QUEUE_HEAD(&queue);                         \
      h = QUEUE_DATA(q, uv_##name##_t, queue);        \
      QUEUE_REMOVE(q);                                \
      QUEUE_INSERT_TAIL(&loop->name##_handles, q);    \
      h->name##_cb(h);                                \
    }                                                 \
  }

UV_LOOP_WATCHER_DEFINE(prepare, PREPARE)
UV_LOOP_WATCHER_DEFINE(check, CHECK)
UV_LOOP_WATCHER_DEFINE(idle, IDLE)
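The same three handle types are available in the public API as uv_idle_t, uv_prepare_t, and uv_check_t. The illustrative sketch below registers one of each; during one loop iteration the callbacks print in the order in which uv_run() calls uv__run_idle(), uv__run_prepare(), uv__io_poll(), and uv__run_check():

#include <stdio.h>
#include <uv.h>

static uv_idle_t idle;
static uv_prepare_t prepare;
static uv_check_t check;

static void on_idle(uv_idle_t* h)       { printf("idle\n"); }
static void on_prepare(uv_prepare_t* h) { printf("prepare\n"); }

static void on_check(uv_check_t* h) {
  printf("check\n");
  /* stop all three after one iteration so uv_run() can return */
  uv_idle_stop(&idle);
  uv_prepare_stop(&prepare);
  uv_check_stop(&check);
}

int main(void) {
  uv_loop_t* loop = uv_default_loop();
  uv_idle_init(loop, &idle);       uv_idle_start(&idle, on_idle);
  uv_prepare_init(loop, &prepare); uv_prepare_start(&prepare, on_prepare);
  uv_check_init(loop, &check);     uv_check_start(&check, on_check);
  return uv_run(loop, UV_RUN_DEFAULT);   /* prints: idle, prepare, check */
}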

uv__io_poll

uv__io_poll is mainly used to poll for I/O operations. The concrete implementation differs by operating system; we take Linux as the example for analysis.

The uv__io_poll function has a lot of source code; its core is two loops. Part of the code is shown below:

void uv__io_poll(uv_loop_t* loop, int timeout) {
  while (!QUEUE_EMPTY(&loop->watcher_queue)) {
    q = QUEUE_HEAD(&loop->watcher_queue);
    QUEUE_REMOVE(q);
    QUEUE_INIT(q);

    w = QUEUE_DATA(q, uv__io_t, watcher_queue);
    e.events = w->pevents;
    e.data.fd = w->fd;

    if (w->events == 0)
      op = EPOLL_CTL_ADD;
    else
      op = EPOLL_CTL_MOD;

    if (epoll_ctl(loop->backend_fd, op, w->fd, &e)) {
      if (errno != EEXIST)
        abort();
      if (epoll_ctl(loop->backend_fd, EPOLL_CTL_MOD, w->fd, &e))
        abort();
    }

    w->events = w->pevents;
  }

  for (;;) {
    ...
    for (i = 0; i < nfds; i++) {
      pe = events + i;
      fd = pe->data.fd;
      w = loop->watchers[fd];

      pe->events &= w->pevents | POLLERR | POLLHUP;
      if (pe->events == POLLERR || pe->events == POLLHUP)
        pe->events |= w->pevents & (POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI);

      if (pe->events != 0) {
        if (w == &loop->signal_io_watcher)
          have_signals = 1;
        else
          w->cb(loop, w, pe->events);
        nevents++;
      }
    }

    if (have_signals != 0)
      loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN);
    ...
  }
  ...
}

In the for loop, the file descriptors that epoll reports as ready are fetched first and their count is assigned to nfds; the loop then iterates over these nfds entries and executes the corresponding callbacks.
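From user code this polling machinery is reached through uv_poll_t. The illustrative sketch below (Unix only, error handling omitted) watches the read end of a pipe: uv_poll_start() puts the watcher on the watcher_queue shown above, uv__io_poll() registers the descriptor with epoll, and the callback fires once epoll reports it readable:

#include <stdio.h>
#include <unistd.h>
#include <uv.h>

static int fds[2];   /* fds[0] = read end, fds[1] = write end of a pipe */

static void on_readable(uv_poll_t* handle, int status, int events) {
  char buf[16];
  ssize_t n = read(fds[0], buf, sizeof(buf));
  printf("poll callback: status = %d, events = %d, read %zd bytes\n", status, events, n);
  uv_poll_stop(handle);   /* last active handle stops, so uv_run() returns */
}

int main(void) {
  pipe(fds);
  write(fds[1], "ping", 4);   /* make the read end readable immediately */

  uv_loop_t* loop = uv_default_loop();
  uv_poll_t poller;
  uv_poll_init(loop, &poller, fds[0]);
  uv_poll_start(&poller, UV_READABLE, on_readable);

  uv_run(loop, UV_RUN_DEFAULT);
  close(fds[0]);
  close(fds[1]);
  return 0;
}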

uv__run_closing_handles

Iterates through the queue of handles waiting to be closed, closes handles such as stream, tcp, and udp, and then calls each handle's close_cb. The code is as follows:

static void uv__run_closing_handles(uv_loop_t* loop) {
  uv_handle_t* p;
  uv_handle_t* q;

  p = loop->closing_handles;
  loop->closing_handles = NULL;

  while (p) {
    q = p->next_closing;
    uv__finish_close(p);
    p = q;
  }
}
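From the public API, handles enter this queue through uv_close(). The illustrative sketch below closes a timer handle from its own callback; the close_cb then runs in uv__run_closing_handles at the end of that loop iteration:

#include <stdio.h>
#include <uv.h>

static void on_close(uv_handle_t* handle) {
  printf("handle closed\n");   /* called from uv__run_closing_handles */
}

static void on_timer(uv_timer_t* timer) {
  printf("timer fired, closing the handle\n");
  uv_close((uv_handle_t*)timer, on_close);   /* puts the handle on closing_handles */
}

int main(void) {
  uv_loop_t* loop = uv_default_loop();
  uv_timer_t timer;
  uv_timer_init(loop, &timer);
  uv_timer_start(&timer, on_timer, 10, 0);   /* fire once after 10 ms */
  return uv_run(loop, UV_RUN_DEFAULT);       /* returns once the handle is closed */
}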

process.nextTick and Promise

process.nextTick and Promise are both asynchronous APIs whose callbacks are not scheduled by libuv; Node.js runs them in between the phases of the event loop described above. So when we use these two asynchronous APIs, we should be aware that executing long tasks or recursion in the callbacks passed to them will block event polling and starve its I/O callbacks.

The following code is an example in which the callback passed to fs.readFile can never execute, because traverse calls process.nextTick recursively.

const fs = require('fs')

fs.readFile('config.json', (err, data) => {
  ...   // never runs: the nextTick queue is drained before libuv gets to poll
})

const traverse = () => {
  process.nextTick(traverse)   // each tick schedules another, so the queue never empties
}
traverse()

To solve this problem, you can use setImmediate instead, because setImmediate callbacks are executed as part of event polling (in the check phase), so each round of the recursion gives the event loop a chance to run other callbacks. The process.nextTick task queue also has higher priority than the Promise task queue; the reason can be seen in the following code:

function processTicksAndRejections() {
  let tock;
  do {
    while (tock = queue.shift()) {
      const asyncId = tock[async_id_symbol];
      emitBefore(asyncId, tock[trigger_async_id_symbol], tock);

      try {
        const callback = tock.callback;
        if (tock.args === undefined) {
          callback();
        } else {
          const args = tock.args;
          switch (args.length) {
            case 1: callback(args[0]); break;
            case 2: callback(args[0], args[1]); break;
            case 3: callback(args[0], args[1], args[2]); break;
            case 4: callback(args[0], args[1], args[2], args[3]); break;
            default: callback(...args);
          }
        }
      } finally {
        if (destroyHooksExist())
          emitDestroy(asyncId);
      }

      emitAfter(asyncId);
    }
    runMicrotasks();
  } while (!queue.isEmpty() || processPromiseRejections());

  setHasTickScheduled(false);
  setHasRejectionToWarn(false);
}

As you can see in processTicksAndRejections(), the while loop first drains the callbacks in queue, which is the queue that process.nextTick adds callbacks to; only when that loop finishes is runMicrotasks() called to execute the Promise callbacks.
