Example Analysis of Cluster in Node

2025-01-19 · From: SLTechnology News & Howtos



This article walks through the cluster module in Node with examples. It is shared here as a practical reference.

I. Introduction

Node introduced the cluster module in v0.8 to address the utilization of multicore CPUs, and it also provides a fairly complete API for managing process robustness.

The cluster module creates child processes by calling a fork method, the same underlying mechanism as fork in child_process. The module uses the classic master-worker model: cluster creates a master process and then forks as many child processes as you specify. You can use the cluster.isMaster property to determine whether the current process is the master or a worker. All child processes are managed by the master process, which is not responsible for handling specific tasks but for scheduling and management.

The cluster module uses built-in load balancing to spread pressure across processes, based on the Round-robin algorithm. Under the Round-robin scheduling policy, the master accept()s all incoming connection requests and then dispatches each TCP connection to a chosen worker process (which still communicates with the master through IPC). The official usage example is as follows:

```javascript
const cluster = require('cluster');
const cpuNums = require('os').cpus().length;
const http = require('http');

if (cluster.isMaster) {
  for (let i = 0; i < cpuNums; i++) {
    cluster.fork();
  }
  // listen for child process exits
  cluster.on('exit', (worker, code, signal) => {
    console.log('worker process died, id', worker.process.pid);
  });
} else {
  // mark the child process name
  process.title = `child process ${process.pid}`;
  // workers can share the same TCP connection; here it is an HTTP server
  http.createServer((req, res) => {
    res.end(`response from worker ${process.pid}`);
  }).listen(3000);
  console.log(`Worker ${process.pid} started`);
}
```

In fact, the cluster module is a combination of the child_process and net modules. When cluster starts, it starts a TCP server internally, and when cluster.fork() creates a child process, it sends the file descriptor of that TCP server socket to the worker. If a worker process was created by cluster.fork(), NODE_UNIQUE_ID exists in its environment variables. When the worker calls listen() on a network port, it retrieves that file descriptor and reuses the port (via SO_REUSEADDR), which is how multiple child processes share the same port.

II. Cluster events

Fork: triggered after a worker process has been forked.

Online: after a worker process is forked, it actively sends an online message to the master; when the master receives the message, this event is triggered.

Listening: after calling listen() (sharing the server socket) in the worker process, the worker sends a listening message to the master; when the master receives the message, this event is triggered.

Disconnect: triggered when the IPC channel between the master and a worker process is disconnected.

Exit: triggered when a worker process exits.

Setup: triggered after cluster.setupMaster() is executed.

Most of these events correspond to events of the child_process module and are built on top of inter-process message passing.

```javascript
cluster.on('fork', () => { console.log('fork event...'); });
cluster.on('online', () => { console.log('online event...'); });
cluster.on('listening', () => { console.log('listening event...'); });
cluster.on('disconnect', () => { console.log('disconnect event...'); });
cluster.on('exit', () => { console.log('exit event...'); });
cluster.on('setup', () => { console.log('setup event...'); });
```

III. Master communicates with worker

As you can see from the above, the master process creates worker processes through cluster.fork(). Internally, cluster.fork() creates child processes via child_process.fork(); in other words, master and workers are parent and child processes, and like processes created with child_process, they communicate through an IPC channel.

The full name of IPC is Inter-Process Communication. Its purpose is to let different processes access resources and coordinate work with each other. The IPC channel in Node is implemented with pipes provided by libuv: named pipes on Windows, and Unix Domain Sockets on *nix systems. At the application layer, inter-process communication is exposed only as the simple message event and send method, which makes it very easy to use.

Before actually creating the child process, the parent process creates the IPC channel and starts listening on it; only then does it create the child process and tell it the file descriptor of the IPC channel through the environment variable NODE_CHANNEL_FD. During startup, the child process connects to the existing IPC channel using that file descriptor, completing the connection between parent and child.

Once the connection is established, parent and child can communicate freely. Because IPC channels are created with named pipes or Domain Sockets, they behave much like network sockets and support two-way communication. The difference is that they complete the communication inside the system kernel without going through the actual network layer, which is very efficient. In Node, the IPC channel is abstracted as a Stream object: calling send writes data (similar to write), and received messages are delivered to the application layer via the message event (similar to data).

While the server instance is being created, the master and worker processes communicate through the IPC channel. Will that interfere with our development, for example by flooding us with messages we don't actually care about? The answer is no. So how is that achieved?

Node supports sending handles between processes: in addition to data, the send method can send a handle through IPC as its second parameter, as shown below.

child.send(message[, sendHandle])

A handle is a reference that identifies a resource; internally it contains a file descriptor pointing to the object. For example, a handle can identify a server socket object, a client socket object, a UDP socket, a pipe, and so on. So is there any difference between sending a handle and sending the server object itself to the child process? Does it really send the server object?

In fact, before sending the message into the IPC pipe, the send() method assembles it into two objects: one parameter is the handle, the other is the message. The message parameter looks like this:

{ cmd: 'NODE_HANDLE', type: 'net.Server', msg: message }

What is sent into the IPC pipe is actually the file descriptor of the handle, an integer value. The message object is serialized with JSON.stringify and written to the IPC pipe as a string. The child process, connected to the IPC channel, reads the message sent by the parent, restores the string to an object with JSON.parse, and then triggers the message event to pass the message body to the application layer. In this process the message object is also filtered: if the value of message.cmd is prefixed with NODE_, an internal internalMessage event is emitted instead; if the value of message.cmd is NODE_HANDLE, the message.type value is taken out and, together with the received file descriptor, used to restore a corresponding object.
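The serialization step can be illustrated directly. The snippet below (values are illustrative) round-trips a message the way the channel does with JSON.stringify/JSON.parse and applies the NODE_ prefix check described above:

```javascript
// Sketch of the wire format: the payload is JSON-serialized into the pipe,
// parsed on the other side, and cmd values prefixed with NODE_ are routed
// to the internal internalMessage event instead of the public 'message' event.
const message = { cmd: 'NODE_HANDLE', type: 'net.Server', msg: 'hello' };

const wire = JSON.stringify(message); // what actually enters the pipe
const parsed = JSON.parse(wire);      // what the other side restores

const isInternal = typeof parsed.cmd === 'string' && parsed.cmd.startsWith('NODE_');
console.log('internal message?', isInternal); // -> internal message? true
```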

In cluster, take the worker process telling the master process to create a server instance as an example. The worker pseudo code is as follows:

```javascript
// worker process (pseudo code; `data` stands for the payload being sent)
const message = { cmd: 'NODE_CLUSTER', type: 'net.Server', msg: data };
process.send(message);
```

The master pseudo code is as follows:

```javascript
worker.process.on('internalMessage', fn);
```

IV. How to realize port sharing

In the earlier example, the servers created in multiple workers all listen on the same port 3000. Normally, when multiple processes listen on the same port, the system reports an EADDRINUSE exception. Why is cluster fine?

Because the file descriptors of the TCP server sockets differ between independently started processes, listening on the same port throws an exception. But for servers restored from a handle sent via send(), the file descriptor is the same, so listening on the same port does not raise an exception.

It should be noted that when multiple processes listen on the same port, the shared file descriptor can only be used by one process at a time. In other words, when a network request reaches the server, only one lucky process grabs the connection and gets to serve the request; the processes compete for connections preemptively.

V. How to distribute requests to multiple workers

Whenever a worker process creates a server instance to listen for requests, it registers with the master through the IPC channel. When a client request arrives, the master is responsible for forwarding it to the appropriate worker.

Which worker is it forwarded to? This is determined by the forwarding policy, which can be set through the environment variable NODE_CLUSTER_SCHED_POLICY or the cluster.schedulingPolicy property. The default forwarding policy is round-robin (SCHED_RR).

When a client request arrives, the master polls the worker list, finds the first available worker, and forwards the request to it.

VI. Working principle of pm2

Pm2 is a node process management tool that simplifies many tedious tasks of managing node applications, such as performance monitoring, automatic restart, and load balancing.

Pm2 itself is built on top of the cluster module. In this section we focus on pm2's Satan process, the God daemon, and the remote procedure calls (RPC) between the two.

Satan refers mainly to the fallen angel of the Bible, regarded as the source of evil and darkness, standing in opposition to the power of God.

Among them, Satan.js provides methods such as exit and kill, while God.js is responsible for keeping processes running normally. The God process keeps running after startup; it is equivalent to the Master process in cluster and maintains the worker processes.

RPC (Remote Procedure Call Protocol) refers to remote procedure calls: given two servers A and B, an application deployed on server A wants to call a function or method provided by an application on server B. Because they are not in the same memory space, the call cannot be made directly; the semantics of the call and the call's data must be conveyed over the network. Method calls between different processes on the same machine also fall within the scope of RPC. The execution process is as follows.

The satan program executes once for each command-line input; if the God process is not running, it is started first. Then, according to the instruction, Satan calls the corresponding method in God through RPC to execute the corresponding logic.

Taking pm2 start app.js -i 4 as an example, when God executes for the first time it configures cluster and listens for cluster events:

```javascript
// configure cluster
cluster.setupMaster({
  exec: path.resolve(path.dirname(module.filename), 'ProcessContainer.js')
});

// listen for cluster events
(function initEngine() {
  cluster.on('online', function (clu) {
    // the worker process is up and running
    God.clusters_db[clu.pm_id].status = 'online';
  });
  // killing the pid from the command line triggers the exit event;
  // process.kill does not trigger exit
  cluster.on('exit', function (clu, code, signal) {
    // restart the process; if restarts are too frequent, mark it as stopped
    God.clusters_db[clu.pm_id].status = 'starting';
    // restart logic ...
  });
})();
```

After God starts, the RPC link between Satan and God is established. Satan then calls the prepare method, which calls cluster.fork to complete the cluster startup.

```javascript
God.prepare = function (opts, cb) {
  // ...
  return execute(opts, cb);
};

function execute(env, cb) {
  // ...
  var clu = cluster.fork(env);
  God.clusters_db[id] = clu;
  clu.once('online', function () {
    God.clusters_db[id].status = 'online';
    if (cb) return cb(null, clu);
    return true;
  });
  return clu;
}
```

Thank you for reading! This concludes the sample analysis of clusters in Node; hopefully the content above has been helpful.
