How to use the stream module in Node.js

This article introduces how to use the stream module in Node.js. It is intended as a practical reference; I hope you learn something useful from it.

Types of Node.js streams

The Node.js stream module provides four types of streams:

Readable stream (Readable Streams)

Writable stream (Writable Streams)

Duplex stream (Duplex Streams)

Transform stream (Transform Streams)

Please see the Node.js official documentation for more details.

https://nodejs.org/api/stream.html#stream_types_of_streams

Let's look at each type of stream at a high level.

Readable stream

Readable streams can read data from a specific data source, most commonly from a file system. Other common uses of readable streams in Node.js applications are:

process.stdin - user input is read in terminal applications through standard input (see the sketch after this list).

http.IncomingMessage - reads the incoming request body in an HTTP server, or the server's response in an HTTP client.
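
As a quick illustration (a minimal sketch, not from the original article), here is one way to consume process.stdin as a readable stream by listening for data events:

// Minimal sketch: process.stdin is a readable stream.
// Run it, type a line, and press Enter; Ctrl+D (Ctrl+Z on Windows) ends the input.
process.stdin.setEncoding('utf8');

process.stdin.on('data', (chunk) => {
  console.log(`Read ${chunk.length} characters: ${chunk.trim()}`);
});

process.stdin.on('end', () => {
  console.log('stdin closed');
});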

Writable stream

You can use writable streams to write data from an application to a specific destination, such as a file.

process.stdout can be used to write data to standard output; it is used internally by console.log. A small sketch follows.
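
As a small sketch (not from the original article, and the path ./out.txt is purely illustrative), here is how an application might write to these writable streams:

const fs = require('fs');

// process.stdout is itself a writable stream.
process.stdout.write('Hello from process.stdout\n');

// fs.createWriteStream returns a writable stream backed by a file.
const fileStream = fs.createWriteStream('./out.txt');
fileStream.write('Hello, file\n');
fileStream.end(); // signal that no more data will be written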

Then there are duplex and transform streams, which can be thought of as hybrid stream types built on top of readable and writable streams.

Duplex stream

A duplex stream is a combination of a readable stream and a writable stream: it can both write data to a destination and read data from a source. The most common example of a duplex stream is net.Socket, which is used to read data from and write data to a socket.

Importantly, the readable and writable sides of a duplex stream operate independently of each other; data does not flow from one side to the other. A minimal socket sketch follows.
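
Here is a minimal sketch (not from the original article; the port number 8124 is arbitrary) of net.Socket acting as a duplex stream:

const net = require('net');

// The server reads from the socket (readable side) and writes an echo back
// on the same socket (writable side); the two directions are independent.
const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    socket.write(`echo: ${chunk.toString()}`);
  });
});

server.listen(8124, () => {
  const client = net.connect(8124, () => client.write('hello\n'));
  client.on('data', (chunk) => {
    console.log(chunk.toString()); // "echo: hello"
    client.end();
    server.close();
  });
});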

Transform stream

A transform stream is similar to a duplex stream, but in a transform stream the readable side and the writable side are linked: the data read out is derived from the data written in.

The crypto.Cipher class is a good example of such a stream: an application can write plaintext to the writable side of a crypto.Cipher stream and read the encrypted ciphertext from its readable side. This type of stream is called a transform stream because of its transforming nature.

Note: another transform stream is stream.PassThrough, which passes data from the writable side to the readable side without any transformation. That may sound redundant, but PassThrough streams are very useful for building custom streams and stream pipelines (for example, creating multiple copies of a stream's data, as sketched below).
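
The following sketch (not from the original article; the file paths are illustrative) uses PassThrough to fan one readable stream out into two copies:

const fs = require('fs');
const { PassThrough } = require('stream');

const source = fs.createReadStream('./myfile');

const copyA = new PassThrough();
const copyB = new PassThrough();

// Every chunk from the source is written to both PassThrough streams.
source.on('data', (chunk) => {
  copyA.write(chunk);
  copyB.write(chunk);
});
source.on('end', () => {
  copyA.end();
  copyB.end();
});

copyA.pipe(fs.createWriteStream('./copy-a.txt'));
copyB.pipe(fs.createWriteStream('./copy-b.txt'));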

Read data from a readable Node.js stream

Once a readable stream is connected to a source that produces data, such as a file, the data can be read through the stream in several ways.

First, create a simple text file called myfile that is 85 bytes in size and contains the following string:

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur nec mauris turpis.

Now, let's look at two different ways to read data from a readable stream.

1. Listen for data events

The most common way to read data from a readable stream is to listen for data events emitted by the stream. The following code demonstrates this approach:

const fs = require('fs');

const readable = fs.createReadStream('./myfile', { highWaterMark: 20 });

readable.on('data', (chunk) => {
  console.log(`Read ${chunk.length} bytes\n"${chunk.toString()}"\n`);
});

The highWaterMark property, passed as an option to fs.createReadStream, determines how much data is buffered inside the stream before it is flushed to the reading mechanism (in this case, our data handler). By default, readable fs streams have a highWaterMark of 64 KB. Here we deliberately override it with 20 bytes in order to trigger multiple data events.

If you run the above program, it will read 85 bytes from myfile in five iterations. You will see the following output in the console:

Read 20 bytes
"Lorem ipsum dolor si"

Read 20 bytes
"t amet, consectetur "

Read 20 bytes
"adipiscing elit. Cur"

Read 20 bytes
"abitur nec mauris tu"

Read 5 bytes
"rpis."

2. Use an asynchronous iterator

Another way to read data from a readable stream is to use an asynchronous iterator:

const fs = require('fs');

const readable = fs.createReadStream('./myfile', { highWaterMark: 20 });

(async () => {
  for await (const chunk of readable) {
    console.log(`Read ${chunk.length} bytes\n"${chunk.toString()}"\n`);
  }
})();

If you run this program, you will get the same output as the previous example.

Status of readable Node.js streams

When a listener is attached to a readable stream's data event, the stream switches to "flowing" mode (unless it is explicitly paused). You can check whether a stream is flowing through the stream object's readableFlowing property.

We can modify the previous example slightly to demonstrate this with the data handler:

const fs = require('fs');

const readable = fs.createReadStream('./myfile', { highWaterMark: 20 });

let bytesRead = 0;

console.log(`before attaching 'data' handler. is flowing: ${readable.readableFlowing}`);

readable.on('data', (chunk) => {
  console.log(`Read ${chunk.length} bytes`);
  bytesRead += chunk.length;

  // stop reading after 60 bytes have been read from the readable stream
  if (bytesRead === 60) {
    readable.pause();
    console.log(`after pause() call. is flowing: ${readable.readableFlowing}`);

    // continue reading after waiting for 1 second
    setTimeout(() => {
      readable.resume();
      console.log(`after resume() call. is flowing: ${readable.readableFlowing}`);
    }, 1000);
  }
});

console.log(`after attaching 'data' handler. is flowing: ${readable.readableFlowing}`);

In this example, we read myfile through a readable stream, but after reading 60 bytes we temporarily pause the stream for 1 second. We also print the value of the readableFlowing property at different points to see how it changes.

If you run the above program, you will get the following output:

before attaching 'data' handler. is flowing: null
after attaching 'data' handler. is flowing: true
Read 20 bytes
Read 20 bytes
Read 20 bytes
after pause() call. is flowing: false
after resume() call. is flowing: true
Read 20 bytes
Read 5 bytes

We can explain the output as follows:

When the program starts, readableFlowing is null because we have not yet provided any mechanism for consuming the stream.

After the data handler is attached, the readable stream switches to "flowing" mode and readableFlowing becomes true.

Once 60 bytes have been read, the stream is paused by calling pause(), and readableFlowing switches to false.

After waiting for 1 second, calling resume() switches the stream back to "flowing" mode and readableFlowing changes to true. The rest of the file's content then flows through the stream.

Processing large amounts of data through Node.js streams

Because of streaming, applications do not need to keep large binary objects in memory: small blocks of data can be received and processed.

In this section, let's combine different streams to build a real application that can handle large amounts of data. We will write a small utility that generates the SHA-256 hash of a given file.

But first, we need to create a large fake 4GB file to test. You can do this with a simple shell command:

On macOS: mkfile -n 4g 4gb_file

On Linux: xfs_mkfile 4096m 4gb_file

After we have created the fake file 4gb_file, let's generate its SHA-256 hash without using the stream module.

Const fs = require ("fs"); const crypto = require ("crypto"); fs.readFile (". / 4gb_file", (readErr, data) = > {if (readErr) return console.log (readErr) const hash = crypto.createHash ("sha256") .update (data) .digest ("base64"); fs.writeFile (". / checksum.txt", hash, (writeErr) = > {writeErr & console.error (err)});})

If you run the above code, you may get the following error:

RangeError [ERR_FS_FILE_TOO_LARGE]: File size (4294967296) is greater than 2 GB
    at FSReqCallback.readFileAfterStat [as oncomplete] (fs.js:294:11) {
  code: 'ERR_FS_FILE_TOO_LARGE'
}

The above error occurs because the JavaScript runtime cannot handle arbitrarily large buffers. The maximum buffer size the runtime can handle depends on your operating system architecture. You can check the maximum buffer size for your platform with the buffer.constants.MAX_LENGTH constant from the built-in buffer module, as shown below.
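
For reference, here is a quick way to check that limit on your machine (the exact value depends on your Node.js version and architecture):

const buffer = require('buffer');

// Maximum number of bytes a single Buffer can hold on this platform.
console.log(buffer.constants.MAX_LENGTH);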

Even if the above error does not occur, keeping large files in memory is a problem. The physical memory available limits how much memory our application can use, and high memory usage can also make the application perform poorly in terms of CPU usage, because garbage collection becomes expensive.

Using pipeline() to reduce the application's memory footprint

Now, let's look at how to modify the application to use streams and avoid this error:

Const fs = require ("fs"); const crypto = require ("crypto"); const {pipeline} = require ("stream"); const hashStream = crypto.createHash ("sha256"); hashStream.setEncoding ('base64') const inputStream = fs.createReadStream (". / 4gb_file"); const outputStream = fs.createWriteStream (". / checksum.txt"); pipeline (inputStream, hashStream, outputStream, (err) = > {err & console.error (err)})

In this example, we use the streaming interface provided by crypto.createHash. It returns a transform stream object, hashStream, that can generate a hash for arbitrarily large files.

To feed the contents of the file into this transform stream, we create a readable stream, inputStream, for 4gb_file using fs.createReadStream. The output of the hashStream transform stream is then passed to the writable stream outputStream, which fs.createWriteStream creates for checksum.txt.

If you run the above program, you will see the SHA-256 hash of the 4GB file in the checksum.txt file.

Comparing pipeline() and pipe() for connecting streams

In the previous example, we used the pipeline function to connect multiple streams. Another common method is to use the .pipe() function, as follows:

inputStream
  .pipe(hashStream)
  .pipe(outputStream);

However, there are several reasons why using .pipe() in production applications is not recommended. If one of the streams closes or throws an error, pipe() does not automatically destroy the connected streams, which can cause memory leaks in the application. Likewise, pipe() does not automatically forward errors across streams to a single place for handling.

pipeline() was introduced to address these problems, so it is recommended to use pipeline() instead of pipe() to connect streams. We can rewrite the above pipe() example to use the pipeline() function as follows:

pipeline(inputStream, hashStream, outputStream, (err) => {
  err && console.error(err);
});

pipeline() accepts a callback function as its last argument. Any error from the connected streams triggers this callback, so errors are easy to handle in one place.
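
As a related sketch (not from the original article), newer Node.js versions also expose a promise-based pipeline in the stream/promises module, which works well with async/await; availability depends on your Node.js version:

const fs = require("fs");
const crypto = require("crypto");
const { pipeline } = require("stream/promises");

(async () => {
  const hashStream = crypto.createHash("sha256");
  hashStream.setEncoding("base64");

  try {
    await pipeline(
      fs.createReadStream("./4gb_file"),
      hashStream,
      fs.createWriteStream("./checksum.txt")
    );
  } catch (err) {
    // Any error from the connected streams surfaces here.
    console.error(err);
  }
})();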

Summary: using Node.js streams to reduce memory and improve performance

Using streams in Node.js helps us build high-performance applications that can handle large data.

In this article, we cover:

The four types of Node.js streams (readable, writable, duplex, and transform streams).

How to read data from a readable stream by listening for data events or using an asynchronous iterator.

How to reduce memory footprint by connecting multiple streams with pipeline().

A short warning: you probably won't encounter many scenarios where streams are strictly necessary, and stream-based solutions add complexity to your application. Make sure the benefits of using streams outweigh the complexity they bring.

Thank you for reading. I hope this article on how to use the stream module in Node.js has been helpful, and that you will keep following us for more related content.
