How to use Node.js to improve work efficiency

2025-04-03 Update From: SLTechnology News&Howtos


This article shows how Node.js can be used to improve day-to-day work efficiency. The walkthrough below is straightforward and easy to follow; I hope it helps clear up any doubts you have about the topic.

A project at work depends on an external file maintained by another team. That team builds it with Jenkins and pushes the build artifact to [Amazon S3](aws.amazon.com/pm/serv-s3/ …). We then have to download the file from S3 by hand and copy it into the project — a process that could clearly be automated.

There is also a more serious problem: the path of the artifact we need contains what looks like an extra `/`, which is actually a folder literally named `/`. The Windows tool S3 Browser identifies it correctly, but on macOS — presumably because `/` is treated as a path separator — several GUI tools fail to recognize the directory properly. As a result, macOS developers have to spin up a Windows virtual machine just to download the artifact, which is wasteful and pointless.
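To see why a folder named `/` confuses path-based tools, consider what happens when a client splits an S3 key on `/`: the double slash produces an empty segment. The key below is purely illustrative, not the team's real bucket layout:

```javascript
// A hypothetical S3 key containing a folder literally named "/".
// Splitting on "/" yields an empty segment, which path-oriented
// GUI clients can misinterpret as a malformed directory.
const key = "team-builds//latest/bundle.tar";
const segments = key.split("/");
console.log(segments); // [ 'team-builds', '', 'latest', 'bundle.tar' ]
```

The SDK, by contrast, treats the key as an opaque string, which is why a script sidesteps the problem entirely.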

Since Amazon provides API access, I figured a script could handle the download and update.

Process overview

Without the script:

Jenkins build → find the artifact in S3 by hand → download on Windows → copy into the project

With the script:

Jenkins build → artifact name → run the script

The script handles everything directly, saving the manual steps, and the `/` bug never comes up.

Connecting to AWS

Here we use the aws-sdk package provided by Amazon: create an S3 client and pass in an accessKeyId and secretAccessKey to connect:

```javascript
import S3 from "aws-sdk/clients/s3";

const s3 = new S3({ credentials: { accessKeyId, secretAccessKey } });
```

Downloading the file

aws-sdk provides APIs for creating, deleting, updating, and querying buckets and files. Since we can obtain the artifact's file name from the Jenkins build in advance, we only need to download the file by its name and location:

```javascript
const rs = s3
  .getObject({ Bucket: "your bucket name", Key: "file dir + path" })
  .createReadStream();
```

Bucket is the bucket where the file is stored, and Key is the file's path inside S3 — effectively the directory name plus the file name.

This gives us a ReadStream, which we can pipe straight into a local file with Node.js:

```javascript
const ws = fs.createWriteStream(path.join(__dirname, outputfilename));
rs.pipe(ws);
```

Decompression

For decompression we use the node-tar package, installed directly:

```
npm install tar
```

The extract command's alias is x, so we use the tar.x method directly. It accepts a ReadStream, decompresses the data on the fly, and writes the result to disk, so we can pipe the ReadStream straight into tar.x without ever saving the intermediate .tar file:

```diff
- const ws = fs.createWriteStream(path.join(__dirname, outputfilename));
- rs.pipe(ws);
+ rs.pipe(tar.x({ C: path.join(__dirname, outputfilename) }));
```

The pipe call returns the destination stream, so we can listen for its finish event to run the follow-up steps:

```javascript
const s = rs.pipe(tar.x({ C: path.join(__dirname, outputfilename) }));
s.on("finish", () => {
  // do something...
});
```

Flattening the directory

The extracted output contains subfolders, but we need all of the files at the outermost level, so we flatten the directory.

Reading is done with the fs APIs. fs comes in synchronous and asynchronous flavors: synchronous function names end in Sync, while the asynchronous ones default to error-first callback style. Promise-style asynchronous equivalents live under fs/promises; use whichever fits your needs.

Since our directory is only one level deep, a single-layer flatten is enough; with deeper nesting, the same logic can be written recursively.

The single-layer flatten used here:

```javascript
async function flatten(dir) {
  const fileAndDirs = await fsp.readdir(dir);
  const dirs = fileAndDirs.filter((i) =>
    fs.lstatSync(path.join(dir, i)).isDirectory()
  );
  for (const innerDir of dirs) {
    const innerFile = await fsp.readdir(path.join(dir, innerDir));
    await Promise.all(
      innerFile
        .filter((item) => fs.lstatSync(path.join(dir, innerDir, item)).isFile())
        .map((item) =>
          fsp.rename(path.join(dir, innerDir, item), path.join(dir, item))
        )
    );
    remove(path.join(dir, innerDir)); // remove(): e.g. fs-extra's remove, deletes the emptied folder
  }
}
```

Copy to the target location

Then copy everything into our project directory. A single copyFile call per file is enough; files we don't need are filtered out with an exclude regular-expression blacklist:

```javascript
async function copy(from, to) {
  const files = await fsp.readdir(from);
  await Promise.all(
    files
      .filter((item) => !exclude.test(item))
      .map((item) => fsp.copyFile(path.join(from, item), path.join(to, item)))
  );
}
```

Configuration file

In real use, configuration should be kept separate from the code: each user supplies their own accessKeyId and secretAccessKey, so these belong in a standalone config file that the user creates locally, with the main program reading the relevant settings from it:

```javascript
// config.js
module.exports = {
  s3: {
    accessKeyId: "accessKeyId",
    secretAccessKey: "secretAccessKey",
  },
};

// main.js
if (!fs.existsSync("config.js")) {
  console.error("please create a config file");
  return;
}
const config = require(path.resolve(__dirname, "config.js"));
```

Passing parameters

The file name to download changes from build to build, so it is passed in on each invocation as a command-line argument.

Node.js exposes command-line arguments through process.argv, an array whose first element is the path of the node binary and whose second is the path of the script being executed; custom arguments start at process.argv[2]. For complex command-line needs there are argument-parsing libraries such as commander, but since this example needs only one parameter, we read it directly:

```javascript
const filename = process.argv[2];
if (!filename) {
  console.error("please run script with params");
  return;
}
```

At this point, a usable command-line tool is complete.

That is the full content of "How to use Node.js to improve work efficiency". Thank you for reading!
