How to use a container-based one-stop command-line tool chain in Node.js

2025-02-23 Update From: SLTechnology News & Howtos


Shulou (Shulou.com) 06/03 report

This article introduces how to use a container-based one-stop command-line tool chain in Node.js. It should serve as a useful reference for interested readers.

Background and abstract

Due to rapid growth in the number of projects, Getui ran into the following problems while building Node.js-based micro-services:

Each new project requires installing a set of dependencies that are broadly similar across projects but subtly different.

Each new project needs similar configuration (tsconfig, lint rules, etc.).

The local Mac environment is inconsistent with the Linux environment inside the online Docker images (especially when C++ dependencies are involved).

To solve these problems, we developed an internal command-line tool that standardizes project initialization, simplifies configuration down to (or close to) zero, and provides a consistent build and runtime environment based on Docker.

CLI: init, build, test & pack

When we create a new Node.js project, we usually:

Install many development dependencies: TypeScript, Jest, TSLint, benchmark, typedoc, etc.

Configure tsconfig, lint rules, .prettierrc, etc.

Install many runtime dependencies: koa, lodash, sequelize, ioredis, zipkin, node-fetch, etc.

Initialize the directory structure.

Configure the CI script.

Usually, we copy an existing project and modify it, which produces a large number of similar-but-not-identical projects: ten projects may mean ten different configuration combinations. For developers working across multiple projects at once, this multiplication of configurations makes their work harder. Moreover, when a security audit finds a vulnerability in some npm package, a developer has to review and fix every project that references that package, one by one.

Within Getui's development scenarios, almost all projects share similar development dependencies and similar configuration, so we wrote an init tool based on commander.js. It opens a command-line wizard that automatically installs dependencies and initializes the project's directory structure and configuration. As a result, project creation and all of its configuration collapse into a handful of scenario-specific templates.
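To make the template idea concrete, here is a hedged sketch of the template-expansion step such an init tool might perform. The real tool is built on commander.js and runs an interactive wizard; the template names, file lists, and the `scaffold` function below are illustrative assumptions, not the actual internal implementation.

```javascript
// Sketch: each template maps to one fixed set of config files, so ten
// projects share one configuration instead of ten slightly different ones.
// Template and file names here are illustrative assumptions.
function scaffold(name, template = 'koa-service') {
  // Config files common to every template.
  const common = ['package.json', 'tsconfig.json', 'tslint.json', '.prettierrc'];
  // Files specific to each project scenario.
  const byTemplate = {
    'koa-service': ['src/app.ts', 'src/routes.ts'],
    library: ['src/index.ts'],
  };
  const files = common.concat(byTemplate[template] || []);
  return files.map((f) => `${name}/${f}`);
}

console.log(scaffold('my-service'));
```

In the real tool, a commander.js `command('init')` would prompt for the scenario, then write these files from the template and install the shared dependency set.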

We then have build, test, and pack commands that host the tsconfig, Jest, and packaging configuration. They automatically call tsc to compile, set up the test environment, run Jest, and produce a standardized package. CI scripts shrink to a few standard lines.

CLI: Docker Build

Before introducing this command, you need a brief understanding of the Getui image system:

As mentioned earlier, we encapsulated most of the dependencies into an npm package. This layer of encapsulation is also reflected in the Getui Docker image system, which can be expressed roughly as the following Dockerfiles:

```
# Dockerfile of the common dependency layer
FROM node:10
RUN mkdir -p /usr/local/lib/webnode/node_modules \
 && cd /usr/local/lib/webnode \
 && npm install webnode
ENV NODE_PATH /usr/local/lib/webnode/node_modules

# Dockerfile of the project
FROM getui/webnode:1.2.3
COPY package*.json ./
RUN npm install
COPY . .
```

When this dependency layer is baked directly into the Docker image, the SIZE of each image is still over 1 GB, but the UNIQUE SIZE of each image is very small: the differing layers amount to only a few megabytes.

A simple comparison: suppose a service consists of 800 MB of common system dependencies + 200 MB of npm dependencies + 1 MB of service code. Since each service repeats its own npm install, 20 services add up to a total UNIQUE SIZE of 800 MB + 200 MB × 20 + 1 MB × 20 = 4.82 GB. With the dependency layer shared, the total UNIQUE SIZE is only 800 MB + 200 MB + 1 MB × 20 = 1.02 GB. Once multiple versions of each application are taken into account, the storage advantage of layered dependency sharing becomes even more pronounced.

At the cost of locking dependency versions and their upgrade cadence, we get:

Fewer possible dependency and version combinations, which simplifies choosing packages, initializing projects, auditing, and shipping security updates.

Significantly faster CI, saving waiting time.

Greatly reduced transmission and storage pressure.

Common dependencies that are used by many projects and are therefore tested more thoroughly.

The webnode docker build command simplifies building the Docker image. It carries a built-in Dockerfile and .dockerignore; when run, it builds the image from these two files plus the current context. The Dockerfile encodes our optimizations and best practices, so developers only need to focus on developing the Node.js project, while the command takes care of file permissions and other details and produces a standardized, optimized Docker image.

Its design objectives are:

Fast: layer dependencies sensibly, make maximal use of the Docker cache mechanism, and trim unnecessary context via .dockerignore, achieving fast build speeds.

Small: design the Docker layers according to change frequency and apply multi-stage builds, minimizing each image's UNIQUE SIZE.

Reproducible: the same content always builds the same result.

Taking node_modules dependency optimization as an example, the following two Dockerfiles behave very differently:

```
# Dockerfile A
FROM getui/webnode:1.2.3
COPY . .
RUN npm install

# Dockerfile B
FROM getui/webnode:1.2.3
COPY package*.json ./
RUN npm install
COPY . .
```

With the former, any change to any file in the project invalidates the npm install cache on the next docker build, forcing a full reinstall; with the latter, npm install is re-run only when package*.json changes. In addition, we precompile package.json to keep only the fields relevant to installation, so that bumping the version field alone does not trigger a reinstall.
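The package.json "precompilation" step can be sketched as follows. The exact field list the real tool keeps is an assumption; the point is that fields which do not affect `npm install` (such as `version` or `scripts`) are stripped before being COPY'd, so editing them never invalidates the install layer.

```javascript
// Sketch: keep only the fields that affect `npm install`, so edits to
// e.g. "version" or "scripts" do not bust the Docker build cache.
// The field list is an assumption, not the real tool's exact behavior.
function pruneForInstall(pkg) {
  const keep = ['name', 'dependencies', 'devDependencies', 'optionalDependencies'];
  const out = {};
  for (const key of keep) {
    if (pkg[key] !== undefined) out[key] = pkg[key];
  }
  return out;
}

const pkg = {
  name: 'svc',
  version: '1.4.2',
  scripts: { test: 'jest' },
  dependencies: { koa: '^2.7.0' },
};
// The pruned copy would be written next to the Dockerfile and COPY'd
// instead of the original package.json.
console.log(JSON.stringify(pruneForInstall(pkg)));
```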

webnode docker build not only helps developers build uniform images, standardize best practices, and save resources; it also spares every developer from having to learn these optimization details, saving time and effort.

CLI: Webnode Docker Start

In the process of local debugging and development, we encountered some problems caused by environmental differences:

The Node.js version in production differs from the one in the local development environment.

Some npm dependencies that contain C++ code have cross-platform problems when built or run.

File permissions configuration and system directory structure are not completely consistent with the online running environment.

The startup initialization process is inconsistent (such as configuring prefetching).

Developers often lack some binary tools locally or have inconsistent versions (such as consul-template, nc, etc.).

Unlike starting the Node.js program directly on the host, this command first builds a Docker image from the current project using the webnode docker build command described above, then starts a container from that image.

Docker can help resolve environmental differences:

It is easy to ship the same Node.js version and other binary dependencies as production.

The initialization process is consistent.

npm dependencies containing C++ code run without trouble.

File permissions and the directory structure match the online running environment.

Debugging a containerized Node.js process changes slightly: you need to expose the Node.js Inspector port, then match Visual Studio Code's localRoot and remoteRoot:

```
WEBNODE_HOST=${WEBNODE_HOST:-127.0.0.1}
WEBNODE_PORT=${WEBNODE_PORT:-3000}
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS \
  -it \
  --rm \
  --network=\"getui-dev\" \
  -p $WEBNODE_HOST:$WEBNODE_PORT:3000 \
  -p 127.0.0.1:9229:9229 \
  -e NODE_FLAGS=--inspect=0.0.0.0:9229 \
  --name $CONTAINER"
docker run \
  $DOCKER_RUN_OPTIONS \
  $DOCKER_IMAGE_TAG
```

```
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach Local WebNode",
      "address": "127.0.0.1",
      "port": 9229,
      "restart": true,
      "protocol": "inspector",
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "YOUR_REMOTE_ROOT",
      "sourceMaps": true
    }
  ]
}
```

Developing CLI tools based on containers

Container-based development brings many benefits. First, distribution is easy: with Docker tags, developers can pin to a patch version, a major version, or a branch, switching versions much the way nvm switches Node.js versions.

Second, CLI scripts do not have to consider cross-platform compatibility issues everywhere, such as:

sed behaves differently on Linux and on macOS, and so on.

Some environments have Python 3, some environments only have Python 2.

All dependencies arrive with the container: concise and efficient.

We also encountered some problems in the development of Docker-based tools:

First, the UID/GID inside and outside the container may differ. If docker run is invoked as a non-root user, files that programs inside the container create in mounted directories end up with owners that do not match the current user.

Docker for Mac has some special behaviors for file permissions, which can be found in: https://docs.docker.com/docker-for-mac/osxfs/#ownership

When the host is Linux, especially in CI, you must handle this UID/GID mismatch. For this case, we chose to override the entrypoint and use gosu to drop privileges:

```
CLI_EXEC_UID=${CLI_EXEC_UID:-0}
CLI_EXEC_GID=${CLI_EXEC_GID:-0}
exec gosu $CLI_EXEC_UID:$CLI_EXEC_GID env "$@"
```

In fact, the daemonless design RedHat uses for container runtimes such as podman is a very good fit for CLI tools: they can run rootless while still respecting the system's permission configuration. However, at the time it was not yet mature and industry adoption was low, so we are still waiting and watching.

Second, docker run is sometimes slow to start. Getui's solution is to start a container once with docker run --detach on first use, and perform all subsequent CLI executions via docker exec. This avoids paying the startup cost on every command and is noticeably faster.

Thank you for reading this article carefully. I hope "How to use a container-based command line tool chain in Node.js" has been helpful to you.
