How to use Docker to deploy front-end applications efficiently

This article walks through deploying a front-end application with Docker efficiently: getting it running first, speeding up image builds with the layer cache and npm ci, shrinking the image with multi-stage builds, and finally offloading static assets to an object storage service (OSS).
Get it running first
First, a brief overview of a typical front-end deployment process (the three steps are also sketched as shell commands after the list):

npm install — install dependencies
npm run build — compile and bundle, generating the static assets
Serve the static assets, e.g. with nginx
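A minimal shell sketch of these three steps, assuming an npm project whose build output lands in ./public (the directory name is an assumption; adjust it to your project):

npm install                      # install dependencies
npm run build                    # compile and bundle into static assets
npx http-server ./public -p 80   # serve the output; nginx is the more common choice in production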
With the deployment process in mind, let's write a simple Dockerfile:
FROM node:10-alpine

# PROJECT_ENV=production marks the production environment; many packages
# behave differently based on this environment variable.
ENV PROJECT_ENV production
# webpack also optimizes the bundle according to NODE_ENV
# (create-react-app sets NODE_ENV itself when building).
ENV NODE_ENV production

WORKDIR /code
ADD . /code
RUN npm install && npm run build && npm install -g http-server
EXPOSE 80

CMD http-server ./public -p 80
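With this Dockerfile in place, a sketch of building and running the container; the image and container name fe-app is hypothetical:

docker build -t fe-app .
docker run -d --name fe-app -p 80:80 fe-app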
The front-end service is now up and running, and you can move on to the other stages of deployment.

In general, the steps below are the job of operations, but it never hurts to expand your knowledge boundaries. The remaining stages are:
Use nginx or traefik as a reverse proxy. My internal cluster uses traefik; for more information, see my article on getting started with traefik.
Use kubernetes or docker compose for container orchestration. My internal cluster uses compose; for more information, see my article on getting started with docker compose.
Use gitlab ci, drone ci, or github actions for automated CI/CD deployment.
At this point the image has two problems that make every deployment slow, which works against rapid delivery of the product, and without rapid delivery there is no agile development:
It takes too long to build an image.
The built image is too large, sometimes exceeding 1 GB.
Make use of the image cache
Note that package.json is relatively stable compared to the project's source files: if no new packages need to be downloaded, there is no need to reinstall dependencies every time the image is rebuilt, which can save roughly half of the npm install time.

For ADD, the cache is reused as long as the checksum of the added files has not changed, so it is a good idea to copy package.json and package-lock.json into the image separately from the source files. As long as no new packages have been added, the dependency-install layer comes straight from the cache.
FROM node:10-alpine
ENV PROJECT_ENV production
ENV NODE_ENV production

# http-server rarely changes, so this layer can also come from the cache
RUN npm install -g http-server

WORKDIR /code
# Add these two files first to make full use of the cache
ADD package.json package-lock.json /code/
RUN npm install --production

ADD . /code
RUN npm run build
EXPOSE 80

CMD http-server ./public -p 80
There are more details about using the cache that deserve special attention. For example, with RUN git clone, the cached layer is reused as long as the command string itself does not change, which can cause problems because the command is not idempotent.
For caching and its possible pitfalls, you can refer to my article on Dockerfile best practices.
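If a cached layer like that has gone stale, one blunt but reliable workaround is to rebuild without the cache at all; the image name fe-app is hypothetical:

docker build --no-cache -t fe-app .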
Optimization in a CI environment
FROM node:10-alpine
ENV PROJECT_ENV production
ENV NODE_ENV production

# http-server rarely changes, so this layer can also come from the cache
RUN npm install -g http-server

WORKDIR /code
# Add these two files first to make full use of the cache
ADD package.json package-lock.json /code/
RUN npm ci

ADD . /code
RUN npm run build
EXPOSE 80

CMD http-server ./public -p 80
The major change for the CI environment is using npm ci instead of npm install: npm ci can cut dependency installation time by nearly half.
$ npm install
added 1154 packages in 60s

$ npm ci
added 1154 packages in 35s
In addition, when package.json and package-lock.json are inconsistent, npm ci aborts with an error, which surfaces unsafe situations early so problems can be caught and fixed as soon as possible.
Multi-stage builds
Thanks to the cache, image builds are now much faster. However, the image itself is still too large, which also slows down deployment, for the following reasons.
Consider the process of each CI/CD deployment (also sketched as shell commands after the list):

Build the image on the build server (Runner).
Push the image to the image registry.
Pull the image on the production server and start the container.
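A rough sketch of those three steps as commands; the registry address and image name below are hypothetical:

# on the build server (Runner)
docker build -t registry.example.com/fe-app:latest .
docker push registry.example.com/fe-app:latest

# on the production server
docker pull registry.example.com/fe-app:latest
docker run -d -p 80:80 registry.example.com/fe-app:latest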
Clearly, an oversized image makes the upload and download in the first two steps inefficient and adds latency to every deployment.

Even in the unlikely case that the build server and the production server sit on the same node and there is no transfer latency, reducing the image size still saves disk space.

The bloat is entirely due to the notoriously large node_modules directory:
[Figure: the size of node_modules]
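You can confirm this locally; fe-app below is a hypothetical image name:

du -sh node_modules      # size of the dependency tree on disk
docker history fe-app    # which layers of the image are the largest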
In the end, however, we only need the static assets produced by the build; the source files and everything under node_modules only add unnecessary bulk and waste.

Here we can use Docker's multi-stage builds to copy out only the compiled output, i.e. the static assets produced by the build, and improve the Dockerfile accordingly.
FROM node:10-alpine as builder
ENV PROJECT_ENV production
ENV NODE_ENV production

WORKDIR /code
ADD package.json package-lock.json /code/
RUN npm ci
ADD . /code
RUN npm run build

# Choose a smaller base image for the final stage
FROM nginx:alpine
COPY --from=builder /code/public /usr/share/nginx/html
At this point the image size has dropped from over 1 GB to around 50 MB. If this deployment only serves a test or multi-branch environment for easier testing, then you are done and the problem is solved.
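You can verify the reduction by listing the images; the tag names are hypothetical:

docker build -t fe-app:multi-stage .
docker images fe-app     # compare with the earlier single-stage image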
Using an object storage service (OSS)
If you break down the 50 MB: the nginx:alpine base image accounts for about 16 MB, and the remaining 40 MB or so is static assets. In a production environment, static assets are usually served from a separate domain and accelerated with a CDN.

If the static assets are uploaded to an object storage service (OSS) and a CDN is put in front of OSS, they no longer need to be baked into the image at all; in production there is a strong case for serving static assets through a CDN anyway.

With that, the image size can be kept under 20 MB. Although the image shrinks considerably, this adds complexity and time to the build (such as the upload to OSS), so using OSS is unnecessary for test or branch environments.
Static assets fall into two groups:

/build: files referenced via require/import in the project, given a content hash by webpack at build time, with their URLs controlled through publicPath. These can be uploaded to OSS with a long-lived cache and kept out of the image entirely (a possible upload command is sketched after this list).
/static: files referenced directly by root path in the project; these go straight into the image. Uploading them to OSS would add complexity (publicPath would have to be rewritten in bulk).
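One possible shape for the upload step, assuming Alibaba Cloud's ossutil CLI is installed and configured; the bucket name and paths are hypothetical:

# the command behind "npm run uploadOss" might boil down to something like:
ossutil cp -r ./public/build oss://my-bucket/build/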
A script command, npm run uploadOss, now uploads the static assets to OSS, and the Dockerfile is updated as follows:
FROM node:10-alpine as builder
ENV PROJECT_ENV production
ENV NODE_ENV production

WORKDIR /code
ADD package.json package-lock.json /code/
RUN npm ci
ADD . /code
# npm run uploadOss is a script that uploads the static assets to OSS
RUN npm run build && npm run uploadOss

# Choose a smaller base image for the final stage
FROM nginx:alpine
COPY --from=builder /code/public/index.html /code/public/favicon.ico /usr/share/nginx/html/
COPY --from=builder /code/public/static /usr/share/nginx/html/static

That concludes this walkthrough of how to use Docker to deploy front-end applications efficiently. Theory works best when paired with practice, so give it a try.