2025-01-16 Update From: SLTechnology News & Howtos — Servers
This article walks through a practical case of deploying a front-end application with Docker. I hope you get something out of it; let's work through it together.
Docker keeps growing in popularity: it isolates environments easily and flexibly, scales well, and simplifies operations and maintenance. It also makes development, testing, and deployment easier for developers.
Most importantly, when you face an unfamiliar project, you can follow its Dockerfile and often get it running locally without even reading the documentation (which is not necessarily complete or correct anyway).
These days I put great emphasis on the concept of DevOps. I put the five big letters "devops" on my computer desktop and stared at them all day, until it suddenly became clear: DevOps means writing a Dockerfile to run the app (just kidding).
Here is how to deploy a front-end application with Docker. A journey of a thousand miles begins with a single step, which here means: make it run first.
First, a brief overview of a typical front-end deployment process:

1. `npm install` — install dependencies
2. `npm run build` — compile and package, generating static resources
3. Serve the static resources
Having covered the deployment process, let's write a simple Dockerfile:

```dockerfile
FROM node:alpine

# mark this as the production environment
ENV PROJECT_ENV production

WORKDIR /code
ADD . /code
RUN npm install && npm run build && npm install -g http-server
EXPOSE 80
CMD http-server ./public -p 80
```
Now the front-end service is up and running, and you can move on to the other stages of deployment. These typically become the job of operations, but it never hurts to expand your knowledge boundaries:
- Use nginx or traefik as a reverse proxy
- Use kubernetes or docker-compose for orchestration
- Use gitlab ci or drone ci for CI/CD
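As a sketch of the reverse-proxy item, a minimal nginx server block might look like this (the domain name and upstream port are assumptions for illustration, not from the original article):

```nginx
server {
    listen 80;
    server_name app.example.com;   # hypothetical domain

    location / {
        # forward requests to the front-end container,
        # assumed here to be listening on port 8080 on this host
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```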
At this point, the image has two problems that make every deployment slow, which hurts rapid product delivery:

- Building the image takes too long
- The built image is too large: 1 GB+
Start with dependencies and devDependencies.
Lu Xiaofeng said that a front-end programmer who works eight hours a day wastes at least two of them: one hour on npm install and the other on npm run build.
If every deployment avoids downloading useless packages, a lot of image build time can be saved. Modules used only for linting and testing, such as eslint, mocha, and chai, can be moved into devDependencies; in a production environment, `npm install --production` then skips them.
For the difference between the two, please refer to the document https://docs.npmjs.com/files/package.json.html#dependencies.
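For illustration, a package.json split along those lines might look like this (the package names besides the test tools mentioned above, and all version numbers, are placeholders):

```json
{
  "dependencies": {
    "react": "^16.0.0"
  },
  "devDependencies": {
    "eslint": "^5.0.0",
    "mocha": "^5.0.0",
    "chai": "^4.0.0"
  }
}
```

With this split, `npm install --production` installs only the `dependencies` section.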
```dockerfile
FROM node:alpine
ENV PROJECT_ENV production
WORKDIR /code
ADD . /code
RUN npm install --production && npm run build && npm install -g http-server
EXPOSE 80
CMD http-server ./public -p 80
```
It seems to be a little bit faster.
Notice that package.json is relatively stable compared with the project's source files. If there are no new packages to install, there is no need to re-run the install step when rebuilding the image, which can save half the time spent on npm install.
Make use of the image build cache
For ADD, if the content being added has not changed, Docker can reuse the cached layer. Writing package.json into the image separately from the source files is therefore a good choice: if no new packages were added, the install step comes straight from cache.
```dockerfile
FROM node:alpine
ENV PROJECT_ENV production

# http-server can also be cached this way
RUN npm install -g http-server

WORKDIR /code

# add package.json first, so npm install can hit the cache
ADD package.json /code
RUN npm install --production

ADD . /code
RUN npm run build
EXPOSE 80
CMD http-server ./public -p 80
```
There are more details about using the cache that deserve special attention, such as the caching of a RUN git clone instruction.
Refer to the official document https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache
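For example, a RUN git clone layer stays cached even after the remote repository changes. One widely used workaround (a community trick offered here as an illustration, not taken from the linked document; the repository URL is a placeholder) is to invalidate the cache with a build argument:

```dockerfile
# passing a new value, e.g. `docker build --build-arg CACHEBUST=$(date +%s) .`,
# invalidates the cache for this and every following instruction
ARG CACHEBUST=1
RUN git clone https://github.com/example/repo.git
```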
Multi-stage builds
Thanks to caching, image builds are now much faster. However, the image is still too large, which slows down every deployment.
Consider the process of each CI deployment:

1. Build the image on the build server
2. Push the image to the image registry
3. Pull the image on the production server and start the container
Obviously, a large image makes transmission inefficient and increases the latency of every deployment.
Even when the build server and the production server sit on the same node, so there is no transfer latency, reducing image size still saves disk space.
A large part of the image's excessive size comes from the notoriously huge node_modules directory. But in the end, we only need the contents of the public folder; the source files and everything under node_modules are unnecessarily large and wasteful.
At this point, you can use Docker's multi-stage builds to extract only the compiled files.
Refer to the official document https://docs.docker.com/develop/develop-images/multistage-build/
```dockerfile
FROM node:alpine as builder
ENV PROJECT_ENV production

# the cache tricks above still apply unchanged
WORKDIR /code
ADD package.json /code
RUN npm install --production
ADD . /code
RUN npm run build

# choose a smaller base image
FROM nginx:alpine
COPY --from=builder /code/public /usr/share/nginx/html
```
At this point, the image size has dropped from 1 GB+ to around 50 MB.
Use CDN
Analyzing that 50 MB: the nginx:alpine base image accounts for about 16 MB, and most of the rest is static resources. If the static resources are uploaded to a CDN, they no longer need to go into the image, and the image size can be kept below 20 MB.
Static resources can be classified into two kinds:

- /static: files referenced by root path directly in the project. They are copied into /public when packaging and must go into the image.
- /build: files pulled in via require. webpack packages them with a hash in the file name, and their resource address can be changed via publicPath. Such files can be uploaded to the CDN with a permanent cache header and need not go into the image.
```dockerfile
FROM node:alpine as builder
ENV PROJECT_ENV production

# the cache tricks above still apply unchanged
WORKDIR /code
ADD package.json /code
RUN npm install --production
ADD . /code

# npm run uploadCdn is a script file that uploads static resources to the CDN
RUN npm run build && npm run uploadCdn

# choose a smaller base image
FROM nginx:alpine
COPY --from=builder /code/public/index.html /code/public/favicon.ico /usr/share/nginx/html/
COPY --from=builder /code/public/static /usr/share/nginx/html/static
```

After reading this article, you should have a working picture of this case of deploying front-end applications with Docker. Thank you for reading!