This article mainly introduces how to optimize the Docker image build process. It first walks through how Docker builds images, and then shares a set of simple, practical suggestions. I hope it helps answer your questions about optimizing Docker image builds.
How do you build Docker images?
Let's start with the Docker build process. A build is triggered by running the docker build command from the Docker CLI.
The docker build command builds a Docker image according to the instructions in a Dockerfile. A Dockerfile is a text document that contains, in order, all the commands needed to assemble the image.
Docker images consist of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked on top of each other, and each layer is a delta of the layer before it. You can think of these layers as a form of caching: only the layers that change need to be rebuilt, rather than rebuilding everything on every change.
The following example shows the contents of a Dockerfile:
FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app.py
Each instruction in this file represents a separate layer in the Docker image. The following is a brief description of each instruction:
FROM creates a layer from the ubuntu:18.04 Docker image
COPY adds files from the Docker client's current directory
RUN builds your application with make
CMD specifies what command to run within the container
When these four instructions are executed during the build, they create layers in the Docker image.
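For reference, the build itself is started with the docker build command mentioned earlier, run from the directory containing the Dockerfile; the tag my-app:latest below is just an illustrative name:

# Build an image from the Dockerfile in the current directory (.)
# and tag it so it can be referenced later.
docker build -t my-app:latest .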
If you want to know more about images and layers, see the Docker documentation on images and layers.
Optimizing the image build process
Now that we have covered how Docker builds images, here are some suggestions to help you build images more efficiently.
1. Use ephemeral containers
The image defined by your Dockerfile should generate containers that are as ephemeral as possible.
By "ephemeral" we mean that a container can be stopped, destroyed, rebuilt, and replaced with a new container. Ephemeral containers can be thought of as disposable: each instance is new and independent of the previous container instance.
When developing Docker images, take advantage of this ephemeral pattern as much as possible.
2. Do not install unnecessary packages
Try to avoid installing unnecessary files and software packages.
Docker images should be kept small. This helps improve portability, shorten build times, reduce complexity, and keep file sizes down. For example, in most cases there is no need to install a text editor, or any non-essential application or service, in a container. A minimal sketch of this idea follows.
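For example, on a Debian or Ubuntu base image you might install only what the application needs, skip recommended extras, and clean up in the same layer; the packages listed here are only placeholders:

FROM ubuntu:18.04
# Install only the required packages, skip recommended extras, and
# remove the apt cache in the same layer to keep the image small.
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    curl \
 && rm -rf /var/lib/apt/lists/*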
3. Use a .dockerignore file
The .dockerignore file declares files and directories that should not be sent to the Docker daemon as part of the build context, so they cannot end up in the image. This helps avoid packaging unnecessarily large or sensitive files, and avoids adding them to a public image.
If you want to exclude files that are not relevant to the build without restructuring your source repository, use a .dockerignore file. It supports exclusion patterns similar to those of a .gitignore file. An example is shown below.
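A possible .dockerignore might look like the following; the entries are illustrative and should be adapted to your project:

# Version control metadata and local build artifacts
.git
*.log

# Dependencies that will be installed inside the image anyway
node_modules

# Local configuration and secrets that must not end up in the image
.env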
4. Sort multi-line arguments
Whenever possible, sort multi-line arguments alphanumerically. This simplifies future changes, helps avoid duplicated packages, and makes the list easier to update. It also makes pull requests easier to read and review. Adding a space before the backslash (\) also helps. A sorted example follows.
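For instance, a package-install instruction with sorted arguments might look like this; the package names are only examples:

RUN apt-get update && apt-get install -y \
    bzr \
    cvs \
    git \
    mercurial \
    subversion \
 && rm -rf /var/lib/apt/lists/*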
5. Decouple applications
Applications that depend on other applications are considered "coupled".
In some cases coupled applications are hosted on the same host or compute node, which is common in non-container deployments, but with microservices each application should live in its own container. Decoupling an application into multiple containers makes it easier to scale horizontally and to reuse containers. For example, a decoupled web application might consist of three separate containers, each with its own image: one for the web application, one for the database, and one for the cache.
It is a good rule of thumb to limit each container to one process. Use your best judgment to keep containers as clean and modular as possible.
If containers depend on one another, you can use Docker container networks to ensure that they can communicate, as sketched below.
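A rough sketch of the web/database/cache split using the docker CLI might look like this; the network, container, and image names (app-net, web-app, my-web-app:latest, the password value, and so on) are illustrative:

# Create a user-defined bridge network so containers can reach each
# other by name.
docker network create app-net

# One container per concern: database, cache, web application.
docker run -d --name app-db    --network app-net -e POSTGRES_PASSWORD=example postgres:13
docker run -d --name app-cache --network app-net redis:6
docker run -d --name web-app   --network app-net -p 8080:8080 my-web-app:latest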
6. Keep the number of layers small
In a Docker build, only the RUN, COPY, and ADD instructions create layers. Other instructions create temporary intermediate images and do not increase the size of the build.
Where possible, use multi-stage builds and copy only the required artifacts into the final image. This lets you include extra tools and debug information in intermediate build stages without increasing the size of the final image. A sketch of a multi-stage build follows.
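As an illustration, a multi-stage build for a compiled application might look like this; the Go toolchain, image tags, and the app binary name are assumptions made for the sake of the example:

# Build stage: contains the compiler and build tools.
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
# Build a statically linked binary so it runs on a minimal base image.
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: copy only the built binary; compilers and sources stay behind.
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
CMD ["app"]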
7. Leverage the build cache
When building an image, Docker steps through the instructions in your Dockerfile one by one, in order. For each instruction, Docker checks its cache for an existing image it can reuse instead of creating a new one. These are the basic rules Docker follows:
Starting from a base image that is already in the cache, the next instruction is compared against all child images derived from that base image to see whether one of them was built using exactly the same instruction. If not, the cache is invalidated.
For the ADD and COPY instructions, the contents of the files being copied are checksummed and compared against the files in the existing images. If anything in the files has changed, such as their contents or metadata, the cache is invalidated.
Apart from the ADD and COPY instructions, cache checking does not look at the files in the container to determine a cache match. For example, when processing a RUN apt-get -y update command, the files updated in the container are not examined; only the command string itself is used to find a match.
Once the cache is invalidated, all subsequent Dockerfile instructions generate new images and the cache is not used. A cache-friendly ordering of instructions is sketched below.
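Building on these rules, one common way to get more cache hits is to copy slow-changing dependency manifests before the application source. The sketch below assumes a Python application with a requirements.txt and an app.py, similar in spirit to the earlier example:

FROM python:3.9-slim
WORKDIR /app

# Copy only the dependency list first, so the install layer below stays
# cached as long as requirements.txt is unchanged.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application source changes often; keep it in a later layer so edits
# here do not invalidate the dependency-install layer above.
COPY . .
CMD ["python", "app.py"]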
Optimizing Docker image builds in a CI pipeline
First of all, all of the optimization ideas above also apply to building images in a CI pipeline. If the Dockerfile changes, leveraging the cache is still the best way to reduce build time.
When building a Docker image is a regular step in a CI pipeline, you can take advantage of Docker layer caching (DLC) to speed up builds. DLC is a great feature: it saves the image layers created during a job and reuses the unchanged layers in subsequent builds, rather than rebuilding the entire image every time.
DLC can be used with the machine executor or the remote Docker environment (setup_remote_docker). Note that DLC only helps when images are created with commands such as docker build or docker compose.
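Where DLC is not available, a common alternative is sketched below: pull the previously published image and pass it to docker build as a cache source. The registry path registry.example.com/my-app is hypothetical, and whether layers are actually reused depends on how the earlier image was built:

# Pull the last published image; ignore the failure on the very first build.
docker pull registry.example.com/my-app:latest || true

# Reuse its layers as a cache source while building the new image.
docker build --cache-from registry.example.com/my-app:latest \
  -t registry.example.com/my-app:latest .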
This concludes the study of optimizing the Docker image build process. I hope it has answered your questions; combining theory with practice is the best way to learn, so go and try it out!