
How Docker Dockerfile customizes the image


In this article the editor shares how to customize a Docker image with a Dockerfile. Most people don't know much about this topic, so it is shared here for your reference; I hope you will learn a lot after reading it. Let's take a look!

Use Dockerfile to customize the image

Customizing an image really means customizing the configuration and files that each layer adds. If we could write a script describing how each layer is modified, installed, built, and operated on, and use that script to build and customize the image, then the problems of non-repeatable builds, opaque image construction, and image size would all be solved. That script is the Dockerfile.

A Dockerfile is a text file containing instructions; each instruction builds one layer, so the content of each instruction describes how that layer should be built.

Here we take customizing an nginx image as an example of using a Dockerfile.

In a blank directory, create a text file and name it Dockerfile:

$ mkdir mynginx
$ cd mynginx
$ touch Dockerfile

Its contents are as follows:

FROM nginx
RUN echo 'Hello, Docker!' > /usr/share/nginx/html/index.html

This Dockerfile is very simple, a total of two lines. Two instructions are involved, FROM and RUN.

Detailed explanation of Dockerfile instruction

FROM specifies the base image

A custom image must be based on some existing image and customized on top of it. FROM specifies that base image, so FROM is a required instruction in a Dockerfile, and it must be the first one.

There are many high-quality official images on Docker Store, including service images that can be used directly, such as nginx, redis, mongo, and mysql, as well as images that make it easy to develop, build, and run applications in various languages, such as node, openjdk, and python. We can look for the image that best matches our final goal and customize on top of it.

If the corresponding service image is not found, some more basic operating system images are also provided in the official image, such as ubuntu, debian, centos and so on. The software libraries of these operating systems provide us with more room for expansion.

Besides selecting an existing image as the base image, Docker also has a special image named scratch. This image is a virtual concept that does not actually exist; it represents a blank image.

FROM scratch...

If you base an image on scratch, you are not basing it on any image at all, and the instructions that follow will form the image's first layer.

It is not uncommon to copy executable files directly into the image without any system underneath, as in swarm and coreos/etcd. For statically compiled programs under Linux, no operating system is needed to provide runtime support: all the necessary libraries are already inside the executable, so using FROM scratch directly makes the image smaller. Many applications developed in the Go language build their images this way, which is one reason some people consider Go particularly well suited to container microservice architectures.
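For illustration, a minimal scratch-based Dockerfile might look like this (a sketch; hello stands for a hypothetical statically compiled executable sitting next to the Dockerfile):

FROM scratch
# copy the static binary into the otherwise empty image
COPY ./hello /hello
# run it as the container's main process
CMD ["/hello"]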

RUN executes commands

The RUN instruction is used to execute command-line commands. Because of the power of the command line, the RUN instruction is one of the most commonly used instructions when customizing images. There are two formats:

Shell format: RUN <command>, just like a command entered directly on the command line. This is the format of the RUN instruction in the Dockerfile we just wrote.

RUN echo 'Hello, Docker!' > /usr/share/nginx/html/index.html

Exec format: RUN ["executable", "parameter 1", "parameter 2"], which looks more like a function call.
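For instance, a package installation might be written in exec format like this (an illustrative sketch; because no shell is involved in exec format, shell features such as variable expansion and output redirection are not available):

RUN ["apt-get", "install", "-y", "nginx"]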

Since RUN executes commands just like a shell script, can we write one RUN per command, as in a shell script? Like this:

FROM debian:jessie
RUN apt-get update
RUN apt-get install -y gcc libc6-dev make
RUN wget -O redis.tar.gz "http://download.redis.io/releases/redis-3.2.5.tar.gz"
RUN mkdir -p /usr/src/redis
RUN tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1
RUN make -C /usr/src/redis
RUN make -C /usr/src/redis install

As mentioned earlier, every instruction in a Dockerfile creates a layer, and RUN is no exception. Each RUN behaves just like the process of building an image by hand: start a new layer, execute the commands on it, and, when they finish, commit the layer's changes to form a new image.

Written this way, the Dockerfile creates a seven-layer image. That is completely pointless, and plenty of things not needed at run time get baked into the image, such as the build environment and updated package lists. The result is a very bloated, many-layered image that not only lengthens build and deployment times but is also error-prone. This is a common mistake among Docker beginners (I can't forgive myself for it either).

Union FS has a maximum number of layers. AUFS, for example, once allowed at most 42 layers and is now limited to 127 layers.

The above Dockerfile should be correctly written as follows:

FROM debian:jessie
RUN buildDeps='gcc libc6-dev make' \
    && apt-get update \
    && apt-get install -y $buildDeps \
    && wget -O redis.tar.gz "http://download.redis.io/releases/redis-3.2.5.tar.gz" \
    && mkdir -p /usr/src/redis \
    && tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 \
    && make -C /usr/src/redis \
    && make -C /usr/src/redis install \
    && rm -rf /var/lib/apt/lists/* \
    && rm redis.tar.gz \
    && rm -r /usr/src/redis \
    && apt-get purge -y --auto-remove $buildDeps

First of all, all the previous commands have a single purpose: to compile and install the redis executable. There is no need to build many layers; this is a one-layer job. So instead of mapping each command to its own RUN, we use a single RUN instruction and chain the required commands with &&, simplifying the previous 7 layers to 1. When writing a Dockerfile, keep reminding yourself that this is not a shell script but a definition of how each layer should be built.

Note also the line wrapping here for readability. Dockerfile supports the shell-style continuation of ending a line with \, and comments starting with # at the beginning of a line. Good formatting, such as line wrapping, indentation, and comments, makes maintenance and troubleshooting easier and is a good habit.
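A small sketch of both conventions together:

# install the compiler toolchain (lines starting with # are comments)
RUN apt-get update \
    && apt-get install -y gcc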

In addition, you can see that a cleanup command is added at the end of this set of commands: it removes the software needed only to compile and build, cleans up all downloaded and unpacked files, and clears the apt cache. This is a very important step. As mentioned before, images use layered storage: things in one layer are not deleted in the next layer, but follow the image around forever. So when building an image, make sure each layer adds only what is really needed, and that anything irrelevant is cleaned up.

One reason many Docker beginners produce bloated images is forgetting that each layer's build must end by cleaning up irrelevant files.

Build an image

All right, let's go back to the Dockerfile of the previously customized nginx image. Now that we understand the contents of the Dockerfile, let's build the image.

Execute in the same directory as the Dockerfile file:

$ docker build -t nginx:v3 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM nginx
 ---> e43d811ce2f4
Step 2 : RUN echo 'Hello, Docker!' > /usr/share/nginx/html/index.html
 ---> Running in 9cdc27646c7b
 ---> 44aa4490ce2c
Removing intermediate container 9cdc27646c7b
Successfully built 44aa4490ce2c

From the command's output we can clearly see the image's build process. In Step 2, as described earlier, the RUN instruction starts container 9cdc27646c7b, executes the required command, commits this layer as 44aa4490ce2c, and finally removes the container 9cdc27646c7b it used.

Here we use the docker build command to build the image. Its format is:

docker build [options] <context path/URL/->

Here we named the final image with -t nginx:v3. After a successful build, we can run the image directly; the result is that our home page now reads Hello, Docker!.
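For instance, one quick way to verify (a sketch assuming host port 81 is free; web3 is an arbitrary container name):

$ docker run --name web3 -d -p 81:80 nginx:v3
$ curl http://localhost:81
Hello, Docker!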

Image build context (Context)

You may have noticed the . at the end of the docker build command. It represents the current directory, and the Dockerfile is in the current directory, so many beginners think this path specifies where the Dockerfile lives. That is not accurate. If you match it against the command format above, you will see it specifies the context path. So what is the context?

First we need to understand how docker build works. At run time, Docker is divided into the Docker engine (the server-side daemon) and client tools. The Docker engine exposes a set of REST APIs called the Docker Remote API, and client tools such as the docker command interact with the engine through this API to perform various functions. So although on the surface we seem to run docker functions locally, everything is in fact done on the server side (the Docker engine) through remote calls. This C/S (client/server) design also makes it easy to operate the Docker engine on a remote server.

When we build an image, not all customization happens through the RUN instruction; we often need to copy local files into the image, for example with the COPY or ADD instructions. But the docker build command does not build locally: it builds on the server side, in the Docker engine. So in this client/server architecture, how can the server get our local files?

This introduces the concept of the context. When building, the user specifies the path of the build context. Once the docker build command knows this path, it packages everything under that path and uploads it to the Docker engine. When the engine receives the context package, it expands it and obtains all the files needed to build the image.

If you write this in Dockerfile:

COPY ./package.json /app/

This does not copy the package.json in the directory where docker build was executed, nor the package.json in the directory containing the Dockerfile, but the package.json in the context directory.

Therefore, the source paths in instructions such as COPY are relative paths. This is also why beginners often ask why COPY ../package.json /app or COPY /opt/xxxx /app does not work: those paths are outside the context, so the Docker engine cannot get the files at those locations. If you really need those files, copy them into the context directory first.

Now you can understand the . in the command docker build -t nginx:v3 . : it specifies the context directory, whose contents the docker build command packages and sends to the Docker engine to support building the image.

Looking back at the docker build output, we have in fact already seen the context being sent:

$ docker build -t nginx:v3 .
Sending build context to Docker daemon 2.048 kB
...

Understanding the build context matters for image builds, to avoid mistakes that need not happen. For example, some beginners find that COPY /opt/xxxx /app does not work, so they simply put the Dockerfile at the root of the hard disk and build from there, only to find docker build extremely slow, easily trying to send tens of GB. That is because this asks docker build to package the entire hard drive, which is plainly a misuse.

In general, the Dockerfile should be placed in an empty directory or at the project root. If the required files are not in that directory, make a copy of them there. If the directory contains things you really do not want to pass to the Docker engine at build time, you can write a .dockerignore, with the same syntax as .gitignore, to weed out what does not need to be passed to the Docker engine as context.
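A minimal .dockerignore sketch (these entries are illustrative; tailor them to your project):

.git
node_modules
*.log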

So why do people mistakenly think . specifies the directory where the Dockerfile lives? Because by default, if no other Dockerfile is specified, the file named Dockerfile in the context directory is used as the Dockerfile.

That is only the default behavior. The Dockerfile does not actually have to be named Dockerfile, nor does it have to live in the context directory. For example, you can designate a particular file as the Dockerfile with the -f ../Dockerfile.php parameter.
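For instance (a sketch; Dockerfile.php is the hypothetical file name from the parameter above):

$ docker build -f ../Dockerfile.php -t myphp .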

Of course, it is customary to use the default file name Dockerfile and place it in the image build context directory.

Other uses of docker build

Build directly with Git repo

As you may have noticed, docker build also supports building from URL, for example, directly from Git repo:

$ docker build https://github.com/twang2218/gitlab-ce-zh.git#:8.14
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM gitlab/gitlab-ce:8.14.0-ce.0
8.14.0-ce.0: Pulling from gitlab/gitlab-ce
aed15891ba52: Already exists
773ae8583d14: Already exists
...

This command specifies the Git repo needed for the build, with the default master branch and /8.14/ as the build directory. Docker then git clones the project, switches to the specified branch, and enters the specified directory to start the build.

Build with a given tar package

$ docker build http://server/context.tar.gz

If the URL given is not a Git repo but a tar archive, the Docker engine downloads it, automatically decompresses it, and uses its contents as the context to start the build.

Read Dockerfile from standard input to build

Docker build-

If standard input supplies a text file, it is treated as the Dockerfile and the build starts. Because this form reads the Dockerfile directly from standard input, there is no context, so unlike the other methods it cannot do things like COPY local files into the image.

Read a context archive from standard input to build

$ docker build - < context.tar.gz

If the file on standard input is in gzip, bzip2, or xz format, it is taken to be a context archive: Docker expands it directly, treats its contents as the context, and starts the build.

COPY copies files

Format:

COPY <source path>... <destination path>
COPY ["<source path1>",... "<destination path>"]

Like RUN, COPY has two formats, one resembling a command line and one resembling a function call. The COPY instruction copies the files/directories at <source path> in the build context directory to <destination path> inside the image, in a new layer. For example:

COPY package.json /usr/src/app/

<source path> may be multiple paths and may even contain wildcards, whose rules must satisfy Go's filepath.Match rules, e.g.:

COPY hom* /mydir/
COPY hom?.txt /mydir/

<destination path> may be an absolute path inside the container, or a path relative to the working directory (which can be set with the WORKDIR instruction). The destination path does not need to be created in advance; if it does not exist, the missing directories are created before the files are copied.

Note also that COPY preserves all of the source files' metadata, such as read/write/execute permissions and file modification times. This is very useful for image customization, especially when the build-related files are managed with Git.

ADD performs more advanced file copying

The ADD instruction has essentially the same format and nature as COPY, but adds some features on top of it. For example, <source path> can be a URL, in which case the Docker engine tries to download the file at that link into <destination path>. The downloaded file's permissions are automatically set to 600; if that is not what you want, an extra RUN layer is needed to adjust them, and if the download is an archive that must be unpacked, yet another RUN layer is needed to decompress it. So it is more reasonable to just use RUN together with wget or curl to download, fix permissions, decompress, and clean up. This feature is therefore not very practical, and its use is not recommended.

If <source path> is a tar archive compressed with gzip, bzip2, or xz, the ADD instruction automatically decompresses it into <destination path>. In some cases this automatic decompression is very useful, as in the official ubuntu image:

FROM scratch
ADD ubuntu-xenial-core-cloudimg-amd64-root.tar.gz /
...

But in cases where we really do want to copy an archive in without decompressing it, ADD cannot be used.

Docker's official Dockerfile best-practices document asks that COPY be used wherever possible, because COPY's semantics are clear: it just copies files, whereas ADD bundles more complex behavior that is not always obvious. The most suitable occasion for ADD is exactly the automatic decompression just mentioned.

Note also that the ADD instruction invalidates the image build cache, which may make builds slower.

So when choosing between COPY and ADD, follow this principle: use COPY for all file copying, and use ADD only where automatic decompression is needed.

CMD specifies the container startup command

The CMD instruction's format is similar to RUN, with two forms:

shell format: CMD <command>
exec format: CMD ["executable", "parameter 1", "parameter 2"...]
parameter-list format: CMD ["parameter 1", "parameter 2"...] — when an ENTRYPOINT instruction is specified, CMD supplies its concrete parameters.

When introducing containers we said that Docker is not a virtual machine: a container is a process. Since it is a process, the program to run and its arguments must be specified when the container starts. The CMD instruction specifies the default startup command for the container's main process.

A new command can be given at run time to replace this default. For example, the ubuntu image's default CMD is /bin/bash: docker run -it ubuntu drops you straight into bash. We can also specify a different command at run time, e.g. docker run -it ubuntu cat /etc/os-release, which replaces the default /bin/bash with cat /etc/os-release and prints the system version information.

The exec format is generally recommended. It is parsed as a JSON array, so double quotes " must be used, never single quotes.

With the shell format, the actual command is wrapped as an argument of sh -c. For example:

CMD echo $HOME

is actually executed as:

CMD [ "sh", "-c", "echo $HOME" ]

This is why environment variables work here: they are parsed and expanded by the shell.

Speaking of CMD, we must mention foreground versus background execution of applications in containers, a common point of confusion for beginners. Docker is not a virtual machine: applications in a container should run in the foreground, not be launched as background services with upstart/systemd the way they are on virtual or physical machines. There is no concept of background services inside a container.

Beginners often write CMD as:

CMD service nginx start

and then find the container exits immediately after starting; even systemctl inside the container simply does not work. This comes from not understanding foreground versus background, not distinguishing containers from virtual machines, and still understanding containers from a traditional VM perspective.

For a container, the startup program is the container's application process. The container exists for its main process: when the main process exits, the container loses its reason to exist and exits too; other auxiliary processes are not its concern.

service nginx start asks systemd to start nginx as a background daemon. But as just noted, CMD service nginx start is interpreted as CMD [ "sh", "-c", "service nginx start" ], so the main process is actually sh. When the service nginx start command finishes, sh finishes too, and since sh is the main process, the container exits with it.

The correct approach is to execute the nginx binary directly and require it to run in the foreground:

CMD ["nginx", "-g", "daemon off;"]

ENTRYPOINT specifies the entry point

ENTRYPOINT has the same formats as RUN: exec format and shell format. Like CMD, ENTRYPOINT specifies the container's startup program and arguments. It too can be replaced at run time, though somewhat more tediously than CMD, via the --entrypoint parameter of docker run.

When an ENTRYPOINT is specified, the meaning of CMD changes: instead of running the command directly, the content of CMD is passed as arguments to the ENTRYPOINT instruction. In other words, what actually executes becomes:

<ENTRYPOINT> "<CMD>"

So with CMD available, why have ENTRYPOINT at all? What good is this <ENTRYPOINT> "<CMD>"? Let's look at a few scenarios.

Scenario 1: using an image like a command

Suppose we need an image that reports our current public IP. We can first implement it with CMD:

FROM ubuntu:16.04
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*
CMD [ "curl", "-s", "http://ip.cn" ]

If we build the image with docker build -t myip ., then whenever we need our current public IP we only have to run:

$ docker run myip
Current IP: 61.148.226.66 From: Beijing, China Unicom

So it seems the image can be used like a command. But commands take options. What if we want to add one? The actual command underlying the CMD above is curl, so to show the HTTP headers we need the -i flag. Can we pass -i directly to docker run myip?

$ docker run myip -i
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"-i\\\": executable file not found in $PATH\"\n".

We get an executable file not found error. As said before, what follows the image name is the command, which replaces the default CMD at run time. So here -i replaced the original CMD instead of being appended to curl -s http://ip.cn; and since -i is not a command at all, it naturally cannot be found.

To pass the -i flag we would have to retype the whole command:

$ docker run myip curl -s http://ip.cn -i

That is clearly not a good solution, but ENTRYPOINT solves this problem. Let's implement the image again with ENTRYPOINT:

FROM ubuntu:16.04
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT [ "curl", "-s", "http://ip.cn" ]

Now let's try docker run myip -i again:

$ docker run myip
Current IP: 61.148.226.66 From: Beijing, China Unicom

$ docker run myip -i
HTTP/1.1 200 OK
Server: nginx/1.8.0
Date: Tue, 22 Nov 2016 05:12:40 GMT
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Powered-By: PHP/5.6.24-1~dotdeb+7.1
X-Cache: MISS from cache-2
X-Cache-Lookup: MISS from cache-2:80
X-Cache: MISS from proxy-2_6
Transfer-Encoding: chunked
Via: 1.1 cache-2:80, 1.1 proxy-2_6:8006
Connection: keep-alive

Current IP: 61.148.226.66 From: Beijing, China Unicom

This time it works. With an ENTRYPOINT present, the content of CMD is passed to it as arguments; here -i became the new CMD and was handed to curl as a parameter, achieving exactly the effect we wanted.

Scenario 2: preparation before the application runs

Starting a container means starting its main process, but sometimes preparation is needed before that. For a database such as mysql, some configuration and initialization work may have to be finished before the final mysql server runs.

You may also want to avoid starting the service as root, for better security, yet still need to perform some necessary preparation as root before finally switching to the service user to start the service; or you may want commands other than the service itself to keep running as root, which is convenient for debugging.

This preparation is unrelated to the container's CMD: whatever CMD is, a preprocessing step is needed beforehand. In such cases you can write a script and put it in ENTRYPOINT; the script takes the arguments it receives (that is, <CMD>) as a command and executes it at the end. The official redis image does exactly this:

FROM alpine:3.4
...
RUN addgroup -S redis && adduser -S -G redis redis
...
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 6379
CMD [ "redis-server" ]

It creates a redis user for the redis service, and at the end specifies docker-entrypoint.sh as the ENTRYPOINT:

#!/bin/sh
...
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
    chown -R redis .
    exec su-exec redis "$0" "$@"
fi
exec "$@"

The script inspects the content of CMD: if it is redis-server, it switches to the redis user and starts the server; otherwise it keeps executing as root. For example:

$ docker run -it redis id
uid=0(root) gid=0(root) groups=0(root)

ENV sets environment variables

There are two formats:

ENV <key> <value>
ENV <key1>=<value1> <key2>=<value2>...

This instruction simply sets environment variables, which can then be used directly both by later instructions, such as RUN, and by the application at run time.

ENV VERSION=1.0 DEBUG=on \
    NAME="Happy Feet"

This example shows how to wrap lines and how to enclose values containing spaces in double quotes, consistent with shell behavior.

Once an environment variable is defined, subsequent instructions can use it. For example, the official node image's Dockerfile contains code like this:

ENV NODE_VERSION 7.2.0
RUN curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
    && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
    && tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
    && rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
    && ln -s /usr/local/bin/node /usr/local/bin/nodejs

The environment variable NODE_VERSION is defined first and then used repeatedly, as $NODE_VERSION, in the following RUN layer. When upgrading the image build in the future, only 7.2.0 needs updating, making Dockerfile maintenance much easier.

The following instructions support environment variable expansion: ADD, COPY, ENV, EXPOSE, LABEL, USER, WORKDIR, VOLUME, STOPSIGNAL, ONBUILD.

From this list you can sense how widely environment variables can be used, and how powerful they are. With environment variables, one Dockerfile can produce many more images, simply by varying the environment variables.

ARG sets build arguments

Format: ARG <name>[=<default value>]

Build arguments have the same effect as ENV: they set environment variables. The difference is that the environment variables set by ARG exist only in the build environment and will not be present when the container runs. But do not use ARG to hold passwords and the like on that account, because docker history can still show all the values.

The ARG instruction in a Dockerfile defines a parameter name and its default value, which can be overridden in the docker build command with --build-arg <name>=<value>.

In versions before 1.13, the parameter names in --build-arg had to have been defined with ARG in the Dockerfile; in other words, a parameter passed via --build-arg had to be used by the Dockerfile, and if it was not, the build failed with an error. Starting with 1.13 this strict restriction was relaxed: instead of failing, the build prints a warning and continues. This helps when a CI system builds different Dockerfiles with one common build flow, avoiding having to adapt the build command to each Dockerfile's contents.

VOLUME defines anonymous volumes

Format: VOLUME ["<path1>", "<path2>"...] or VOLUME <path>

As we said before, the container storage layer should be kept free of write operations at run time; for database-style applications that store dynamic data, the database files should be kept in a volume. To prevent users from forgetting at run time to mount the directory holding dynamic files as a volume, we can designate certain directories as anonymous volumes in advance in the Dockerfile, so that even if the user mounts nothing at run time, the application still runs normally and does not write large amounts of data into the container storage layer.

VOLUME /data

Here the /data directory is automatically mounted as an anonymous volume at run time, and anything written to /data is not recorded in the container storage layer, keeping that layer stateless. The mount can of course be overridden at run time:

docker run -d -v mydata:/data xxxx

This command mounts the named volume mydata at /data, replacing the anonymous volume mount defined in the Dockerfile.

EXPOSE declares ports

Format: EXPOSE <port1> [<port2>...]

The EXPOSE instruction declares the ports on which the running container provides services. It is only a declaration: the application does not open these ports at run time just because it is there. Writing such a declaration in a Dockerfile has two benefits: it helps users of the image understand which ports the image's service listens on, making port mapping easier to configure; and when random port mapping is used at run time, i.e. docker run -P, the EXPOSE'd ports are automatically mapped to random host ports.

In early Docker versions there was one more special use. All containers used to run on the default bridge network, so every container could reach every other directly, which posed some security risk. Hence the Docker engine parameter --icc=false: with it, containers cannot reach one another by default, only containers linked via the --links parameter can communicate, and only the ports declared by EXPOSE in the image can be accessed. This use of --icc=false has become largely obsolete since docker network was introduced; custom networks make inter-container connectivity and isolation easy.

Keep EXPOSE distinct from the run-time -p <host port>:<container port>. -p maps a host port to a container port, exposing the container's service to outside access, whereas EXPOSE merely declares which ports the container intends to use and does not perform any host port mapping itself.

WORKDIR sets the working directory

Format: WORKDIR <working directory path>

The WORKDIR instruction sets the working directory (i.e. the current directory) for all subsequent layers; if the directory does not exist, WORKDIR creates it.

As mentioned before, a common beginner mistake is to write a Dockerfile as if it were a shell script. That misunderstanding can also lead to errors like this:

RUN cd /app
RUN echo "hello" > world.txt

If you build an image from this Dockerfile and run it, you will find no /app/world.txt file, or that its content is not hello. The reason is simple: in a shell, two consecutive lines run in the same process execution environment, so memory state modified by the previous command directly affects the next one; in a Dockerfile, these two RUN commands have fundamentally different execution environments: they are two completely different containers. This error comes from not understanding that Dockerfile builds use layered storage.

As mentioned earlier, each RUN starts a container, executes commands, and then commits the file changes to the storage layer. The first layer's RUN cd /app merely changes the current process's working directory, an in-memory change whose result leaves no file changes behind. The second layer starts a brand-new container that has nothing to do with the first layer's container, so it naturally cannot inherit the in-memory changes of the previous layer.

So if you need to change the working directory for subsequent layers, use the WORKDIR instruction, as in the sketch below.
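For example, the earlier two-line snippet behaves as intended when written with WORKDIR:

WORKDIR /app
# the working directory now persists for subsequent layers
RUN echo "hello" > world.txt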

USER specifies the current user

Format: USER <user name>

The USER instruction is similar to WORKDIR: both change the state of the environment and affect subsequent layers. WORKDIR changes the working directory; USER changes the identity under which later layers execute commands such as RUN, CMD, and ENTRYPOINT. And like WORKDIR, USER merely switches you to the specified user; the user must have been created beforehand, otherwise the switch fails.

RUN groupadd -r redis && useradd -r -g redis redis
USER redis
RUN [ "redis-server" ]

If a script is executed as root but you want to change identity partway through, for example running a service process as an already-created user rather than using su or sudo (which require cumbersome configuration and often fail in environments lacking a TTY), gosu is recommended.

# create the redis user, and use gosu to execute commands as another user
RUN groupadd -r redis && useradd -r -g redis redis
# download gosu
RUN wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.7/gosu-amd64" \
    && chmod +x /usr/local/bin/gosu \
    && gosu nobody true
# set CMD, and execute it as the other user
CMD [ "exec", "gosu", "redis", "redis-server" ]

HEALTHCHECK health check

Format:

HEALTHCHECK [options] CMD <command>: sets the command that checks the container's health

HEALTHCHECK NONE: if the base image has a health check instruction, this line masks it; see the sketch below.
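A minimal sketch (myweb:v1 is the image built later in this section, which carries a health check):

FROM myweb:v1
# suppress the health check inherited from the base image
HEALTHCHECK NONE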

The HEALTHCHECK instruction tells Docker how to determine whether the container's state is normal. It is a new instruction introduced in Docker 1.12.

Before HEALTHCHECK existed, the Docker engine could only tell a container was abnormal by whether its main process had exited. In many cases that is fine, but if the program deadlocks or spins in an infinite loop, the application process does not exit even though the container can no longer provide service. Before 1.12, Docker would not detect this container state and thus would not reschedule, leaving some containers that could no longer provide service still accepting user requests.

Since 1.12, Docker provides the HEALTHCHECK instruction, which specifies a command line used to judge whether the service provided by the container's main process is still normal, thereby reflecting the container's actual state.

When an image specifies a HEALTHCHECK instruction, containers started from it have an initial state of starting. After a HEALTHCHECK passes, the state becomes healthy; after a certain number of consecutive failures, it becomes unhealthy.

HEALTHCHECK supports the following options:

--interval=<interval>: the time between two health checks; defaults to 30 seconds.

--timeout=<duration>: the timeout for the health check command to run; if it is exceeded, the check counts as failed. Defaults to 30 seconds.

--retries=<number>: when the check fails this many times in a row, the container status is regarded as unhealthy; defaults to 3.

Like CMD and ENTRYPOINT, HEALTHCHECK can only appear once. If more than one is written, only the last one takes effect.

The command after HEALTHCHECK [options] CMD takes the same formats as ENTRYPOINT: shell format and exec format. The command's return value determines the result of the health check: 0: success; 1: failure; 2: reserved, do not use this value.

Suppose we have an image containing the simplest web service, and we want to add a health check that determines whether its web service is working properly. We can use curl for the check, and the Dockerfile's HEALTHCHECK can be written as follows:

FROM nginx
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=5s --timeout=3s \
    CMD curl -fs http://localhost/ || exit 1

Here we check every 5 seconds (a very short interval for demonstration; in practice it should be considerably longer), treat any check that takes more than 3 seconds to respond as failed, and use curl -fs http://localhost/ || exit 1 as the health check command.

Use docker build to build this image:

$ docker build -t myweb:v1 .

Once built, we start a container:

$ docker run -d --name web -p 80:80 myweb:v1

After running the image, you can see through docker container ls that the initial state is (health: starting):

$ docker container ls
CONTAINER ID        IMAGE        COMMAND                  CREATED         STATUS                            PORTS              NAMES
03e28eb00bd0        myweb:v1     "nginx -g 'daemon off"   3 seconds ago   Up 2 seconds (health: starting)   80/tcp, 443/tcp    web

After waiting a few seconds, run docker container ls again, and you will see the health status change to (healthy):

$ docker container ls
CONTAINER ID        IMAGE        COMMAND                  CREATED          STATUS                           PORTS              NAMES
03e28eb00bd0        myweb:v1     "nginx -g 'daemon off"   18 seconds ago   Up 16 seconds (health: healthy)  80/tcp, 443/tcp    web

If the health check fails more than the number of retries, the status changes to (unhealthy).
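As an aside, newer Docker versions support a health filter on docker ps, which can list such containers (a sketch; check your version's documentation):

$ docker ps --filter health=unhealthy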

To help troubleshoot, the output of the health check command (including stdout and stderr) is stored in the health state and can be viewed with docker inspect.

$ docker inspect --format '{{json .State.Health}}' upbeat_allen | python -m json.tool
{
    "FailingStreak": 0,
    "Log": [
        {
            "End": "2018-06-14T04:55:37.477730277-04:00",
            "ExitCode": 0,
            "Output": "<!DOCTYPE html>... Welcome to nginx! ...",
            "Start": "2018-06-14T04:55:37.408045977-04:00"
        },
        {
            "End": "2018-06-14T04:55:42.553816257-04:00",
            "ExitCode": 0,
            "Output": "<!DOCTYPE html>... Welcome to nginx! ...",
            "Start": "2018-06-14T04:55:42.480940888-04:00"
        },
        ...
        {
            "End": "2018-06-14T04:55:57.795117794-04:00",
            "ExitCode": 0,
            "Output": "<!DOCTYPE html>... Welcome to nginx! ...",
            "Start": "2018-06-14T04:55:57.714289056-04:00"
        }
    ],
    "Status": "healthy"
}

(The Output field of each log entry, trimmed here, contains the full HTML of the nginx welcome page fetched by the health check command.)

ONBUILD makes wedding clothes for others (it prepares work for downstream images)

Format: ONBUILD <other instruction>.

ONBUILD is a special instruction: it is followed by another instruction, such as RUN or COPY, which is not executed during the current image's build. It executes only when a later image is built using the current image as its base.

All other instructions in a Dockerfile exist to customize the current image; only ONBUILD exists to help others customize theirs.

Suppose we want to build an image for an application written in Node.js. As we know, Node.js uses npm for package management, and all dependencies, configuration, startup information, etc. live in the package.json file. After getting the program code, you need to run npm install to obtain all the required dependencies; then you can launch the application with npm start. So the Dockerfile is generally written like this:

FROM node:slim
RUN mkdir /app
WORKDIR /app
COPY ./package.json /app
RUN [ "npm", "install" ]
COPY . /app/
CMD [ "npm", "start" ]

Put this Dockerfile in the root directory of the Node.js project, and after building the image you can start a container from it directly. But what if we have a second Node.js project? Fine, copy the Dockerfile into the second project. And a third? Copy again? The more copies of a file, the harder version control becomes, so let's look at how such scenarios are maintained.

If, during development, the first Node.js project finds a problem in its Dockerfile, say a typo, or an additional package to install, the developer fixes the Dockerfile, rebuilds, and the problem is solved. That project is fine, but what about the second one? Although its Dockerfile was originally copied and pasted from the first project, fixing the first project's Dockerfile does not automatically fix the second project's.

So can we make a base image and have every project use it? Then when the base image is updated, no project needs to synchronize Dockerfile changes: rebuilding is enough to inherit the base image's update. All right, let's see what happens. The Dockerfile above would then become:

FROM node:slim
RUN mkdir /app
WORKDIR /app
CMD [ "npm", "start" ]

Here we pull the project-related build instructions out and put them in each subproject. Assuming the base image is named my-node, the Dockerfile in each project becomes:

FROM my-node
COPY ./package.json /app
RUN [ "npm", "install" ]
COPY . /app/

After the base image changes, each project rebuilds its image with this Dockerfile and inherits the base image's updates.

So, is the problem solved? No. To be exact, only half solved. What if something in this Dockerfile needs adjusting? For example, what if npm install needs some extra parameters? This line of RUN cannot go into the base image, because it involves the current project's ./package.json. Do we have to modify every project one by one? So building a base image this way only handles changes to the first four instructions of the original Dockerfile; changes to the last three instructions cannot be handled at all.

ONBUILD solves this problem. Let's rewrite the base image's Dockerfile with ONBUILD:

FROM node:slim
RUN mkdir /app
WORKDIR /app
ONBUILD COPY ./package.json /app
ONBUILD RUN [ "npm", "install" ]
ONBUILD COPY . /app/
CMD [ "npm", "start" ]

This time we return to the original Dockerfile, but prefix the project-related instructions with ONBUILD, so those three lines are not executed when the base image is built. Each project's Dockerfile then becomes simply:

FROM my-node

Yes, just this single line. When the image is built with this one-line Dockerfile in each project directory, the three ONBUILD lines of the base image execute, copying the current project's code into the image and running npm install for the project, producing the application image.
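A sketch of the resulting workflow (directory names are hypothetical): the base image is built once, and the ONBUILD triggers fire only during each project's build:

$ cd my-node && docker build -t my-node .        # ONBUILD lines are recorded, not executed
$ cd ../project1 && docker build -t project1 .   # the three ONBUILD triggers execute here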

That is the whole of "How Docker Dockerfile customizes the image". Thank you for reading! I believe you now have a certain understanding of it, and I hope the content shared here helps you.
