2025-01-17 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This chapter covers:
I. How to create Docker images
II. Docker data management
III. Docker network communication
The Docker image is both the core technology of Docker and the standard format for publishing applications. A complete image is what allows a Docker container to run. A common workflow in day-to-day Docker use is to enter a ready-made container and work inside it, most often to install application services. If you then want to migrate those installed services elsewhere, you need to create a new image that captures both the environment and the services built on it. There are three ways to create an image:
I. How to create a Docker image

1. Create based on an existing image
The docker commit command is mainly used to create images based on existing images. Its essence is to package the program running in a container and the running environment of the program to generate a new image.
Command format:
docker commit [options] <container ID or name> <repository>[:tag]

Common options:
-m: commit message
-a: author information
-p: pause the container while the image is generated
The methods are as follows:
(1) use the image to create a new container and modify it.
[root@localhost ~]# docker images    # view the local Docker images
REPOSITORY          TAG      IMAGE ID       CREATED       SIZE
docker.io/centos    latest   0f3e07c0138f   6 weeks ago   220 MB
[root@localhost ~]# docker run --privileged -d -it --name centos docker.io/centos init
794df7dc4cebeb43afb2a1d7cf424578a4f10c2344bcdb7208d6632609ce087c
# use the centos image to generate a container named centos, loading init as the daemon process
[root@localhost ~]# docker exec -it centos /bin/bash    # specify a shell to enter the container
[root@794df7dc4ceb /]# yum -y install vsftpd            # install an FTP service in the container
[root@794df7dc4ceb /]# systemctl start vsftpd           # start the service after installation completes
[root@794df7dc4ceb /]# exit                             # exit the container
(2) create a new image using the "docker commit" command.
[root@localhost ~]# docker commit -m "vsftpd" -a "xxf" centos lzj:ftp
sha256:ccba2c39b90a56373139196c3dc079b6df5c7f4f280bc35a7f1abf578962b52
# generate a new image named lzj:ftp from the container just modified
(3) after the creation is completed, check whether the local image already has a newly generated image.
[root@localhost ~]# docker images
REPOSITORY          TAG      IMAGE ID       CREATED          SIZE
lzj                 ftp      ccba2c39b90a   57 seconds ago   279 MB
docker.io/centos    latest   75835a67d134   13 months ago    200 MB

2. Create based on a local template
An image can also be generated by importing an operating-system template file. Such templates can be downloaded from the OpenVZ open-source project at https://wiki.openvz.org/Download/template/precreated.
In fact, this just uses "docker load < filename" to import a file as an image, so it will not be covered in more detail here!

3. Create based on a Dockerfile

Besides generating a Docker image by hand, you can build one automatically with a Dockerfile. A Dockerfile is a file made up of a set of instructions, each corresponding to a Linux command; the Docker engine reads the instructions in the Dockerfile and builds the specified image.

A Dockerfile has roughly four parts: base image information, maintainer information, image build instructions, and the command executed when the container starts. Each line holds one instruction, an instruction can take several arguments, and comments starting with "#" are supported.

A simple example:

[root@localhost ~]# vim Dockerfile
FROM centos                        # the first line must name the base image
MAINTAINER The CentOS Project      # maintainer information for the image
RUN yum -y update
RUN yum -y install openssh-server
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
# image build instructions
EXPOSE 22                          # open port 22
CMD ["/usr/sbin/sshd", "-D"]       # command executed when the container starts

A Dockerfile follows a strict format: the first line must use the FROM instruction to name the base image; a MAINTAINER instruction then records who maintains the image; next come the build instructions, such as RUN, each of which adds a new layer on top of the base image; finally, the CMD instruction specifies the command to run when the container starts.

A Dockerfile offers a dozen or so instructions for building images. The common ones include FROM (base image), MAINTAINER (maintainer information), RUN (execute a command and commit a new layer), CMD (default command at container start), EXPOSE (declare a port), ENV (set an environment variable), ADD/COPY (copy files into the image), VOLUME (declare a mount point), USER, and WORKDIR.

Case study:

1. Create a working directory

[root@localhost ~]# mkdir apache
[root@localhost ~]# cd apache/

2. Create and edit the Dockerfile

[root@localhost apache]# vim Dockerfile
FROM docker.io/centos
MAINTAINER The CentOS Projects
RUN yum -y update
RUN yum -y install httpd
EXPOSE 80
ADD index.html /var/www/html/index.html
ADD run.sh /run.sh
RUN chmod 775 /run.sh
RUN systemctl disable httpd
CMD ["/run.sh"]

This Dockerfile is based on the centos image, so make sure that base image has been fetched first with "docker pull docker.io/centos"; only then will the later container runs work.

3. Write the startup script

[root@localhost apache]# vim run.sh
#!/bin/bash
rm -rf /run/httpd/*                       # clean up httpd's cached runtime files
exec /usr/sbin/apachectl -D FOREGROUND    # start the apache service
# when a container starts, its process or script must run in the foreground

4. Create a test page

[root@localhost apache]# echo "welcome" > index.html    # create the home page file
[root@localhost apache]# ls
Dockerfile  index.html  run.sh
# these three files should be in the same directory
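Before running docker build, a quick sanity check helps catch the two most common mistakes: a missing file in the build context, or a CMD script that does not keep its service in the foreground. The sketch below is purely illustrative; it recreates minimal stand-ins for the Dockerfile and run.sh from the steps above in a throwaway directory (the paths and contents are assumptions, not the real build context):

```shell
#!/bin/sh
# Illustrative pre-build sanity check; the demo directory and file contents
# are minimal stand-ins for the apache/ build context described above.
set -e
demo=$(mktemp -d)
cd "$demo"
printf 'FROM docker.io/centos\nEXPOSE 80\nCMD ["/run.sh"]\n' > Dockerfile
printf '#!/bin/bash\nexec /usr/sbin/apachectl -D FOREGROUND\n' > run.sh
echo "welcome" > index.html
# the Dockerfile must begin with FROM, and run.sh must exec its service in
# the foreground, or the container will exit as soon as it starts
head -n 1 Dockerfile | grep -q '^FROM ' && echo "FROM is first"
grep -q '^exec .*FOREGROUND' run.sh && echo "service runs in foreground"
ls Dockerfile run.sh index.html > /dev/null && echo "all three files present"
```

If any of the checks fail, fix the build context before invoking docker build.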
5. Use Dockerfile to generate an image
After the Dockerfile and its related files are written, you can use the "docker build" command to create the image.
Command format:
docker build [options] <path>
# common option: -t specifies the tag of the image

[root@localhost apache]# docker build -t httpd:centos .
# automatically generate an image from the Dockerfile just written;
# the trailing "." is required and represents the current path
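Since the trailing context path is easy to forget, here is a tiny illustrative helper (hypothetical, not part of Docker) that assembles the build command and refuses to proceed without a context path:

```shell
#!/bin/sh
# Hypothetical wrapper: docker build always needs "tag + context path".
build_cmd() {
    tag=$1
    context=$2
    if [ -z "$context" ]; then
        echo "error: build context path required (e.g. \".\")" >&2
        return 1
    fi
    echo "docker build -t ${tag} ${context}"
}
build_cmd httpd:centos .          # prints the command used in the example above
build_cmd httpd:centos || true    # a missing context path is rejected
```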
6. Run the container with a new image
[root@localhost ~]# docker run -d -p 12345:80 httpd:centos
ee9adf324443b006ead23f2d9c71f86d1a4eb73358fb684ee3a2d058a0ac4243
# load the newly generated image into a container and run it
# the "-p" option maps local port 12345 to port 80 in the container
[root@localhost ~]# docker ps -a    # view the state of the running container
CONTAINER ID   IMAGE          COMMAND     CREATED              STATUS              PORTS                   NAMES
ee9adf324443   httpd:centos   "/run.sh"   About a minute ago   Up About a minute   0.0.0.0:12345->80/tcp   admiring_bardeen
Verify the access results:
7. Upload the image to the warehouse
As the number of images you create grows, you need somewhere to store them: a repository. There are currently two kinds, public and private. The public repository is the most convenient for uploading and downloading: downloading an image requires no registration, though uploading does. The following shows how to build a private repository instead.
The methods are as follows:
(1) download the registry image on the server where the private repository is built
[root@localhost ~] # docker pull registry
(2) modify the configuration file to specify the URL of the private repository, otherwise pushing images to the custom private repository will report an error; restart the Docker service after the change.
[root@localhost ~]# vim /etc/sysconfig/docker
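The exact contents of /etc/sysconfig/docker depend on the distribution's docker package. On CentOS 7 the sysconfig file commonly supports variables along the following lines; this is a sketch only, assuming the registry address 192.168.1.1:5000 used later in this article, so verify the variable names against your own package before relying on them:

```shell
# /etc/sysconfig/docker -- sketch only; variable names vary by docker package
ADD_REGISTRY='--add-registry 192.168.1.1:5000'
INSECURE_REGISTRY='--insecure-registry 192.168.1.1:5000'
```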
[root@localhost ~]# systemctl restart docker    # restart the docker service
(3) launch a container from the downloaded registry image. By default, the repository data is stored in the /tmp/registry directory inside the container. Use the "-v" option to mount a local directory onto /tmp/registry in the container, so that the images are not lost when the container is deleted.
[root@localhost ~]# docker run -d -p 5000:5000 -v /data/registry:/tmp/registry registry
d5ecd77aa852df0c67935888009116325025cab49b7a4807196d251ce35a2b3b
# the local directory does not need to be created in advance
(4) use the "docker tag" command to tag the image docker.io/registry to be uploaded as 192.168.1.1:5000/registry.

[root@localhost ~]# docker tag docker.io/registry 192.168.1.1:5000/registry
[root@localhost ~]# docker push 192.168.1.1:5000/registry
# upload the local image to the private repository server
[root@localhost registry]# curl -XGET http://192.168.1.1:5000/v2/_catalog
{"repositories":["registry"]}
# list the repositories held by the registry
[root@localhost registry]# curl -XGET http://192.168.1.1:5000/v2/registry/tags/list
{"name":"registry","tags":["latest"]}
# get the tag list of the image
[root@localhost registry]# docker rmi -f 192.168.1.1:5000/registry
# delete the local image to test the pull
[root@localhost registry]# docker pull 192.168.1.1:5000/registry
# pull the image back from the local private repository
[root@localhost registry]# docker images | grep 192.168.1.1:5000/registry
192.168.1.1:5000/registry   latest   f32a97de94e1   8 months ago   25.8 MB
# verify locally
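The two curl calls above use the Docker Registry v2 HTTP API (/v2/_catalog and /v2/<name>/tags/list). A minimal sketch of checking a catalog response for a repository name, using a canned response string in place of a live registry:

```shell
#!/bin/sh
# Canned response standing in for: curl -XGET http://192.168.1.1:5000/v2/_catalog
response='{"repositories":["registry"]}'
repo="registry"
# crude containment check; a real script would use a JSON parser such as jq
case "$response" in
    *"\"$repo\""*) found=yes ;;
    *)             found=no  ;;
esac
echo "repository $repo found: $found"
```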
The private repository has now been built and verified successfully.
II. Docker data management
In Docker, container data management comes up whenever you want to inspect the data a container generates or share data among several containers. There are two main approaches to managing data in Docker containers: data volumes and data volume containers.
1. Data volume
A data volume is a special directory for the container. It lives inside the container, and a host directory can be mounted onto it; changes made to the data volume are immediately visible, and updating the data does not affect the image, which makes data migration between the host and the container possible. Using a data volume is similar to a mount operation on a directory under Linux: if the host's local /data directory has /dev/sdb1 mounted on it, and /data is then used in a data-volume mapping, the mapped directory inside the container is also backed by /dev/sdb1.
Mount the host directory as an example of a data volume:
Use the-v option to create a data volume (just create a directory when you run the container). When creating a data volume, mount the host's directory to the data volume to achieve data migration between the host and the container.
It should be noted that the path to the local directory of the host must be an absolute path. If the path does not exist, Docker will automatically create the corresponding path.
[root@localhost ~]# docker run -d -p 5000:5000 -v /data/registry/:/tmp/registry docker.io/registry
# this runs the private-repository container; -p is the port-mapping option, not explained here
# -v is the directory mapping: it maps the local /data/registry/ directory to /tmp/registry in the container
# the contents of /tmp/registry in the container are then the same as /data/registry/ on the host
[root@localhost ~]# df -hT /data/registry/
# first check which filesystem the local /data/registry/ is mounted on
Filesystem         Type             Size   Used   Avail   Use%   Mounted on
node4:dis-stripe   fuse.glusterfs   80G    130M   80G     1%     /data/registry
[root@localhost ~]# docker exec -it a6bf726c612b /bin/sh
# enter the private-repository container; it has no /bin/bash, so /bin/sh is used
/ # df -hT /tmp/registry/
# the directory turns out to be backed by the same filesystem as on the host, so all is well
Filesystem         Type             Size    Used     Available   Use%   Mounted on
node4:dis-stripe   fuse.glusterfs   80.0G   129.4M   79.8G       0%     /tmp/registry
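One detail worth encoding: as noted earlier, the host side of a -v mapping must be an absolute path. A hypothetical helper (not part of Docker) that builds the -v flag and rejects relative host paths:

```shell
#!/bin/sh
# Hypothetical helper around -v; Docker requires the host path to be absolute.
volume_flag() {
    host_dir=$1
    container_dir=$2
    case "$host_dir" in
        /*) echo "-v ${host_dir}:${container_dir}" ;;
        *)  echo "error: host path must be absolute" >&2; return 1 ;;
    esac
}
volume_flag /data/registry/ /tmp/registry          # the mapping used above
volume_flag data/registry /tmp/registry || true    # a relative path is rejected
```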
2. Data volume container
If containers need to share data, the easiest way is a data volume container: an ordinary container that provides data volumes for other containers to mount. To use one, first create a container to act as the data volume container, then use the --volumes-from option when creating other containers to mount its data volumes.
Examples of creating and using container volumes:
[root@localhost ~]# docker run -itd --name datasrv -v /data1 -v /data2 docker.io/sameersbn/bind /bin/bash
# create and run a container named datasrv with two data volumes, data1 and data2
d9e578db8355da35637d2cf9b0a3406a647fe8e70b2df6172ab41818474aab08
[root@localhost ~]# docker exec -it datasrv /bin/bash    # enter the new container
root@d9e578db8355:/# ls | grep data    # check that the corresponding data volumes exist
data1
data2
[root@localhost ~]# docker run -itd --volumes-from datasrv --name ftpsrv docker.io/fauria/vsftpd /bin/bash
# run a container named ftpsrv, using --volumes-from to mount datasrv's data volumes into it
eb84fa6e85a51779b652e0058844987c5974cf2a66d1772bdc05bde30f8a254f
[root@localhost ~]# docker exec -it ftpsrv /bin/bash    # enter the newly created container
[root@eb84fa6e85a5 /]# ls | grep data    # check that the new container sees the volumes provided by datasrv
data1
data2
[root@eb84fa6e85a5 /]# echo "data volumes test" > /data1/test.txt
# write a test file into the data1 directory from the ftpsrv container
[root@eb84fa6e85a5 /]# exit    # leave the container
[root@localhost ~]# docker exec -it datasrv /bin/bash    # enter datasrv, the container providing the volumes
root@d9e578db8355:/# cat /data1/test.txt    # the file just created in the ftpsrv container is visible here
data volumes test
Note that in a production environment the key concerns are storage reliability and dynamic scalability, and data volumes must be designed with both in mind. The GFS file system stands out in this respect; the configuration above is only a simple one. In production you could, for example, mount a GFS filesystem on the host and then, when creating the data volume container, map the GFS-mounted directory to the volume in the container; that would make a properly qualified data volume container.
III. Docker network communication
1. Port mapping
Docker provides a mechanism for mapping container ports to hosts and container interconnection to provide network services for containers.
When starting a container, if the corresponding port is not specified, the services in the container cannot be reached from the network outside it. Docker's port-mapping mechanism exposes services in the container to external access: it maps a host port to a port in the container, so that reaching the host port reaches the service inside.
To map ports, use the -P (uppercase) option with the docker run command for random mapping: Docker generally picks a host port at random from a fixed high range and maps it to the container's exposed port, though this is not absolute and there are exceptions that fall outside that range. Alternatively, and more commonly, use the -p (lowercase) option with docker run to specify exactly which ports to map.
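The choice between the two options can be sketched as a tiny helper (illustrative only, not part of Docker): with no host port given, fall back to -P random mapping; otherwise emit a fixed -p host:container mapping:

```shell
#!/bin/sh
# Illustrative: pick the docker run publish flag.
publish_flag() {
    host_port=$1
    container_port=$2
    if [ -z "$host_port" ]; then
        echo "-P"                                  # random host port
    else
        echo "-p ${host_port}:${container_port}"   # fixed host:container mapping
    fi
}
publish_flag "" 80       # random mapping
publish_flag 12345 80    # the fixed mapping used earlier in this article
```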
Example of port mapping:
[root@localhost ~]# docker run -d -P docker.io/sameersbn/bind    # random mapping
9b4b7c464900df3b766cbc9227b21a3cad7d2816452c180b08eac4f473f88835
[root@localhost ~]# docker run -itd -p 68:67 docker.io/networkboot/dhcpd /bin/bash
# map port 67 in the container to port 68 on the host
6f9f8125bcb22335dcdb768bbf378634752b5766504e0138333a6ef5c57b7047
[root@localhost ~]# docker ps -a    # check: no problem
CONTAINER ID   IMAGE                         COMMAND                  CREATED         STATUS         PORTS                                                                    NAMES
6f9f8125bcb2   docker.io/networkboot/dhcpd   "/entrypoint.sh /b..."   2 seconds ago   Up 1 second    0.0.0.0:68->67/tcp                                                       keen_brattain
9b4b7c464900   docker.io/sameersbn/bind      "/sbin/entrypoint...."   4 minutes ago   Up 4 minutes   0.0.0.0:32768->53/udp, 0.0.0.0:32769->53/tcp, 0.0.0.0:32768->10000/tcp   coc_gates
# at this point, accessing port 68 on the host is equivalent to accessing port 67 of the
# first container, and accessing port 32768 on the host reaches port 53 of the second
2. Container interconnection
Container interconnection is realized by establishing a special network communication tunnel between containers through the name of containers. To put it simply, a tunnel is established between the source container and the receiving container, and the receiving container can see the information specified by the source container.
When running the docker run command, use the-- link option to achieve interconnected communication between containers in the following format:
--link name:alias    # name is the name of the container to connect to; alias is the alias for this link
Container interconnection works through container names. The --name option gives a container a friendly name, which must be unique; if you have already used a name for another container and want to use it again, first delete the previously created container of that name with the docker rm command.
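Under the hood, the legacy --link option injects the source container's address into the receiving container's /etc/hosts, which is why the source can be reached by name. A sketch simulating that lookup with a throwaway hosts file (the 172.17.0.2 address is illustrative):

```shell
#!/bin/sh
# Simulate the /etc/hosts entry that --link web1:web1 adds in the receiving container.
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n172.17.0.2 web1\n' > "$hosts_file"
# resolving the alias is then just a scan of /etc/hosts
resolved=$(awk '$2 == "web1" {print $1}' "$hosts_file")
echo "web1 -> ${resolved}"
rm -f "$hosts_file"
```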
Examples of container interconnection:
[root@localhost ~]# docker run -tid -P --name web1 docker.io/httpd /bin/bash    # run container web1
c88f7340f0c12b9f5228ec38793e24a6900084e58ea4690e8a847da2cdfe0b
[root@localhost ~]# docker run -tid -P --name web2 --link web1:web1 docker.io/httpd /bin/bash
# run container web2 and link it to web1
c7debd7809257c6375412d54fe45893241d2973b7af1da75ba9f7eebcfd4d652
[root@localhost ~]# docker exec -it web2 /bin/bash    # enter the web2 container
root@c7debd780925:/usr/local/apache2# cd
root@c7debd780925:~# ping web1    # ping web1 to test
bash: ping: command not found     # sorry, no ping command; install one
root@c7debd780925:~# apt-get update                 # update the package lists
root@c7debd780925:~# apt install iputils-ping       # install the ping command
root@c7debd780925:~# apt install net-tools          # installs ifconfig; optional, just noted here
root@c7debd780925:~# ping web1    # then ping web1 again
PING web1 (172.17.0.2) 56(84) bytes of data.
64 bytes from web1 (172.17.0.2): icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from web1 (172.17.0.2): icmp_seq=2 ttl=64 time=0.114 ms
...    # part omitted
# ping works, so the two containers are interconnected

# if a new container web3 needs to interconnect with both web1 and web2, the command is:
[root@localhost ~]# docker run -dit -P --name web3 --link web1:web1 --link web2:web2 docker.io/httpd /bin/bash
# link web1 and web2 when running the container
# next, enter web3:
[root@localhost ~]# docker exec -it web3 /bin/bash
root@433d5be6232c:/usr/local/apache2# cd
# install the ping command as before:
root@433d5be6232c:~# apt-get update
root@433d5be6232c:~# apt install iputils-ping
# then ping web1 and web2 in turn:
root@433d5be6232c:~# ping web1
PING web1 (172.17.0.2) 56(84) bytes of data.
64 bytes from web1 (172.17.0.2): icmp_seq=1 ttl=64 time=0.102 ms
64 bytes from web1 (172.17.0.2): icmp_seq=2 ttl=64 time=0.112 ms
...    # part omitted
root@433d5be6232c:~# ping web2
PING web2 (172.17.0.3) 56(84) bytes of data.
64 bytes from web2 (172.17.0.3): icmp_seq=1 ttl=64 time=0.165 ms
64 bytes from web2 (172.17.0.3): icmp_seq=2 ttl=64 time=0.115 ms
...    # part omitted
# OK, no problem
-this is the end of this article. Thank you for reading-
© 2024 shulou.com SLNews company. All rights reserved.