This article introduces how Docker containers connect to the network and share data: exposing container ports, creating and sharing data volumes, and connecting containers to one another. These are situations that come up constantly in real-world use, so read on and you should come away with something practical.
1. Container network foundation
A container runs inside a host, so we need a way to make it reachable from the outside network in order to use the services it provides. When Docker starts, it creates a virtual network interface called docker0 on the host. You can view it with the ifconfig command on Linux or ipconfig on Windows.
1.1 expose network port
When a networked application runs in Docker, you need to specify a port mapping with the -P or -p parameter so that it can be reached from outside. Exposing ports through port mapping is the basic way for a container to provide services.
-P (uppercase): Docker randomly assigns an unused high port on the host (from the host's ephemeral port range; in the example below it is 32768) and maps it to the port the container exposes (that is, the port configured with EXPOSE).
Here is an example using the official training image:
docker run -d -P training/webapp python app.py
Use the following command to see which port was assigned:
docker ps
In the output, 32768 is the randomly assigned host port (it may differ each time you run the container), and 5000 is the port exposed by the container. The service can then be accessed at http://localhost:32768.
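You can also query the mapping of a single container with the docker port command. A minimal sketch, assuming the container started above was given the hypothetical name web (with --name web):
docker port web 5000 // prints the host binding for the container's port 5000, e.g. 0.0.0.0:32768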
-p (lowercase): lets you specify which host port is mapped to which exposed container port. The supported formats are as follows:
ip:hostPort:containerPort // binds the given ip and host port to the container's exposed port
docker run -d -p 192.168.0.1:8000:5000 training/webapp python app.py
ip::containerPort // binds a random port of the given ip to the container's exposed port
docker run -d -p 192.168.0.1::5000 training/webapp python app.py // the host port is omitted, so the two colons are written back to back
hostPort:containerPort // binds the given port on all of the host's network interfaces (so the service can be reached via localhost, a LAN ip, the hostname, and so on, plus the port)
docker run -d -p 8000:5000 training/webapp python app.py
All configuration information for a container can be viewed with the following command:
docker inspect <container ID or name>

2. Data volumes
Data in Docker can be stored in media similar to virtual machine disks, called data volumes (Data Volume) in Docker. Data volumes can be used to store data from Docker applications and to share data among Docker containers.
A data volume is presented to the container as a directory; it can be shared among multiple containers, and modifications to it do not affect the image. Using a Docker data volume is similar to mounting a file system with mount on an ordinary system.
A data volume is a special directory that can be used by one or more containers for the following purposes.
1) Bypass the copy-on-write system to get local-disk I/O performance. (For example, when a running container modifies the contents of a data volume, the change is written directly to the volume on the host, so you get local-disk I/O performance rather than first writing a copy inside the container and later copying the modified content back out for synchronization.)
2) Bypass the copy-on-write system so that certain files do not need to be packaged into the image by docker commit.
3) Data volumes can be shared and reused among containers.
4) Data volumes can share data between the host and containers (even a single file or socket).
5) Changes to a data volume are made directly and take effect immediately.
6) Data volumes persist until no container uses them. Even if the original data volume container or an intermediate data volume container is deleted, the data is not lost as long as some other container still uses the volume.
2.1 create and mount data volumes
There are two ways to create data volumes, as follows:
1. In a Dockerfile, use the VOLUME instruction, for example VOLUME /var/lib/mysql
2. When using docker run on the command line, use the -v argument to create a data volume and mount it into the container:
docker run -d -P -v /webapp training/webapp python app.py
The command above only defines a data volume at /webapp and does not specify a host directory; Docker automatically assigns the volume a uniquely named directory on the host.
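To find out which host directory was assigned, you can inspect the container's mount information. A minimal sketch, assuming the container above was started with the hypothetical name webapp:
docker inspect -f '{{ json .Mounts }}' webapp // the Source field in the output points to the volume's directory on the host, typically under /var/lib/docker/volumes/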
1) A data volume is a specially designated directory that bypasses the container's union file system (UFS) to provide persistent data or data sharing for the container. Data volumes can be shared among multiple containers.
2) To create a data volume, simply add the -v parameter to the docker run command; you can create multiple data volumes by passing -v several times. Once a container with a data volume has been created, the volume can be mounted into other containers with the --volumes-from parameter, regardless of whether that container is running. You can also add one or more data volumes with the VOLUME instruction in a Dockerfile.
3) If you have data that you want to share among multiple containers, or that you want to use from temporary containers, the best solution is to create a data volume container and mount its data from those temporary containers. This way, even if the first data volume container or an intermediate data volume container is deleted, the data volume is not deleted as long as other containers are still using it.
4) You cannot use commands such as docker export, docker save, or docker cp to back up the contents of a data volume, because the volume lives outside the image. To back it up, create a new container that mounts both the data volume container and a local directory, then copy the volume's contents into the mapped local directory with a backup command, as follows:
docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
5) A directory on the local host can also be mounted into the container as a data volume, again via the -v parameter of docker run, but now -v is followed not by a single directory but by the format [host-dir]:[container-dir]:[rw|ro].
host-dir must be an absolute path. If host-dir is omitted, Docker creates a new data volume; if host-dir is specified but the directory does not exist, Docker creates the directory and then uses it as the data source.
docker run -d -P --name webapp -v `pwd`:/webapp training/webapp python app.py // use pwd to get the current absolute path
Note: if the /webapp directory already exists inside the container, its contents will be hidden (overridden) by the mounted host directory. (But you don't usually write important data directly inside the container, do you?)
Note that a Dockerfile cannot mount a local directory as a data volume; that is, the VOLUME instruction in a Dockerfile cannot refer to a host directory, mainly because directory paths differ from operating system to operating system. To keep Dockerfiles portable this is not supported, and host directories can only be mounted with the -v parameter.
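As an illustration, here is a minimal Dockerfile sketch (the base image is just an example) showing that VOLUME only ever names a path inside the container:
FROM ubuntu:14.04
# valid: VOLUME names a path inside the container
VOLUME /var/lib/mysql
# not possible: VOLUME /host/dir:/var/lib/mysql (host directories cannot appear in a Dockerfile)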
When mounting a directory you can also specify its access permission; the default is rw (read-write). The format is as follows:
docker run -d -P -v `pwd`:/webapp:ro training/webapp python app.py
docker run -d -P -v `pwd`:/webapp:rw training/webapp python app.py
In addition to mounting a host directory, you can also mount a single host file as a data volume:
docker run --rm -it -v d:/test.txt:/test.txt ubuntu:latest /bin/bash // the mounted file must already exist, otherwise Docker will create a directory with the same name; for example, a test.txt directory would be created if the test.txt file above did not exist.
2.2 data volume containers
A so-called data volume container is a container dedicated to hosting data volumes for other containers to reference and use. It is mainly used when multiple containers need to get data from the same place. In practice you give the data container a name; once it has a definite name, other containers that depend on it can reference its data volumes through --volumes-from.
First, create a data volume container called test_dbdata, giving it a new data volume at /dbdata:
docker run -d -v /dbdata --name test_dbdata training/postgres
Next, create a container db1 that references the data volume of test_dbdata, as follows:
docker run -d --volumes-from=test_dbdata --name db1 training/postgres
You can view the mount information of either container with docker inspect test_dbdata or docker inspect db1.
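For example, a minimal sketch that prints only the mount information of both containers (one of several possible inspect templates):
docker inspect -f '{{ json .Mounts }}' test_dbdata
docker inspect -f '{{ json .Mounts }}' db1 // both should report the same volume name and Source directory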
From the inspect output you can see that their data volumes are the same. Note that once a data volume has been declared, its life cycle is independent of the container that declared it: even when that container stops, the data volume continues to exist until every container that references it has been deleted and the volume is explicitly removed. Also, a container that references a data volume container does not require that container to be running, and one data volume container can be referenced by multiple containers.
docker run -d --volumes-from=test_dbdata --name db2 training/postgres
In addition, data volume containers can be referenced in a cascading fashion. For example, create a new container db3 that references the data volume of db1:
docker run -d --volumes-from=db1 --name db3 training/postgres
Whether a container declares a data volume or merely references it, stopping or deleting the container does not delete the data volume itself. To delete a data volume, you must delete all containers that depend on it and add the -v flag when removing the last dependent container. Here, if test_dbdata, db1, and db2 have already been deleted, you can delete the data volume by adding the -v parameter when deleting db3:
docker rm -v db3

2.3 backup and recovery of data
Using data volume containers, we can also back up and restore data.
1. Backup and restore
With the help of another container, we can back up the data held in a data volume container.
First, create a data volume container, and do the following:
docker run -d -v /dbdata --name dbdata training/postgres
In this way, data is saved in the /dbdata directory of the container. Next, back up that /dbdata directory to the local machine as follows:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
Here a new container is created that references the dbdata data volume container, and the host's current directory is mounted at /backup inside it, so that the current directory and /backup are the same place. The tar command then writes the volume's data into /backup, which is to say into the current directory on the host.
Recovery works the other way round: use tar xvf to extract the archive produced by tar cvf back into the data volume.
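A minimal restore sketch, assuming the backup.tar from above sits in the current directory and that we restore into a fresh data volume container named dbdata2 (both names are examples):
docker run -d -v /dbdata --name dbdata2 training/postgres // a new data volume container to restore into
docker run --rm --volumes-from dbdata2 -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar // unpacks the archive back into /dbdata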
Note: on Windows, if you run the command from the git-bash command line and pwd does not resolve to a usable path, you can write the absolute path directly in the form /d/docker_data, where /d stands for the D: drive; in other words, the original d:/ is replaced by /d/.
For a very detailed description of data volumes and of backup and recovery, see the article "Docker Container Learning Notes - Using Volume Data Volumes".

2.4 container connections
Previously we used -P or -p to expose container ports to the outside world. There is another way for containers to provide services: container connections (links). A container connection involves a source container and a target container: the source container is the side that provides the specified service; once the target container is connected to it, it can use that service. Container connections rely on container names, so to use one you first name the source container and then connect to it with the --link parameter.
The connection format is --link name:alias, where name is the name of the source container and alias is the alias used for the connection. Here is an example:
docker run -d --name dbdata training/postgres
The command above first creates a database container; it is then connected to with the following command:
docker run -d -P --name web --link dbdata:db training/webapp python app.py
Connection information can be viewed through docker inspect web.
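For instance, a minimal sketch that extracts only the link information (field names follow docker inspect's JSON output):
docker inspect -f '{{ json .HostConfig.Links }}' web // expected to show something like ["/dbdata:/web/db"]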
In this way, the dbdata container provides services to the web container without using -P or -p to expose any port, which makes the source container dbdata more secure. So how does the web container actually use dbdata's services?
Docker provides the target container with the following two ways to expose the services provided by the connection:
Environment variable
/etc/hosts file
Each of them is described below.
1 environment variable
When two containers are connected, Docker sets relevant environment variables in the target container so that it can use the services provided by the source container. The connection environment variable is named <ALIAS>_NAME, where alias is the alias given in the --link parameter. For example, if the web container connects to the dbdata container with --link dbdata:webdb, then the web container has an environment variable WEBDB_NAME=/web/webdb.
In general, you can use the env command to view the environment variables of a container; the related command is:
docker run --rm --name web --link dbdata:webdb training/webapp env
The output includes a set of environment variables prefixed with the webdb alias.
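For illustration, a sketch of the kind of variables that Docker's container links create for the webdb alias; the exact values (the container ip and the postgres port 5432) are assumptions and will differ on your machine:
WEBDB_NAME=/web/webdb
WEBDB_PORT=tcp://172.17.0.5:5432
WEBDB_PORT_5432_TCP=tcp://172.17.0.5:5432
WEBDB_PORT_5432_TCP_ADDR=172.17.0.5
WEBDB_PORT_5432_TCP_PORT=5432
WEBDB_PORT_5432_TCP_PROTO=tcp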
2 /etc/hosts file
View the /etc/hosts configuration file of the target container as follows:
docker run -i -t --rm --name web2 --link dbdata:webdb training/webapp /bin/bash
In /etc/hosts you can see an entry mapping webdb to an ip address, and that ip address is the address of the dbdata container. Any access the container makes to the webdb connection is therefore directed to that address.
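For illustration, the webdb entry might look like the sketch below; the ip address is an assumption and will differ on your machine:
cat /etc/hosts // run inside the web2 container; among its lines you should see an entry such as
172.17.0.5 webdb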
2.5 proxy connection
The container connections discussed above are all on the same host. To connect containers across hosts, you can use the ambassador pattern, which is why this approach is called a proxy connection.
That concludes this introduction to Docker networking and data volumes. Thank you for reading; if you would like to learn more, keep following the site for more practical articles.