

Docker network

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Container Virtualization Network Foundation

After docker is installed, three networks are provided automatically:

bridge: bridge network, a NAT bridge
host: shares the host's network interfaces
none: the container has only the lo interface and no other network card

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9ac63d7fc6e8        bridge              bridge              local
b46032ae4b5f        host                host                local
60f69f2c7987        none                null                local
$

View the network information of the container

Use the inspect command to view the underlying information of the docker object. For example, you can view the information of the container. The information in the network section is as follows:

"Networks": {
    "bridge": {
        "IPAMConfig": null,
        "Links": null,
        "Aliases": null,
        "NetworkID": "9ac63d7fc6e871eb47e39f9ec4e3fda6a23cb95a906a9ddc6431ed716e000fa1",
        "EndpointID": "c8b64271d3d7ea5a0a8357c51fa5c80d398dbd07ad7e920792ebbaab628cb00d",
        "Gateway": "172.17.0.1",
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "MacAddress": "02:42:ac:11:00:02",
        "DriverOpts": null
    }
}

Here you can check the IP address of the NIC inside the container.

Shared network namespace

Each container has its own set of 6 independent namespaces: Mount, PID, User, UTS, IPC, and Network.

There is also a scheme in which each container has only its own Mount, PID, and User namespaces, while the other three (UTS, IPC, and Network) are shared. Each container keeps its own isolated namespaces but can share some of them. Generally only the ones related to network communication are shared: hostname (UTS), inter-process communication (IPC), and network (Network).

This brings a convenience: different containers now share the same network interfaces and use the same network. The lo interface inside each container is the same lo interface. As soon as such a container sends a request to the local lo interface (127.0.0.1), the other containers sharing the network namespace can also receive it.
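As a plain-Python analogy (no Docker involved), the effect of a shared network namespace can be sketched with two threads standing in for two containers: both see the same loopback interface, so one can serve on 127.0.0.1 and the other can connect to it.

```python
# Sketch: two "containers" sharing one network namespace can reach each
# other over the loopback interface. A thread plays the server container,
# the main thread plays the client container.
import socket
import threading

def serve_once(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"hello from the shared lo interface")

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # any free port on loopback
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# The "client container" talks to 127.0.0.1 and reaches the "server container".
with socket.create_connection(("127.0.0.1", port)) as cli:
    reply = cli.recv(1024)
t.join()
srv.close()
print(reply.decode())
```

With isolated Network namespaces each container would have its own 127.0.0.1 and this connection would fail; sharing the namespace is what makes it work.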

Share the namespace directly with the host

This is still the shared namespace described above, except that the namespace is shared directly with the host. For a container that shares the host's namespace, the interfaces inside the container are the host's network interfaces, and a modification of the network by the container is a modification of the host's network. Such a container has the privilege of managing the network.

This is the host type in the network type, which allows the container to use the network namespace of the host.

Four types of container network

There are four network models in Docker.

Closed, bridged, federated (joined), and open.

Bridged containers

Bridged containers typically have two interfaces: a loopback interface and an Ethernet interface connected to a bridge device on the host.

When docker starts, a network bridge named docker0 is created by default, and containers created are bridged containers by default, with an Ethernet interface bridged to docker0.

The docker0 bridge is a NAT bridge, so bridged containers can access the external network through this bridge interface.

Closed containers

Closed containers do not participate in network communication; processes running in them can only access the local loopback interface.

It is only suitable for scenarios where processes do not need network communication, such as backup, process diagnostics, and various offline tasks.

Set up the container network

When creating a container (run or create), you can use parameters to set the network of the container.

Network parameter

Use the --network parameter to specify the network mode of the container. The default value is default, which is the bridge mode.

Bridge mode

Start the container normally without using any network parameters:

$ docker container run --name b1 --rm -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1032 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ #

This starts the container in the default network mode. Starting the container with the parameter --network bridge has the same effect.

None mode

Use --network none to start a container without any network:

$ docker container run --name b1 --rm -it --network none busybox
/ # ifconfig -a
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ #

As shown here, there is only one lo interface and no other network interfaces.

Other network models

The rest are host mode and federated network, which need to be explained separately.

Hostname and domain name

When providing services on the network, the IP address is generally not provided directly. Instead, it provides the host name or domain name, which is not only convenient to remember, but also has some other conveniences. So the hostname of the Docker container is also an important network attribute.

Hostname

The host name of the container is its ID:

/ # hostname
6fb514e1fa3b
/ #

You can use the-h parameter to set the hostname of the container when you start the container:

$ docker container run --name b1 --rm -it -h b1.busybox busybox
/ # hostname
b1.busybox
/ #

DNS server

First, take a look at the default DNS server, which is the address of the gateway:

/ # cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.1.1
/ # nslookup -type=a baidu.com
Server:         192.168.1.1
Address:        192.168.1.1:53

Non-authoritative answer:
Name:   baidu.com
Address: 220.181.38.148
Name:   baidu.com
Address: 123.125.114.144
/ #

Use the --dns parameter to specify the DNS server when starting the container, then check the container's DNS settings:

$ docker container run --name b1 --rm -it -h b1.busybox --dns 223.5.5.5 busybox
/ # cat /etc/resolv.conf
nameserver 223.5.5.5
/ #

Search domain

Another parameter is --dns-search, which specifies the search domain. The search domain is the suffix automatically appended when a given name is not in FQDN format.

$ docker container run --name b1 --rm -it -h b1.busybox --dns 223.5.5.5 --dns-search baidu.com busybox
/ # cat /etc/resolv.conf
search baidu.com
nameserver 223.5.5.5
/ # nslookup -type=a www
Server:         223.5.5.5
Address:        223.5.5.5:53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com
Name:   www.a.shifen.com
Address: 39.156.66.14
Name:   www.a.shifen.com
Address: 39.156.66.18
/ #

The name was given without the full domain suffix, but because the search domain is set, it is completed automatically.
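The resolver's completion behavior can be sketched in a few lines. This is a simplification (real resolvers use an ndots threshold rather than a plain "contains a dot" test), and complete_name is a hypothetical helper, not a real API:

```python
# Sketch of search-domain completion: a name that does not look like an
# FQDN (approximated here as "contains no dot") gets the search suffix.
def complete_name(name: str, search: str) -> str:
    return name if "." in name else f"{name}.{search}"

print(complete_name("www", "baidu.com"))            # suffix appended
print(complete_name("www.baidu.com", "baidu.com"))  # left unchanged
```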

Hosts file

In addition to the domain name server, you can also manage the hostname through the local hosts file:

$ docker container run --name b1 --rm -it -h busybox1.idx.net --dns 223.5.5.5 --dns-search idx.net busybox
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      busybox1.idx.net busybox1
/ #

You can see that the hostname is written to the local hosts file by default. Two names are added here: the first is the full hostname, and the second is the hostname with the domain suffix removed.

You can also use parameters to inject information into the hosts file:

$ docker container run --name b1 --rm -it -h busybox1.idx.net --dns 223.5.5.5 --dns-search idx.net --add-host host1.idx.net:192.168.100.1 --add-host host2.idx.net:192.168.100.2 busybox
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.100.1   host1.idx.net
192.168.100.2   host2.idx.net
172.17.0.2      busybox1.idx.net busybox1
/ #

Records are added in the format host:ip. This parameter is a list property; to add multiple records, pass the parameter multiple times.
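The conversion from a --add-host argument to a hosts-file line is simple string handling. The helper below is hypothetical (Docker does this internally); splitting on the last colon keeps the hostname intact even if it contains unusual characters:

```python
# Hypothetical helper: turn a --add-host "host:ip" argument into the
# line Docker appends to the container's /etc/hosts.
def add_host_to_hosts_line(spec: str) -> str:
    host, ip = spec.rsplit(":", 1)   # split on the last colon only
    return f"{ip}\t{host}"

print(add_host_to_hosts_line("host1.idx.net:192.168.100.1"))
print(add_host_to_hosts_line("host2.idx.net:192.168.100.2"))
```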

Port Mapping-p

docker0 is a NAT bridge, so the container gets a private network address.

From a topological point of view, the container is a host behind the host's NAT. If the container needs to provide services to the outside, define DNAT rules for it on the host.

The format of the-p option

-p <containerPort>

Maps the specified container port to a dynamic port on all host addresses.

-p <hostPort>:<containerPort>

Maps the container port containerPort to the host port hostPort.

-p <ip>::<containerPort>

Maps the specified container port containerPort to a dynamic port of the host address ip. There are two colons in the middle: compared with the three-variable form below, the middle variable (hostPort) is omitted.

-p <ip>:<hostPort>:<containerPort>

Maps the specified container port containerPort to the port hostPort of the host address ip.

A dynamic port means a random port. The mapping result can be viewed with the docker port command.
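The four -p forms above differ only in how many colon-separated fields are given. A minimal sketch of how they can be told apart (parse_publish is a hypothetical name, not Docker's own code, and IPv6 addresses are ignored for simplicity):

```python
# Sketch: distinguish the four -p forms by colon count.
# Returns (host_ip, host_port, container_port); None means "not given"
# (dynamic port / all host addresses).
def parse_publish(spec: str):
    parts = spec.split(":")
    if len(parts) == 1:                      # -p <containerPort>
        return (None, None, parts[0])
    if len(parts) == 2:                      # -p <hostPort>:<containerPort>
        return (None, parts[0], parts[1])
    ip, host_port, container_port = parts    # -p <ip>:[<hostPort>]:<containerPort>
    return (ip, host_port or None, container_port)  # empty hostPort -> dynamic

print(parse_publish("80"))                 # container port only
print(parse_publish("8080:80"))            # host port : container port
print(parse_publish("127.0.0.1::80"))      # ip :: container port (dynamic)
print(parse_publish("127.0.0.1:8080:80"))  # ip : host port : container port
```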

-P (uppercase): randomly maps all ports exposed by the container to ports on the host.

The exposed ports are set when the image is created, and you can also use the --expose parameter to specify ports to expose when starting the container. Only when using the -P parameter do you need to declare which ports are exposed, either in the image or with --expose. When using the -p parameter you specify the port to map directly, and it does not matter whether that port was declared as exposed.

You can also specify the protocol to use; the default is tcp. UDP must be specified explicitly:

-p 127.0.0.1:5000:5000/udp

The-p parameter can be used multiple times to bind multiple ports

Test port mapping with the httpd image

Do not use port mapping:

$ docker container run -dit --name app1 --rm httpd:alpine
78e4e42fdd0c33a0410077731d26e418423a27827ab62e83dfe××bca40f671
$ curl http://172.17.0.2
It works!
$ curl http://127.0.0.1
curl: (7) Failed connect to 127.0.0.1:80; Connection refused
$

Because there is no port mapping, the container can access the external network through NAT, but it cannot be accessed by the external network, that is, the container does not expose any interfaces to the public network. The host can access the page through the private network address of the container, but cannot access the page through the interface address of the host, so the public network cannot access the page.

Use the-P parameter:

$ docker container run -dit --name app1 --rm -P httpd:alpine
fde094cc000be912e68b8f6321f38d97c76cbabb54454da11c65e0aabd90dc1e
$ docker container port app1
80/tcp -> 0.0.0.0:32769
$ curl http://127.0.0.1:32769
It works!
$

This time the page can be accessed through the host's loopback interface, and also directly through the host's address in a browser. But the port is random, which is why the port command is needed to see the randomly assigned port number.

Federated and open

Federation is the sharing of network namespaces between containers, while openness is a container sharing the host's network namespace. In principle the two are the same.

Federated network

A federated container (joined container) is a container that uses the network interfaces of an existing container. The interfaces are shared among the containers in the federation.

Federated containers share the UTS, IPC, and Network namespaces with each other, while the Mount, PID, and User namespaces remain isolated.

There is the possibility of port conflicts between federated containers. Therefore, this network mode is usually used only when programs in multiple containers need to communicate with each other over the loopback interface, or when you want to monitor the network properties of an existing container.

When introducing the sharing of network namespaces, we already mentioned the convenience of sharing: multiple containers can use the local loopback interface to communicate with each other.

Start the first container

Using the interactive interface, open a container:

$ docker container run --name b1 --rm -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:516 (516.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ #

Start the second container

The previous terminal is occupied, so open another terminal and start the second container, again interactively. There is one extra parameter, --network container:b1, which creates the federated container:

$ docker container run --name b2 --rm -it --network container:b1 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ #

Looking at the IP address and MAC address of the two containers, you can see that they use the same network interface. The effect is equivalent to two processes on the same host in the traditional model, but with more isolation between the containers than in the traditional model.

Application scenarios of federated containers

A federated container can send requests directly to the local lo interface, and the other containers of the federation can receive them, as if the containers in the federation were two processes running on the same host.

For example, first there is a container in bridge mode that serves static Web pages, and requests for dynamic pages are forwarded to another container for processing.

Now start the second container for dynamic pages. If this container is also in bridge mode, its IP address is obtained dynamically, so the static-page container cannot know which IP address to send requests to. But if the two containers form a federated network, requests can be sent directly to the local lo interface.

Open network

To use an open network, set the --network parameter to host:

[root@Docker ~]# docker container run --name b3 --rm -it --network host busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:3C:BE:06:75
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:3cff:febe:675/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:71 errors:0 dropped:0 overruns:0 frame:0
          TX packets:82 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4773 (4.6 KiB)  TX bytes:7313 (7.1 KiB)

eth0      Link encap:Ethernet  HWaddr 00:15:5D:03:67:56
          inet addr:192.168.24.170  Bcast:192.168.24.175  Mask:255.255.255.240
          inet6 addr: fe80::4c95:4028:8e1:a795/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:36462 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20392 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:42167574 (40.2 MiB)  TX bytes:1953899 (1.8 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:48 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3777 (3.6 KiB)  TX bytes:3777 (3.6 KiB)
/ #

Do not exit the terminal, continue to open a httpd in the container:

/ # echo "Hello b3, network host" > /var/www/index.html
/ # httpd -h /var/www/
/ # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN
tcp        0      0 :::22                   :::*                    LISTEN
tcp        0      0 ::1:25                  :::*                    LISTEN
/ #

After starting the httpd service, check the local listening ports; the service is listening as expected.

Firewall issue

You can access this Web page directly on the host:

$ curl 172.17.0.1
Hello b3, network host
$

However, the external network is still inaccessible, which is mainly due to the firewall problem of the host.

The service worked before without any firewall problem; that is because the host releases all traffic to the containers by default. This time the host is accessed directly: although the service runs in a container, the network it uses is the host's, so the firewall policy must be opened.

Temporarily open http service on host

$ firewall-cmd --add-service=http
success
$ firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: ssh dhcpv6-client http
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
$

This is only a temporary test; the firewalld service will return to its original state after a restart.

After the host firewall policy is open, you can use the browser to access the host's port 80 to open the page.

Application scenarios of open containers

This deployment method is simple and convenient for migration. It makes full use of the advantages of containers while guaranteeing that the program can provide services through the host's network interfaces, just as it would when running directly on the host.

Running as an open container is similar to running as an ordinary process: multiple processes run on the same machine. Processes are already isolated from each other, and with containers you can additionally isolate the file system and users. Another benefit is ease of deployment and migration: system-level management processes that used to run directly on the host can be run as containers in the future.

Modify the default docker network

The content of the previous section is to use the three networks created by docker by default. The container selects one of the networks and sets the optional parameters when running the container.

The content of this section is not to use the default network, but to customize the network first. Customization can be implemented in two ways, one is to modify the default network, and the other is to create a new network completely.

Default bridge network

By default, docker uses the 172.17.0.0/16 network. The network type is bridge, the corresponding network name is bridge, and the network card on the host is docker0.

Use the ifconfig command to view information about the docker0 bridge:

$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:3cff:febe:675  prefixlen 64  scopeid 0x20<link>
        ether 02:42:3c:be:06:75  txqueuelen 0  (Ethernet)
        RX packets 71  bytes 4773 (4.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 82  bytes 7313 (7.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

The default docker0 bridge interface can be modified, and in addition a new bridge interface network can be defined.

Modify the default docker0 bridge

The docker0 bridge is created when docker daemon starts, is automatically created based on default properties, or can be customized by modifying the configuration file.

The configuration file is /etc/docker/daemon.json, which was already used when adding the registry mirror (accelerator). The main properties are as follows:

bip: the IP address and mask of the docker0 bridge; docker derives the rest of the network configuration from it automatically
fixed-cidr: limits the range of IP addresses assigned to containers
fixed-cidr-v6: as above, for IPv6 addresses
mtu: the maximum packet size that can be passed; generally 1500, and it is best not to set it and keep the default
default-gateway: default gateway
default-gateway-v6: default IPv6 gateway
dns: DNS server addresses; multiple can be specified, this is an array type

Example of json configuration content:

{
  "bip": "192.168.10.1/24",
  "fixed-cidr": "192.168.10.128/25",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "192.168.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["114.114.114.114", "223.5.5.5"]
}

The core option is bip (bridge IP), which specifies the IP address of the docker0 bridge itself. Customize it as needed; you can set only bip and keep the rest at defaults. The other options are automatically calculated from bip, and some use the host's network properties by default.
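The relationship between bip and fixed-cidr can be checked with the standard ipaddress module. This is a verification sketch of the example values, not anything Docker itself runs:

```python
# Check that the example fixed-cidr range sits inside the bip network,
# so containers are assigned addresses only from the smaller range.
import ipaddress

bip_net = ipaddress.ip_interface("192.168.101.1/24").network  # from "bip"
fixed   = ipaddress.ip_network("192.168.101.128/25")          # "fixed-cidr"

print(fixed.subnet_of(bip_net))   # the container range is inside the bridge net
print(fixed.num_addresses)        # size of the range reserved for containers
```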

Actually modify the configuration of this machine

The modified configuration file is as follows:

{
  "registry-mirrors": ["http://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
  "bip": "192.168.101.1/24",
  "fixed-cidr": "192.168.101.128/25",
  "dns": ["114.114.114.114", "223.5.5.5"]
}

After restarting the service, check the docker0 bridge of the host:

$ systemctl restart docker
$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.101.1  netmask 255.255.255.0  broadcast 192.168.101.255
        inet6 fe80::42:3cff:febe:675  prefixlen 64  scopeid 0x20<link>
        ether 02:42:3c:be:06:75  txqueuelen 0  (Ethernet)
        RX packets 81  bytes 5445 (5.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 92  bytes 8125 (7.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

At this point, the network properties of docker0 have changed.

Create a container to view the network

Create a container and view the network properties in the container:

$ docker container run --name b4 --rm -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:65:80
          inet addr:192.168.101.128  Bcast:192.168.101.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:516 (516.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ # cat /etc/resolv.conf
nameserver 114.114.114.114
nameserver 223.5.5.5
/ # exit
$

Here you can confirm that the automatically acquired IP address meets the settings, and the DNS server address is also the customized one.

Create a custom bridge

View existing networks:

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8ec74d5bd709        bridge              bridge              local
d086953087bb        host                host                local
fa0c7f1fb6ca        none                null                local
$

View network plug-ins

Expand here to see what types of networks docker supports.

Look at the Plugins section in the output of docker info:

$ docker info
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog

Among the Network plugins, in addition to bridge, host, and null, there are overlay (overlay network) and macvlan (MAC-based VLAN virtual network). These two network types are not covered here. When creating a network below, the plugin can be specified with the -d parameter, which defaults to bridge.

Create a network

Use the docker network create command to create a network:

$ docker network create -d bridge --subnet "192.168.111.0/24" mybr1
7128a28bbbf39a6ca483ecad03d5d85c8179507aff66ced73ca8de5233f16fee
[root@Docker ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8ec74d5bd709        bridge              bridge              local
d086953087bb        host                host                local
7128a28bbbf3        mybr1               bridge              local
fa0c7f1fb6ca        none                null                local
[root@Docker ~]# ifconfig
br-7128a28bbbf3: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.111.1  netmask 255.255.255.0  broadcast 192.168.111.255
        ether 02:42:1f:ff:fd:5d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.101.1  netmask 255.255.255.0  broadcast 192.168.101.255
        inet6 fe80::42:3cff:febe:675  prefixlen 64  scopeid 0x20<link>
        ether 02:42:3c:be:06:75  txqueuelen 0  (Ethernet)
        RX packets 81  bytes 5445 (5.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 92  bytes 8125 (7.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

The -d parameter can be omitted; the default is bridge. More parameters can be viewed with the --help option:

--gateway: set the gateway
--ip-range: equivalent to the fixed-cidr setting; assign IP addresses from a range
--internal: restrict external access to this network
--ipv6: enable IPv6 networking
--subnet: equivalent to the bip setting; the subnet

Specify the name of the network card

When viewed with ifconfig, the displayed network card name is automatically generated from the ID of the docker network. Use the -o parameter to specify it when creating a network:

$ docker network create --subnet "192.168.112.0/24" -o "com.docker.network.bridge.name=docker1" mybr2
b8a2639ce1baef83e54b5a0bca5ba6c7bbd2e6b607e62016c930350235bea965
$ ifconfig
br-7128a28bbbf3: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.111.1  netmask 255.255.255.0  broadcast 192.168.111.255
        ether 02:42:1f:ff:fd:5d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.101.1  netmask 255.255.255.0  broadcast 192.168.101.255
        inet6 fe80::42:3cff:febe:675  prefixlen 64  scopeid 0x20<link>
        ether 02:42:3c:be:06:75  txqueuelen 0  (Ethernet)
        RX packets 81  bytes 5445 (5.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 92  bytes 8125 (7.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.112.1  netmask 255.255.255.0  broadcast 192.168.112.255
        ether 02:42:74:95:cd:0b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

The network card name created this time is much more readable.

With regard to the-o parameter, there are the following:

com.docker.network.bridge.name: the bridge name used when creating the Linux bridge
com.docker.network.bridge.enable_ip_masquerade: enable IP masquerading (ip-masq)
com.docker.network.bridge.enable_icc: enable or disable inter-container connectivity (icc)
com.docker.network.bridge.host_binding_ipv4: the default IP to bind container ports to
com.docker.network.driver.mtu: set the container network MTU

These options may not be easy to understand without a networking background, but you can refer to the settings of the default bridge network:

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "80631c00ea3ece0280c786b90f5157be68fe76c26d52f4d9d870a7f5b59edde1",
        "Created": "2019-07-21T10:16:03.635792707+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
$

This is the information for the unmodified default bridge. Besides the Options section, the other parameters can also be used as reference.

Use a custom network

This has been used before. Previously only the 3 default networks (bridge, host, and none) were available, but now the custom networks just created can also be used. List them with docker network ls, and specify the network name (NAME) with the --network parameter:

$ docker container run --rm -it --network mybr2 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:70:02
          inet addr:192.168.112.2  Bcast:192.168.112.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1032 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ #

From the IP address of eth0 in the container you can tell it is using the custom network just created.

Remote management docker

The docker daemon uses a C/S (client/server) architecture, and by default it only listens on a local UNIX socket file located in the /var/run/ directory:

$ ls /var/run/*.sock
/var/run/docker.sock
$

It can be set to listen on the TCP port so that clients on other hosts on the network can connect to the local server.

Server configuration

The server needs to modify the configuration file to allow the service to listen on the network port. Add a hosts attribute to the configuration file / etc/docker/daemon.json:

{"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]}

The service needs to be restarted after modifying the configuration file. The local UNIX sock file is retained by default.

Client connection

To connect to the server, the client uses the -H or --host parameter to specify the server. You can view the help by executing the docker command without any parameters:

-H, --host list        Daemon socket(s) to connect to

Previously the client was used without this parameter, which means it connects to the local UNIX socket file by default. With the parameter added, you can specify which server to connect to.

Use the-H parameter, but the specified server is still a local UNIX sock file:

$ docker -H unix:///var/run/docker.sock network ls

If network listening is enabled, you can do this:

$ docker -H 127.0.0.1 version

Both the protocol and the port number can be omitted; the default is tcp and port 2375.
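The defaulting described above can be sketched with a small parser. This is a hypothetical helper, not Docker's own parsing code (Docker's real ParseHost handles more cases), but it shows the idea: a missing protocol falls back to tcp, a missing port to 2375.

```python
# Hypothetical sketch of -H / DOCKER_HOST value parsing with defaults.
def parse_docker_host(value: str):
    proto, _, rest = value.partition("://")
    if not rest:                    # no "://": bare address, assume tcp
        proto, rest = "tcp", proto
    if proto == "unix":
        return (proto, rest, None)  # socket path, no port
    host, _, port = rest.partition(":")
    return (proto, host, int(port) if port else 2375)

print(parse_docker_host("127.0.0.1"))                    # tcp and 2375 assumed
print(parse_docker_host("tcp://192.168.1.10:2376"))      # explicit everything
print(parse_docker_host("unix:///var/run/docker.sock"))  # local socket file
```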

Cannot specify more than one server

Looking at the help, this parameter is a list, which suggests -H could be passed multiple times to add multiple servers. The parameter is designed that way, but the program logic does not allow it:

$ docker -H unix:///var/run/docker.sock -H 127.0.0.1 images
Please specify only one -H
$

The corresponding handler in the source code is found here:

func getServerHost(hosts []string, tlsOptions *tlsconfig.Options) (string, error) {
	var host string
	switch len(hosts) {
	case 0:
		host = os.Getenv("DOCKER_HOST")
	case 1:
		host = hosts[0]
	default:
		return "", errors.New("Please specify only one -H")
	}
	return dopts.ParseHost(tlsOptions != nil, host)
}

The number of hosts can only be 0 or 1; otherwise an error is returned.

Set environment variable

If you do not specify it with the-H parameter, you can also specify it through the environment variable DOCKER_HOST. The advantage is that you don't have to add the-H parameter to every connection.

Here are the commands for setup and verification:

$ export DOCKER_HOST="unix:///var/run/docker.sock"
$ echo $DOCKER_HOST
unix:///var/run/docker.sock
$

The environment variable set here is temporary and does not survive a re-login. Write it to ~/.bashrc if you want it to take effect permanently.
