
Docker Network Management: Cross-host Communication


Blog outline:

I. Preface

II. Docker's native networks

III. Custom bridge networks

IV. Overlay networks

V. Macvlan networks

VI. Enabling the external network to access containers

I. Preface

Because docker technology has become so popular, more and more enterprises use docker as their virtualization technology, and the purpose of doing so is to let the containers in docker provide services. We must therefore have an in-depth understanding of docker's networking in order to meet higher network requirements.

II. Docker's native networks

When you install Docker, it automatically creates three networks. As follows:

[root@localhost ~]# docker network ls          // view docker's default networks
NETWORK ID          NAME                DRIVER              SCOPE
a38bd52b4cec        bridge              bridge              local
624b3ba70637        host                host                local
62f80646f707        none                null                local

These three networks are built into Docker, and when you run the container, you can use the "--network" option to specify which networks the container should connect to. If not specified, bridge mode is used by default.

For example:

Host mode: specified with --net=host
None mode: specified with --net=none
Bridge mode: specified with --net=bridge (the default)

Here is a detailed description of these network models:

Although docker network ls lists only three networks, Docker actually provides four network modes!

1. host mode

If you start the container in host mode, the container will not get a separate Network Namespace, but will share a Network Namespace with the host. The container will not virtualize its own network card or configure its own IP, but will use the IP and port of the host.

Use case:

Because the network configuration is exactly the same as that of the docker host, performance is good; the inconvenience is that flexibility is low and port conflicts with the host are easy to cause. It is best suited to running a single container, and in general it is not recommended.

Example of creating a container that uses host network mode:

[root@localhost ~]# docker run -it --name host --network host busybox:latest
// use the busybox image to create a container called host, with the network in host mode
/ # ip a          // inside the container, the network is exactly the same as that of the docker host
1: lo: mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:66:72:13 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global ens33
    inet6 fe80::9f3:b94e:5f5d:8070/64 scope link
3: virbr0: mtu 1500 qdisc noqueue qlen 1000
    link/ether 52:54:00:e1:82:15 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 qlen 1000
    link/ether 52:54:00:e1:82:15 brd ff:ff:ff:ff:ff:ff
5: docker0: mtu 1500 qdisc noqueue
    link/ether 02:42:3c:06:f8:1d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

2. none mode

In none mode, the Docker container has its own Network Namespace, but there is no network configuration for the Docker container. In other words, the Docker container has only one loopback address and cannot communicate with the outside world, which is called an isolated network.

Use case:

None mode is the isolated network. Isolation means security: the container cannot communicate with the outside world, and likewise cannot be reached from the outside. Containers using this mode are suited to security-sensitive back-end jobs, such as generating CAPTCHAs and similar security-related work. It is generally used in scenarios with high security requirements!

Example of creating a container that uses none network mode:

[root@localhost ~]# docker run -it --name none --network none busybox:latest
// use the busybox image to create a container called none, with the network in none mode
/ # ip a          // only the lo interface exists in the container
1: lo: mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo

3. bridge mode

Bridge mode is Docker's default network setting. In this mode each container gets its own Network Namespace, an IP address, and so on, and each container on the host is attached to a virtual bridge through a virtual NIC. When the Docker server starts, it creates a virtual bridge named docker0 on the host, and containers started on this host are connected to it. A virtual bridge works like a physical switch, so all containers on the host are connected to a layer-2 network through this switch. Next an IP must be assigned to each container: from the private ranges defined in RFC 1918, Docker selects an IP block, different from the host's, for docker0, and containers attached to docker0 pick an unused IP from that subnet. For example, Docker typically uses the segment 172.17.0.0/16 and assigns 172.17.0.1 to the docker0 bridge.

[root@localhost ~]# ifconfig docker0
docker0: flags=4099  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:3c:06:f8:1d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        TX packets 0  bytes 0 (0.0 B)
[root@localhost ~]# brctl show          // view the bridge; if no bridge-mode container has been created yet, the interfaces column is empty
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02423c06f81d       no

Example of creating a container that uses bridge network mode:

[root@localhost ~]# docker run -itd --name bridge busybox:latest /bin/sh
// create a container called bridge; if no network mode is specified, bridge mode is used by default
[root@localhost ~]# docker exec -it bridge /bin/sh          // enter the bridge container
/ # ip a
1: lo: mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
6: eth0@if9: mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
// the address of the virtual NIC eth0@if9 is in the same segment as the docker0 network on the docker host
[root@localhost ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02423c06f81d       no              veth811d20c
// a new interface now appears under the interfaces column; one is added for every bridge-mode container that is created
// this interface is the host side of the virtual NIC pair created for the container and is used for communication between the container and the docker host
[root@localhost ~]# ifconfig veth811d20c          // confirm that this virtual NIC exists
veth811d20c: flags=4163  mtu 1500
        inet6 fe80::c035:95ff:febf:978b  prefixlen 64  scopeid 0x20
        ether c2:35:95:bf:97:8b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        TX packets 8  bytes 648 (648.0 B)

When using bridge network mode, the working steps of docker are roughly as follows:

(1) Create a veth pair of virtual network devices on the host. veth devices always appear in pairs; they form a channel for data, so data that enters one device comes out of the other. veth devices are therefore often used to connect two network devices.

(2) Docker places one end of the veth pair in the newly created container and names it eth0. The other end stays on the host, with a name like veth811d20c, and is added to the docker0 bridge, which can be viewed with the brctl show command (as verified in the example above).

(3) Assign the container an IP from the docker0 subnet, and set docker0's IP address as the container's default gateway. A quick way to verify this wiring is sketched below.
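A quick sanity check from the host (a minimal sketch; the interface name veth811d20c and the container name bridge are just the examples from above and will differ on your system):

[root@localhost ~]# brctl show docker0                                        // the host end of the veth pair (veth811d20c above) should be attached to docker0
[root@localhost ~]# docker exec -it bridge ip route                           // inside the container, the default route should point at docker0 (172.17.0.1)
[root@localhost ~]# docker inspect -f '{{.NetworkSettings.Gateway}}' bridge   // should print 172.17.0.1, i.e. the address of docker0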

4. container mode (shared network protocol stack)

This mode makes the newly created container share a Network Namespace with an existing container, rather than with the host. The new container does not create its own NIC or configure its own IP; instead it shares the IP, port range, and so on of the specified container. Apart from the network, the two containers remain isolated from each other, for example in file systems and process lists. The processes of the two containers can communicate through the lo device.

Example of creating a container that uses container network mode:

[root@localhost ~]# docker run -it --name container --network container:a172b832b531 busybox:latest
// a172b832b531 is the ID of the bridge container
/ # ip a
1: lo: mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
8: eth0@if9: mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
// the IP address of the container container is exactly the same as that of the bridge container

Of these four network modes, the default bridge mode is recommended!

III. Custom bridge networks

If you look carefully, you will notice that the containers created so far get IP addresses in 172.17.0.0/16 by default. So can we customize a network segment for containers to use? The answer is definitely yes, as follows:

[root@localhost ~]# docker network create -d bridge my_net
// create a bridge network named my_net; if no segment is specified, the default increments from docker0's segment, giving 172.18.0.0/16
[root@localhost ~]# docker network ls          // view the network types supported by docker
NETWORK ID          NAME                DRIVER              SCOPE
9fba9dc3d2b6        bridge              bridge              local
624b3ba70637        host                host                local
74544573aa67        my_net              bridge              local
62f80646f707        none                null                local
// the my_net network just created appears in the list
[root@localhost ~]# docker run -itd --name test1 --network my_net busybox:latest /bin/sh
[root@localhost ~]# docker run -itd --name test2 --network my_net busybox:latest /bin/sh
// create two containers
[root@localhost ~]# docker exec -it test1 /bin/sh          // enter test1
/ # ip a
1: lo: mtu 65536 qdisc noqueue qlen 1000
    inet 127.0.0.1/8 scope host lo
11: eth0@if12: mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
// its IP address is indeed in 172.18.0.0/16
/ # ping test2          // ping the test2 container by name; the test passes
PING test2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.079 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.175 ms
[root@localhost ~]# ifconfig br-74544573aa67          // this virtual NIC was generated when the my_net network was created
br-74544573aa67: flags=4163  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 fe80::42:50ff:fec2:7657  prefixlen 64  scopeid 0x20
        ether 02:42:50:c2:76:57  txqueuelen 0  (Ethernet)
// the segment of its IP address shows that it belongs to my_net

Benefits of a custom network:

Containers can communicate with each other by container name, and a custom DNS server can be configured for the containers (name resolution can be checked as sketched below).
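A minimal check of the name resolution, assuming the test1 and test2 containers created above (busybox ships both nslookup and ping):

[root@localhost ~]# docker exec -it test1 nslookup test2          // resolved by the embedded DNS of the user-defined network, which listens on 127.0.0.11 inside the container
[root@localhost ~]# docker exec -it test1 ping -c 2 test2         // name-based connectivity check without entering the container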

The above method creates a network in bridging mode by default, and you can find that the network segment address is not customized by us.

Next, we create a network with an explicitly specified segment. The method is as follows:

[root@localhost ~]# docker network create -d bridge --subnet 200.0.0.0/24 --gateway 200.0.0.1 my_net2
// when customizing the addressing of a network, the subnet and the gateway must be specified explicitly
[root@localhost ~]# ifconfig br-0ca6770b4a10          // this virtual NIC was generated when the my_net2 network was created
br-0ca6770b4a10: flags=4099  mtu 1500
        inet 200.0.0.1  netmask 255.255.255.0  broadcast 200.0.0.255
        ether 02:42:05:ba:8b:fc  txqueuelen 0  (Ethernet)
// its IP address is the one we specified
[root@localhost ~]# docker run -itd --name test3 --network my_net2 --ip 200.0.0.100 busybox:latest
[root@localhost ~]# docker run -itd --name test4 --network my_net2 --ip 200.0.0.200 busybox:latest
// create two containers on the network just created and give them fixed IP addresses
[root@localhost ~]# docker exec -it test3 /bin/sh          // enter the test3 container
/ # ip a
1: lo: mtu 65536 qdisc noqueue qlen 1000
    inet 127.0.0.1/8 scope host lo
16: eth0@if17: mtu 1500 qdisc noqueue
    link/ether 02:42:c8:00:00:64 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.100/24 brd 200.0.0.255 scope global eth0
// its IP address is indeed the one we specified
/ # ping test4          // the test passes
PING test4 (200.0.0.200): 56 data bytes
64 bytes from 200.0.0.200: seq=0 ttl=64 time=0.156 ms
64 bytes from 200.0.0.200: seq=1 ttl=64 time=0.178 ms
/ # ping test1
ping: bad address 'test1'
// it cannot communicate with the network that was created first

Containers created on the same network can communicate with each other, but they cannot communicate with containers on other networks. This is mainly because of the iptables rules that are added automatically when a docker network is created (they can be inspected as sketched below).
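To look at those rules without touching them, something like the following can be used (a sketch only; depending on the Docker version the chain is named DOCKER-ISOLATION or DOCKER-ISOLATION-STAGE-1/-2):

[root@localhost ~]# iptables -t filter -nvL | grep -i -A 6 "DOCKER-ISOLATION"
// the DROP rules between the br-xxxx bridge interfaces are what prevent my_net containers from reaching my_net2 containers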

For example, clearing the iptables rules would achieve the desired result, but that command is about as safe as "rm -rf /*" and obviously cannot be used in the real world!

So the following method is needed instead:

[root@localhost ~]# docker network connect my_net2 test1
// this command adds a virtual NIC (allocated by my_net2) to the test1 container
[root@localhost ~]# docker exec -it test1 /bin/sh
/ # ip a
1: lo: mtu 65536 qdisc noqueue qlen 1000
    inet 127.0.0.1/8 scope host lo
11: eth0@if12: mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
20: eth2@if21: mtu 1500 qdisc noqueue
    link/ether 02:42:c8:00:00:02 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.2/24 brd 200.0.0.255 scope global eth2
// there is indeed one more virtual NIC, and its segment belongs to my_net2
/ # ping test3
PING test3 (200.0.0.100): 56 data bytes
64 bytes from 200.0.0.100: seq=0 ttl=64 time=0.171 ms
64 bytes from 200.0.0.100: seq=1 ttl=64 time=0.237 ms
^C
--- test3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.171/0.204/0.237 ms
/ # ping test4
PING test4 (200.0.0.200): 56 data bytes
64 bytes from 200.0.0.200: seq=0 ttl=64 time=0.097 ms
^C
--- test4 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.097/0.097 ms
// test1 now communicates normally with both test3 and test4

Note: at this point the test2 container still cannot communicate with test3 or test4! If it needs to, a my_net2 virtual NIC must also be added to test2 (with the same command as in the example above)!

Note:

Containers can communicate with each other by container name, but only when a custom network is used, such as the my_net and my_net2 created in this case. If a subnet was specified when the custom network was created, a fixed IP address can also be assigned to a container; if no subnet was specified, a fixed container IP cannot be assigned.

IV. Overlay networks

To use an overlay network, the consul service must be deployed in advance!

Consul is a service mesh solution (handling TCP/IP between microservices: service-to-service network calls, rate limiting, circuit breaking and monitoring). It is a distributed, highly available system that is easy to develop and use. It provides a full-featured control plane whose main features are service discovery, health checking, key/value storage, secure service communication and multi-datacenter support.

Use a small case to verify the characteristics of consul services!

1. Case environment

2. Preparatory work

(1) turn off the firewall and SELinux (lab environment)

(2) change the host name to avoid conflicts

3. Case implementation

(1) Docker1

[root@Docker1 ~]# docker pull progrium/consul          // download the consul image
[root@Docker1 ~]# docker run -d -p 8500:8500 -h consul --name consul --restart=always progrium/consul -server -bootstrap
// -d: run in the background;
// -p: map port 8500 in the container to port 8500 on the host;
// -h: the hostname of the consul container;
// --name: the name of the running container;
// --restart=always: start together with the docker service;
// -server -bootstrap: add these two options so that it comes up as the master in the cluster environment;
[root@Docker1 ~]# netstat -anpt | grep 8500
tcp6       0      0 :::8500          :::*            LISTEN      2442/docker-proxy
// make sure port 8500 is listening
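As an extra check (a sketch; 192.168.1.1 is the Docker1 address assumed throughout this case), the consul HTTP API can be queried directly:

[root@Docker1 ~]# curl -s http://192.168.1.1:8500/v1/status/leader          // should print the address of the consul leader
[root@Docker1 ~]# curl -s http://192.168.1.1:8500/v1/catalog/nodes          // lists the nodes registered in consul as JSON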

(2) Docker2

[root@Docker2 ~]# vim /usr/lib/systemd/system/docker.service          // edit Docker's main configuration file
13 ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.1:8500 --cluster-advertise=ens33:2376
// add the options above to line 13; the meaning of each item is as follows:
// "-H unix:///var/run/docker.sock": Docker's local socket;
// "-H tcp://0.0.0.0:2376": listen on the local tcp port 2376;
// "--cluster-store=consul://192.168.1.1:8500": the IP and port of the docker server running the consul service;
// "--cluster-advertise=ens33:2376": collect network information from the local ens33 NIC through port 2376 and store it in consul;
[root@Docker2 ~]# systemctl daemon-reload
[root@Docker2 ~]# systemctl restart docker          // restart the docker service
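The same two options can also be placed in /etc/docker/daemon.json instead of editing the unit file (a sketch under the same assumptions; note that cluster-store and cluster-advertise only exist in older Docker releases that still support consul-backed overlay networks):

[root@Docker2 ~]# cat /etc/docker/daemon.json
{
  "cluster-store": "consul://192.168.1.1:8500",
  "cluster-advertise": "ens33:2376"
}
[root@Docker2 ~]# systemctl restart docker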

(3) Docker3

The operation of Docker3 is exactly the same as that of Docker2, so there are no more explanations here!

[root@Docker3 ~]# vim /usr/lib/systemd/system/docker.service
13 ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.1:8500 --cluster-advertise=ens33:2376
[root@Docker3 ~]# systemctl daemon-reload
[root@Docker3 ~]# systemctl restart docker

(4) use a browser to access the web page of the consul service

As shown in the figure:

(5) add the Docker1 server to the consul cluster

[root@Docker1 ~]# vim /usr/lib/systemd/system/docker.service
13 ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.1:8500 --cluster-advertise=ens33:2376
[root@Docker1 ~]# systemctl daemon-reload
[root@Docker1 ~]# systemctl restart docker
// the options were explained above, so they are not repeated here

Visit consul's web page again, as shown in the figure:

If a "500" error page appears when visiting the web page during this process, delete the container running the consul service and recreate it!
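A sketch of that recovery step, reusing the run command from step (1):

[root@Docker1 ~]# docker rm -f consul          // remove the faulty consul container
[root@Docker1 ~]# docker run -d -p 8500:8500 -h consul --name consul --restart=always progrium/consul -server -bootstrap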

(6) create an overlay network

[root@Docker1 ~]# docker network create -d overlay my_olay          // create an overlay network called my_olay
// it does not matter which docker host this is run on
[root@Docker1 ~]# docker network create -d overlay --subnet 200.0.0.0/24 --gateway 200.0.0.1 lv_olay
// the IP segment and gateway can also be specified when creating an overlay network
[root@Docker1 ~]# docker network ls          // view the networks supported by docker

The same networks can also be seen on the other two docker servers; verify this for yourself!

A network created on docker 1 has its SCOPE defined as global, which means that the other docker servers joined to the consul cluster can also see this network!

If no network segment is specified when the network is created, the default is in the 10.0.0.0 range. Because it is a custom network, it has the characteristics of custom networks (such as supporting communication between containers by name)!
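To confirm the scope and addressing, docker network inspect can be run on any of the three hosts (a sketch; output trimmed to the relevant fields):

[root@Docker2 ~]# docker network inspect lv_olay          // the network is visible here even though it was created on Docker1
[
    {
        "Name": "lv_olay",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Config": [ { "Subnet": "200.0.0.0/24", "Gateway": "200.0.0.1" } ]
        }
    }
]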

(7) create a container on different docker servers and verify that you can communicate!

[root@Docker1 ~]# docker run -itd --name t1 --network lv_olay --ip 200.0.0.10 busybox:latest
// create a container named t1 on the docker1 server and specify its IP address
[root@Docker2 ~]# docker run -itd --name t2 --network lv_olay --ip 200.0.0.20 busybox:latest
// create a container on docker2 and specify its IP address
[root@Docker3 ~]# docker run -itd --name t3 --network lv_olay --ip 200.0.0.30 busybox:latest
// create a container on docker3 and specify its IP address
[root@Docker1 ~]# docker exec -it t1 /bin/sh
// enter the locally created container on any of the docker servers and run the test

As shown in the figure:
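Since the screenshot is not reproduced here, a sketch of what the verification looks like from inside t1 (assuming the containers and addresses above):

/ # ping -c 2 200.0.0.20          // from t1 on Docker1 to t2 on Docker2; should succeed
/ # ping -c 2 t3                  // custom networks also resolve container names; should succeed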

V. Macvlan networks

Macvlan is a relatively new feature of the Linux kernel. You can check whether the current system supports it as follows:

[root@localhost ~]# modprobe macvlan
[root@localhost ~]# lsmod | grep macvlan
macvlan                19239  0

If the first command reports an error, or if the second command does not return a message, the current system does not support macvlan and the kernel needs to be upgraded.

[root@docker01 ~]# modprobe 8021q          // load the kernel module
[root@docker01 ~]# modinfo 8021q           // if information is returned, the 8021q module is available (an alternative if the previous commands are not available)

The two pairs of commands achieve the same effect!

The commands above mainly verify whether the Linux kernel supports macvlan!

Macvlan allows you to configure multiple virtual network interfaces on one of the network interfaces of the host, which have their own independent mac addresses, or you can configure IP addresses to communicate. The virtual machine or container network under macvlan shares a broadcast domain with the host in the same network segment. Macvlan is similar to bridge, but because it eliminates the existence of bridge, it is relatively simple and efficient to configure and debug. In addition, macvlan itself perfectly supports VLAN.

If you want the container or virtual machine to be on the same network as the host and enjoy the advantages of the existing network stack, consider macvlan.

Macvlan differs from the overlay network: the scope of overlay is global, while the scope of macvlan is local. A global network acts on a whole cluster of docker daemons, while a local network acts only on a single host.

The macvlan network created by each host is independent, and the macvlan network created by machine A does not affect the network on machine B.

Cross-host communication can still be achieved once three conditions are met: the NICs of the two hosts are in promiscuous mode; the macvlan networks on the two hosts use overlapping (identical) subnets; and the two macvlan networks do not assign the same IP to any two containers.

1. Environmental preparation

As shown in the figure:

2. Preparatory work

(1) disable Linux Firewall and SELinux

(2) modify the host name

3. Case implementation

Single-network macvlan communication is not covered here; we go straight to multi-network macvlan communication! The method is as follows:

(1) enable promiscuous mode on the NIC

[root@dockerA ~]# ip link set ens33 promisc on          // enable promiscuous mode on the NIC
[root@dockerA ~]# ip link show ens33                    // confirm that the NIC now reports PROMISC
2: ens33: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:66:72:13 brd ff:ff:ff:ff:ff:ff
[root@dockerA ~]# modprobe 8021q          // load the 8021q kernel module
[root@dockerA ~]# modinfo 8021q           // if information is returned, the 8021q kernel module is supported

(2) create a virtual network card

Since one physical NIC should ideally carry only one macvlan network, virtual sub-interface NICs are created to meet the requirement for multiple macvlan networks!

[root@dockerA ~]# cd /etc/sysconfig/network-scripts/
[root@dockerA network-scripts]# sed -i 's/static/manual/g' ifcfg-ens33          // switch the BOOTPROTO of the physical NIC from static to manual
[root@dockerA network-scripts]# cp -p ifcfg-ens33 ifcfg-ens33.10
[root@dockerA network-scripts]# cp -p ifcfg-ens33 ifcfg-ens33.20
[root@dockerA network-scripts]# vim ifcfg-ens33.10
BOOTPROTO=none
NAME=ens33.10
DEVICE=ens33.10
ONBOOT=yes
IPADDR=192.168.10.1
NETMASK=255.255.255.0
GATEWAY=192.168.10.254
VLAN=yes
// make sure it is not in the same segment as the original physical NIC, and enable VLAN support
[root@dockerA network-scripts]# vim ifcfg-ens33.20
BOOTPROTO=none
NAME=ens33.20
DEVICE=ens33.20
ONBOOT=yes
IPADDR=192.168.20.1
NETMASK=255.255.255.0
GATEWAY=192.168.20.254
VLAN=yes
[root@dockerA network-scripts]# ifup ifcfg-ens33.10
[root@dockerA network-scripts]# ifup ifcfg-ens33.20
[root@dockerA network-scripts]# ifconfig ens33.10
ens33.10: flags=4163  mtu 1500
        inet 192.168.10.1  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::20c:29ff:fe66:7213  prefixlen 64  scopeid 0x20
        ether 00:0c:29:66:72:13  txqueuelen 1000  (Ethernet)
[root@dockerA network-scripts]# ifconfig ens33.20
ens33.20: flags=4163  mtu 1500
        inet 192.168.20.1  netmask 255.255.255.0  broadcast 192.168.20.255
        ether 00:0c:29:66:72:13  txqueuelen 1000  (Ethernet)
// make sure the virtual NICs have taken effect
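For a quick, non-persistent alternative, the same VLAN sub-interfaces can be created with iproute2 (a sketch; these settings do not survive a reboot, and the VLAN IDs 10 and 20 are simply taken from the sub-interface names above):

[root@dockerA ~]# ip link add link ens33 name ens33.10 type vlan id 10
[root@dockerA ~]# ip link add link ens33 name ens33.20 type vlan id 20
[root@dockerA ~]# ip addr add 192.168.10.1/24 dev ens33.10
[root@dockerA ~]# ip addr add 192.168.20.1/24 dev ens33.20
[root@dockerA ~]# ip link set ens33.10 up
[root@dockerA ~]# ip link set ens33.20 up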

The operation on the dockerB host is the same as that on the dockerA host (note that IP is not the same)! To ensure that the virtual network cards of the two docker hosts can communicate!

[root@dockerB network-scripts]# ifconfig ens33.10
ens33.10: flags=4163  mtu 1500
        inet 192.168.10.2  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::20c:29ff:feb7:1bbd  prefixlen 64  scopeid 0x20
        ether 00:0c:29:b7:1b:bd  txqueuelen 1000  (Ethernet)
[root@dockerB network-scripts]# ifconfig ens33.20
ens33.20: flags=4163  mtu 1500
        inet 192.168.20.2  netmask 255.255.255.0  broadcast 192.168.20.255
        ether 00:0c:29:b7:1b:bd  txqueuelen 1000  (Ethernet)

(3) create the macvlan networks

[root@dockerA ~]# docker network create -d macvlan --subnet 172.16.10.0/24 --gateway 172.16.10.1 -o parent=ens33.10 mac_net10
[root@dockerA ~]# docker network create -d macvlan --subnet 172.16.20.0/24 --gateway 172.16.20.1 -o parent=ens33.20 mac_net20
// create the macvlan networks and specify their segments and gateways
// -d: the network driver type;
// -o parent: the NIC to bind to
[root@dockerA ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
0a74599fef51        bridge              bridge              local
624b3ba70637        host                host                local
cb81dde7685d        mac_net10           macvlan             local
983927dbcae8        mac_net20           macvlan             local
62f80646f707        none                null                local

When creating the macvlan networks on the dockerB host, the commands are exactly the same, and the specified subnets and names must match, to ensure the hosts can communicate through the virtual NICs!

[root@dockerB ~]# docker network create -d macvlan --subnet 172.16.10.0/24 --gateway 172.16.10.1 -o parent=ens33.10 mac_net10
[root@dockerB ~]# docker network create -d macvlan --subnet 172.16.20.0/24 --gateway 172.16.20.1 -o parent=ens33.20 mac_net20

(4) create containers based on the macvlan networks just created and verify that they can communicate!

Create containers for dockerA hosts:

[root@dockerA ~]# docker run -itd --name box10 --network mac_net10 --ip 172.16.10.10 busybox
[root@dockerA ~]# docker run -itd --name box20 --network mac_net20 --ip 172.16.20.10 busybox

Create containers for dockerB hosts:

[root@dockerB ~]# docker run -itd --name box11 --network mac_net10 --ip 172.16.10.20 busybox
[root@dockerB ~]# docker run -itd --name box21 --network mac_net20 --ip 172.16.20.20 busybox

Enter the container for verification:

[root@dockerA ~]# docker exec -it box10 /bin/sh
/ # ping 172.16.10.20
PING 172.16.10.20 (172.16.10.20): 56 data bytes
64 bytes from 172.16.10.20: seq=0 ttl=64 time=0.653 ms
64 bytes from 172.16.10.20: seq=1 ttl=64 time=0.966 ms
[root@dockerA ~]# docker exec -it box20 /bin/sh
/ # ping 172.16.20.20
PING 172.16.20.20 (172.16.20.20): 56 data bytes
64 bytes from 172.16.20.20: seq=0 ttl=64 time=0.734 ms
64 bytes from 172.16.20.20: seq=1 ttl=64 time=0.718 ms

Note: when verifying in this lab environment, the virtual machines' NICs should be in bridged mode, and during testing you can only ping the IP address of a container on the other docker host (not its name), because the scope of the macvlan networks created here is local!

VI. Enabling the external network to access containers

(1) manually specify the mapped port

[root@localhost ~]# docker run -itd --name web1 -p 90:80 nginx
// map port 80 of the container's nginx service to port 90 of the host

(2) randomly mapped ports

[root@localhost ~]# docker run -itd --name web2 -p 80 nginx
// if only one port follows -p, it is the container port; the host maps a random port to it, starting from 32768

(3) Map all the ports in the container to the host machine

[root@localhost ~]# docker run -itd --name web4 -P nginx
// note: uppercase -P randomly maps host ports to the container; every port exposed by the container is mapped one by one
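The resulting mappings can be checked with docker port (a sketch; the 32769 shown for web4 is just an example of a randomly allocated host port):

[root@localhost ~]# docker port web1          // the fixed mapping created with -p 90:80
80/tcp -> 0.0.0.0:90
[root@localhost ~]# docker port web4          // the random mapping created with -P
80/tcp -> 0.0.0.0:32769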

All of the operations above have been verified personally and work correctly; no screenshots are shown here!

This is the end of this article. Thank you for reading!
