This article mainly explains how to implement a container virtualization network in Docker. The content is simple and clear, and I hope it helps clear up your doubts as we study it together.
Overlay network (overlay network)

Docker network: bridge

After Docker is installed, it automatically creates three networks:
[root@master chenzx]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
74997b46b6c7        bridge              bridge              local
ae048711b7aa        host                host                local
77190e2a8be4        none                null                local
Description:
Bridge: a bridged network, but not a physical bridge. Docker creates a pure software switch named docker0 on the host (visible with ifconfig). docker0 can also act as a network card: if you do not give docker0 an address, it is only a switch; if you give it an IP address, it is both a switch and a network card. Every container created on the host automatically gets a pair of network cards (a veth pair): one end sits inside the container, the other end is plugged into the virtual switch of the docker0 bridge. You can also see interfaces like vetha1a84fa with the ifconfig command; each one belongs to a running container (check with docker ps). Half of the pair is inside the container, the other half stays on the host and is attached to the docker0 bridge, which you can confirm with the brctl command.
[root@master chenzx]# yum -y install bridge-utils
[root@master chenzx]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.024221ea33da       no              vetha1a84fa
[root@master chenzx]# ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens192: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:50:56:a2:56:4a brd ff:ff:ff:ff:ff:ff
3: docker0: mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:21:ea:33:da brd ff:ff:ff:ff:ff:ff
5: vetha1a84fa@if4: mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether 2a:cc:7c:a9:75:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
The docker0 bridge is a NAT bridge by default. Each container that is created automatically gets corresponding iptables rules:
[root@master chenzx]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 32550 packets, 2318K bytes)
 pkts bytes target      prot opt in       out      source               destination
 5324       DOCKER      all  --  *        *        0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 2486 packets, 502K bytes)
 pkts bytes target      prot opt in       out      source               destination

Chain OUTPUT (policy ACCEPT 44775 packets, 2700K bytes)
 pkts bytes target      prot opt in       out      source               destination
    0     0 DOCKER      all  --  *        *        0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 44775 packets, 2700K bytes)
 pkts bytes target      prot opt in       out      source               destination
    0     0 MASQUERADE  all  --  *       !docker0  172.17.0.0/16        0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *        *        172.17.0.2           172.17.0.2           tcp dpt:443
    0     0 MASQUERADE  tcp  --  *        *        172.17.0.2           172.17.0.2           tcp dpt:80

Chain DOCKER (2 references)
 pkts bytes target      prot opt in       out      source               destination
    0     0 RETURN      all  --  docker0  *        0.0.0.0/0            0.0.0.0/0
    0     0 DNAT        tcp  -- !docker0  *        0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:172.17.0.2:443
    0     0 DNAT        tcp  -- !docker0  *        0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80
Look at the POSTROUTING chain: for packets coming in from any interface (in is *), as long as they do not leave through docker0 (! docker0), have a source address in 172.17.0.0/16, and are headed to any destination (0.0.0.0/0), address masquerading (MASQUERADE) is applied, that is, automatic SNAT: the kernel automatically picks an address of the physical machine as the new source address. So the default docker0 bridge is a NAT bridge.
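To see this on your own host, a minimal check looks roughly like the following (it assumes at least one container is running on the default bridge; the --format template is just one convenient way to pull out the subnet):

iptables -t nat -nvL POSTROUTING                                              # shows the MASQUERADE rule for 172.17.0.0/16
docker network inspect --format '{{(index .IPAM.Config 0).Subnet}}' bridge    # the subnet that rule covers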
[root@master chenzx]# docker inspect <container name>    # shows the container details
Disadvantage of the bridge model:

If a container on physical machine 1 needs to be accessed from physical machine 2, it can only be reached through physical machine 1's host IP plus the port mapped to the container. A physical machine has only one port 80, so when multiple containers all want to expose port 80 this becomes awkward. An overlay network solves this problem.
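As a rough illustration of this limitation, the usual single-host workaround is to give each container a different host port (the names and ports below are only examples):

docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
# physical machine 2 then has to use hostIP:8081 and hostIP:8082;
# both containers cannot be reached on port 80 of the same host.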
[root@master chenzx]# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "74997b46b6c7f3a130942bce4e26a9f1b691eb96b497aa7b5bec3d68405eeb70",
        "Created": "2019-06-25T05:32:31.482091683-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "1877cad503409040e026e1e7194751f0f23a627d9aa572aebfdc54ab679ec102": {
                "Name": "xenodochial_galois",
                "EndpointID": "4336bb5aef3245eab6d79a5f67d51c8bd684b6e03ec34a60445cd5ab0ed65b4a",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

host

[root@master chenzx]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
74997b46b6c7        bridge              bridge              local
ae048711b7aa        host                host                local
77190e2a8be4        none                null                local
Host means to let the container use the network namespace of the host.
A container (like a virtual machine or a physical host) has the following six namespaces: UTS (hostname), User, Mount (filesystem), IPC, PID, and Network.

But we can also let a container share the network namespace of the host; this is the host model.

Use the ip netns command to manage network namespaces on the host.

When managing network namespaces with ip netns, only the network namespace is isolated; the other namespaces (User, IPC, Mount filesystem, UTS hostname, and so on) remain shared.
[root@master chenzx]# ip netns add r1
[root@master chenzx]# ip netns add r2
[root@master chenzx]# ip netns list
r2
r1
[root@master chenzx]# ip netns exec r1 ifconfig -a
lo: flags=8  mtu 65536
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

You can see that the only network device in this new namespace is lo.
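Note that lo in a freshly created namespace is still down. As a small optional check, not part of the original steps, you can bring it up and ping it from inside the namespace:

ip netns exec r1 ip link set lo up
ip netns exec r1 ping -c 1 127.0.0.1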
We can also use ip link to create a pair of network cards:
[root@master chenzx]# ip link add name veth2.1 type veth peer name veth2.2
[root@master chenzx]# ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens192: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:50:56:a2:56:4a brd ff:ff:ff:ff:ff:ff
3: docker0: mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:21:ea:33:da brd ff:ff:ff:ff:ff:ff
5: vetha1a84fa@if4: mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether 2a:cc:7c:a9:75:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
6: veth2.2@veth2.1: mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 36:a6:f8:b4:d0:c6 brd ff:ff:ff:ff:ff:ff
7: veth2.1@veth2.2: mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether de:b7:a4:16:2b:c1 brd ff:ff:ff:ff:ff:ff

veth2.1@veth2.2 means that the other half of veth2.1 is veth2.2; at this point both ends are still on the host.
Let's move the network device to another namespace.
[root@master chenzx]# ip link set dev veth2.2 netns r1
This moves the network device veth2.2 into the r1 network namespace. Note that a device can only belong to one namespace at a time.

[root@master chenzx]# ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens192: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:50:56:a2:56:4a brd ff:ff:ff:ff:ff:ff
3: docker0: mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:21:ea:33:da brd ff:ff:ff:ff:ff:ff
5: vetha1a84fa@if4: mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
    link/ether 2a:cc:7c:a9:75:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: veth2.1@if6: mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether de:b7:a4:16:2b:c1 brd ff:ff:ff:ff:ff:ff link-netnsid 1
It can be seen that the network card device veth2.2 on the host is gone.
[root@master chenzx]# ip netns exec r1 ifconfig -a
lo: flags=8  mtu 65536
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth2.2: flags=4098  mtu 1500
        ether 36:a6:f8:b4:d0:c6  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Above, you can see that veth2.2 is now a network device inside the r1 namespace.

Let's rename veth2.2 in the r1 namespace to eth0:

[root@master chenzx]# ip netns exec r1 ip link set dev veth2.2 name eth0
[root@master chenzx]# ip netns exec r1 ifconfig -a
eth0: flags=4098  mtu 1500
        ether 36:a6:f8:b4:d0:c6  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=8  mtu 65536
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Now activate the veth2.1 NIC on the host and give it an address:

[root@master chenzx]# ifconfig veth2.1 10.1.0.1/24 up
[root@master chenzx]# ifconfig veth2.1
veth2.1: flags=4099  mtu 1500
        inet 10.1.0.1  netmask 255.255.255.0  broadcast 10.1.0.255
        ether de:b7:a4:16:2b:c1  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Next, activate the other half of the pair, veth2.2 (now renamed eth0 and living in the r1 network namespace):

[root@master chenzx]# ip netns exec r1 ifconfig eth0 10.1.0.2/24 up
[root@master chenzx]# ip netns exec r1 ifconfig
eth0: flags=4163  mtu 1500
        inet 10.1.0.2  netmask 255.255.255.0  broadcast 10.1.0.255
        inet6 fe80::34a6:f8ff:feb4:d0c6  prefixlen 64  scopeid 0x20
        ether 36:a6:f8:b4:d0:c6  txqueuelen 1000  (Ethernet)
        RX packets 17  bytes 1026 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
From the host, ping the eth0 address in the r1 network namespace; the two ends can communicate:

[root@master chenzx]# ping 10.1.0.2
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
64 bytes from 10.1.0.2: icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from 10.1.0.2: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 10.1.0.2: icmp_seq=3 ttl=64 time=0.056 ms
Next, move the veth2.1 network card from the host into the r2 network namespace and give it an address there:

[root@master chenzx]# ip link set dev veth2.1 netns r2
[root@master chenzx]# ifconfig        # veth2.1 is no longer present on the host
[root@master chenzx]# ip netns exec r2 ifconfig veth2.1 10.1.0.3/24 up
[root@master chenzx]# ip netns exec r2 ifconfig
veth2.1: flags=4163  mtu 1500
        inet 10.1.0.3  netmask 255.255.255.0  broadcast 10.1.0.255
        inet6 fe80::dcb7:a4ff:fe16:2bc1  prefixlen 64  scopeid 0x20
        ether de:b7:a4:16:2b:c1  txqueuelen 1000  (Ethernet)
        RX packets 13  bytes 1026 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 29  bytes 1982 (1.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Next, from the r2 network namespace, ping the address of the NIC in r1; the two namespaces can reach each other:

[root@master chenzx]# ip netns exec r2 ping 10.1.0.2
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
64 bytes from 10.1.0.2: icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from 10.1.0.2: icmp_seq=2 ttl=64 time=0.036 ms
64 bytes from 10.1.0.2: icmp_seq=3 ttl=64 time=0.028 ms
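When you are done experimenting, the two demo namespaces can be removed; deleting a namespace also destroys the veth ends that were moved into it (an optional cleanup step, not part of the original walkthrough):

ip netns del r1
ip netns del r2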
Four network models of containers

Closed container model

Run a closed container that does not communicate with the outside world:

[root@master chenzx]# docker run --name t1 -it --network none --rm busybox:latest
/ # ifconfig -a
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ # exit

As you can see, the container created above has only lo and no other network card. This is the closed network model.
The container created by default is the bridge network model
[root@master chenzx]# docker run --name t1 -it --rm busybox:latest
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
8e674ad76dce: Pull complete
Digest: sha256:c94cf1b87ccb80f2e6414ef913c748b105060debda482058d2b8d0fce39f11b9
Status: Downloaded newer image for busybox:latest
WARNING: IPv4 forwarding is disabled. Networking will not work.
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Note: --rm means the container is automatically deleted when it exits.

You can see that the container created by default gets IP 172.17.0.3, i.e. the bridge model: it sits on the same network segment as the docker0 switch on the host.
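One quick way to confirm this from the host, assuming the t1 container is still running, is to ask docker inspect for its address and gateway with a Go template:

docker inspect -f '{{.NetworkSettings.IPAddress}} {{.NetworkSettings.Gateway}}' t1
# expected output along the lines of: 172.17.0.3 172.17.0.1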
When creating a container, specify the hostname directly:
[root@master chenzx]# docker run --name t1 -it --network bridge -h t1 --rm busybox:latest
WARNING: IPv4 forwarding is disabled. Networking will not work.
/ # hostname
t1
/ # cat /etc/resolv.conf        # the container inherits the host's DNS
nameserver 172.16.1.20

Note: -h specifies the hostname.
Let's specify DNS when we create the container:
[root@master chenzx]# docker run --name t1 -it --network bridge -h t1 --dns 114.114.114.114 --rm busybox:latest
WARNING: IPv4 forwarding is disabled. Networking will not work.
/ # cat /etc/resolv.conf
nameserver 114.114.114.114
Below, we also inject a host entry (a domain name and its IP) when creating the container:

[root@master chenzx]# docker run --name t1 -it --network bridge -h t1 --dns 114.114.114.114 --dns-search czxin.com --add-host www.baidu.com:1.1.1.1 --rm busybox:latest
WARNING: IPv4 forwarding is disabled. Networking will not work.
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
1.1.1.1 www.baidu.com
172.17.0.3      t1

Open container model
Use the -p option to map a port in the container to a port on the host.

[root@master chenzx]# docker run --name myweb --rm -p 0.0.0.0:8080:80 nginx

Note: 0.0.0.0 represents all addresses on the host and is the default if it is not written. Host port 8080 corresponds to port 80 inside the container.

[root@master chenzx]# docker port myweb
80/tcp -> 0.0.0.0:8080
[root@master chenzx]# docker kill myweb
myweb
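For reference, -p accepts several forms; the sketch below lists the common variants (each line is an alternative way to start the same illustrative container, not meant to be run together):

docker run --name myweb --rm -p 80 nginx                     # container port 80 -> a random host port
docker run --name myweb --rm -p 8080:80 nginx                # host port 8080 -> container port 80
docker run --name myweb --rm -p 172.16.22.100:8080:80 nginx  # bind only to one host address
docker run --name myweb --rm -P nginx                        # publish every EXPOSEd port to a random host port

Use docker port myweb afterwards to see which host ports were actually chosen.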
Joined container model

Letting two containers share the same network namespace is called a joined (federated) container.
[root@master chenzx]# docker run --name b1 -it --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Open another window:
[root@master chenzx]# docker run --name b2 --network container:b1 -it --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Note: --network container:b1 means the b2 container shares the network namespace of b1.

Thus, if a web service is started in b2, the page can be accessed from b1 at http://127.0.0.1, as sketched below.
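A rough sketch of that experiment, using the httpd applet that ships in busybox (the directory and test page are arbitrary):

# inside b2: serve a small test page on port 80
/ # mkdir -p /data/web
/ # echo "hello from b2" > /data/web/index.html
/ # httpd -h /data/web

# inside b1: same network namespace, so loopback reaches b2's server
/ # wget -q -O - http://127.0.0.1
# prints: hello from b2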
Host network container model
[root@master chenzx]# docker run --name b2 --network host -it --rm busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:43:84:8F:9A
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:43ff:fe84:8f9a/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:10703077 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8005286 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2802551116 (2.6 GiB)  TX bytes:2896826107 (2.6 GiB)

ens192    Link encap:Ethernet  HWaddr 00:50:56:A2:58:7C
          inet addr:172.16.22.100  Bcast:172.16.22.255  Mask:255.255.255.0
          inet6 addr: fe80::9cf3:d9de:59f:c320/64 Scope:Link
          inet6 addr: fe80::e34:f952:2859:4c69/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4846834 errors:0 dropped:17 overruns:0 frame:0
          TX packets:1920701 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1970381702 (1.8 GiB)  TX bytes:199949362 (190.6 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:316 errors:0 dropped:0 overruns:0 frame:0
          TX packets:316 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:35923 (35.0 KiB)  TX bytes:35923 (35.0 KiB)

veth444969e Link encap:Ethernet  HWaddr 7E:3C:4A:6A:52:65
          inet6 addr: fe80::7c3c:4aff:fe6a:5265/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:41635 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34905 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:21175416 (20.1 MiB)  TX bytes:7734711 (7.3 MiB)

veth49b8902 Link encap:Ethernet  HWaddr 36:68:B9:A7:04:56
          inet6 addr: fe80::3468:b9ff:fea7:456/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:378 (378.0 B)  TX bytes:1026 (1.0 KiB)
As you can see, in the host network model the IP seen inside the container is the host's IP. This is useful when you want the benefits of packaging an application as a container but also want it to use the host's network directly.
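For example, a sketch of the typical use (it assumes port 80 on the host is free; the IP is the host's ens192 address shown above):

docker run -d --name hostweb --network host nginx    # nginx binds directly to the host's port 80, no -p needed
curl http://172.16.22.100                             # reachable on the host IP, also from other machines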
Change the default network segment of docker0
Reprint: http://blog.51cto.com/wsxxsl/2060761
Step 1: delete the original configuration
sudo service docker stop
sudo ip link set dev docker0 down
sudo brctl delbr docker0
sudo iptables -t nat -F POSTROUTING
Step 2: create a new bridge
sudo brctl addbr docker0
sudo ip addr add 172.17.0.1/16 dev docker0
sudo ip link set dev docker0 up
Step 3: configure the Docker configuration file
Note: the following configuration is appended here.

cat /etc/docker/daemon.json
{
  "bip": "172.17.0.1/16"
}
Custom docker0 bridge network property information: /etc/docker/daemon.json

{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "bip": "172.17.0.1/16",
  "dns": ["114.114.114.114", "8.8.8.8"]
}
Note: bip sets the IP address of docker0; containers created afterwards will be on the same network segment as docker0.
Step 4: restart docker

systemctl restart docker    (or: service docker restart)
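After the restart, one way to confirm that the bridge picked up the new address (a quick check, assuming the daemon came back up cleanly):

ip addr show docker0
docker network inspect --format '{{(index .IPAM.Config 0).Subnet}}' bridge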
Create a custom bridge
[root@master chenzx]# docker network create -d bridge --subnet "172.26.0.0/16" --gateway "172.26.0.1" mybr0
4e70305bb5c793e457f57486aef0ac9ac0567432a73a1b6884898fc4c9a09d06
[root@master chenzx]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
863255cf4b6e        bridge              bridge              local
ae048711b7aa        host                host                local
4e70305bb5c7        mybr0               bridge              local
77190e2a8be4        none                null                local
[root@master chenzx]# ifconfig
br-4e70305bb5c7: flags=4099  mtu 1500
        inet 172.26.0.1  netmask 255.255.0.0  broadcast 172.26.255.255
        ether 02:42:01:cb:21:78  txqueuelen 0  (Ethernet)
        RX packets 10703186  bytes 2802559748 (2.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8005375  bytes 2896856389 (2.6 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4419  mtu 1500
        inet 10.42.0.1  netmask 255.255.0.0  broadcast 10.42.255.255
        inet6 fe80::42:43ff:fe84:8f9a  prefixlen 64  scopeid 0x20
        ether 02:42:43:84:8f:9a  txqueuelen 0  (Ethernet)
        RX packets 10703186  bytes 2802559748 (2.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8005375  bytes 2896856389 (2.6 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Rename br-4e70305bb5c7 to docker1
[root@master chenzx]# ifconfig br-4e70305bb5c7 down
[root@master chenzx]# ip link set dev br-4e70305bb5c7 name docker1
[root@master chenzx]# ifconfig docker1 up
[root@master chenzx]# ifconfig
docker0: flags=4419  mtu 1500
        inet 10.42.0.1  netmask 255.255.0.0  broadcast 10.42.255.255
        inet6 fe80::42:43ff:fe84:8f9a  prefixlen 64  scopeid 0x20
        ether 02:42:43:84:8f:9a  txqueuelen 0  (Ethernet)
        RX packets 10703186  bytes 2802559748 (2.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8005375  bytes 2896856389 (2.6 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker1: flags=4099  mtu 1500
        inet 172.26.0.1  netmask 255.255.0.0  broadcast 172.26.255.255
        ether 02:42:01:cb:21:78  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Let's create a container and join the mybr0 network
[root@master chenzx]# docker run --name afdfdfda -it --rm --net mybr0 busybox:latest
Running ifconfig inside the container shows that the container's IP is on the same network segment as mybr0.
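From another shell on the host, you can confirm which network the container joined (assuming the container started above is still running):

docker network inspect mybr0    # the "Containers" section lists the new container with a 172.26.0.x address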
These are all the contents of the article "how to implement container virtualization network in docker". Thank you for reading! I hope the content shared here has been helpful to you.