This article introduces the network models of Docker. In practice, many people run into questions about how these models work, so below the editor walks you through each mode and how to handle common situations. I hope you read it carefully and get something useful out of it!
Default network
When you install Docker, it automatically creates three networks. You can list them with the docker network ls command:
$ docker network ls
NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host
These three networks are built into Docker. When you run a container, you can use the --network flag to specify which network the container should connect to.

The bridge network represents the docker0 network present in all Docker installations. Unless you specify otherwise with the docker run --network= option, the Docker daemon connects containers to this network by default.

When we create a Docker container with docker run, we can use the --net option to specify the container's network mode. Docker has the following four network modes:

Host mode: specified with --net=host.
None mode: specified with --net=none.
Bridge mode: specified with --net=bridge; this is the default.
Container mode: specified with --net=container:NAME_or_ID.
The following describes the various network modes of Docker.
Host
Host mode is equivalent to bridged mode in VMware: the container is on the same network as the host, but it does not get a separate IP address of its own. As we know, Docker uses Linux namespaces to isolate resources: a PID namespace isolates processes, a mount namespace isolates file systems, a network namespace isolates the network, and so on. A network namespace provides an independent network environment, in which network interfaces, routing tables, iptables rules, and so on are isolated from other network namespaces. A Docker container is normally given its own network namespace. If you start the container in host mode, however, it does not get a separate network namespace and instead shares one with the host: the container does not create a virtual network card or configure its own IP, and simply uses the host's IP and ports.
For example, we start a Docker container running nginx in host mode on the machine 10.10.0.186/24, listening on TCP port 80.
# run the container
$ docker run --name=nginx_host --net=host -p 80:80 -d nginx
74c911272942841875f4faf2aca02e3814035c900840d11e3f141fbaa884ae5c

# view the container
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
74c911272942        nginx               "nginx -g 'daemon ..."   25 seconds ago      Up 25 seconds                           nginx_host
When we run a command such as ifconfig inside the container to inspect the network environment, what we see is the host's information. Conversely, to access the application in the container we can simply visit 10.10.0.186:80, without any address translation, just as if it were running directly on the host. Other aspects of the container, such as its file system and process list, remain isolated from the host.
$ netstat -nplt | grep nginx
tcp        0      0 0.0.0.0:80            0.0.0.0:*             LISTEN      nginx: master
Container
Once you understand host mode, container mode is easy to understand. This mode makes a newly created container share a network namespace with an existing container, rather than with the host. The new container does not create its own NIC or configure its own IP; instead it shares the IP address, port range, and so on of the specified container. Apart from the network, the two containers remain isolated in everything else, such as their file systems and process lists. Processes in the two containers can communicate with each other over the lo loopback device.
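A minimal sketch of container mode (the container name web is illustrative): inside the second container, 127.0.0.1:80 reaches the nginx process running in the first.

# start a container that owns the network namespace
$ docker run --name=web -d nginx
# share web's network namespace instead of getting a new one
$ docker run --net=container:web -it --rm alpine /bin/sh
/ # wget -qO- http://127.0.0.1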
None
This mode puts the container in its own network stack but performs no configuration at all. In effect it turns off the container's networking, which is useful when the container does not need a network (for example, a batch job that only needs to write to disk volumes).
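A minimal sketch: a none-mode container starts with only the loopback interface, and any further network configuration is left entirely to you.

$ docker run --net=none -it --rm alpine /bin/sh
/ # ip addr show    # only the lo interface is listed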
Overlay
In Docker 1.7 the networking code was refactored and split out as an independent component (libnetwork), and an overlay network mode was added in subsequent releases. Docker's control over network access has been improving steadily ever since.
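A minimal sketch of an overlay network, assuming the daemon is running in Swarm mode (overlay networks require Swarm mode or an external key-value store); the names and subnet here are illustrative:

# create an overlay network that spans the Swarm nodes
$ docker network create -d overlay --subnet=10.0.9.0/24 my_overlay
# attach a service to it
$ docker service create --name web --network my_overlay nginx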
Bridge
Bridge mode is equivalent to NAT mode in VMware: the container uses an independent network namespace and is attached to the docker0 virtual bridge (this is the default mode). It communicates with the host through the docker0 bridge and iptables NAT rules. Bridge mode is Docker's default network setting: it assigns a network namespace, IP address, and so on to each container, and connects the containers on a host to a virtual bridge. Let's look at this mode in detail.
Topology of bridge mode
When the Docker server starts, it creates a virtual bridge named docker0 on the host, and containers launched on that host are connected to this virtual bridge. A virtual bridge works like a physical switch, so all containers on the host are connected to a layer-2 network through it. Next, Docker needs to assign IPs to the containers. From the private address ranges defined by RFC 1918, Docker picks an address and subnet that differ from the host's and assigns them to docker0; containers connected to docker0 then take an unused IP from that subnet. For example, Docker will use the 172.17.0.0/16 range and assign it to the docker0 bridge (docker0 can be seen with the ifconfig command on the host; it can be thought of as the bridge's management interface and acts as a virtual network card on the host). The network topology in a stand-alone environment is shown below; the host address is 10.10.0.186/24.
[Figure: Docker bridge mode network topology]
The process for Docker to complete the above network configuration is roughly as follows:
1. Create a veth pair (a pair of virtual network interfaces) on the host. veth devices always come in pairs; they form a channel for data, so that traffic entering one end comes out of the other. veth devices are therefore often used to connect two network devices.
2. Docker places one end of the veth pair inside the newly created container and names it eth0. The other end stays on the host with a name such as veth75f9 and is attached to the docker0 bridge, which can be seen with the brctl show command.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02425f21c208       no
3. Assign the container an IP address from the docker0 subnet, and set docker0's IP address as the container's default gateway.
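Roughly the same plumbing can be reproduced by hand with iproute2 and brctl to see what Docker is doing (a sketch only; the namespace name ns1 and the address 172.17.0.100 are illustrative, and Docker performs these steps internally rather than via these commands):

# create a network namespace and a veth pair
$ ip netns add ns1
$ ip link add veth-host type veth peer name veth-ns
$ ip link set veth-ns netns ns1
# attach the host end to the docker0 bridge and bring it up
$ brctl addif docker0 veth-host
$ ip link set veth-host up
# configure the namespace end: an address from the docker0 subnet, default route via docker0
$ ip netns exec ns1 ip link set veth-ns up
$ ip netns exec ns1 ip addr add 172.17.0.100/16 dev veth-ns
$ ip netns exec ns1 ip route add default via 172.17.0.1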
# run the container
$ docker run --name=nginx_bridge --net=bridge -p 80:80 -d nginx
9582dbec7981085ab1f159edcc4bf35e2ee8d5a03984d214bce32a30eab4921a

# view the container
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
9582dbec7981        nginx               "nginx -g 'daemon ..."   3 seconds ago       Up 2 seconds        0.0.0.0:80->80/tcp   nginx_bridge

# view the container's network settings
$ docker inspect 9582dbec7981
"Networks": {
    "bridge": {
        "IPAMConfig": null,
        "Links": null,
        "Aliases": null,
        "NetworkID": "9e017f5d4724039f24acc8aec634c8d2af3a9024f67585fce0a0d2b3cb470059",
        "EndpointID": "81b94c1b57de26f9c6690942cd78689041d6c27a564e079d7b1f603ecc104b3b",
        "Gateway": "172.17.0.1",
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "MacAddress": "02:42:ac:11:00:02"
    }
}
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "9e017f5d4724039f24acc8aec634c8d2af3a9024f67585fce0a0d2b3cb470059",
        "Created": "2017-08-09T23:20:28.061678042-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "9582dbec7981085ab1f159edcc4bf35e2ee8d5a03984d214bce32a30eab4921a": {
                "Name": "nginx_bridge",
                "EndpointID": "81b94c1b57de26f9c6690942cd78689041d6c27a564e079d7b1f603ecc104b3b",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Now that we have looked at the network topology, let's see how containers communicate in bridge mode.
Container communication in bridge mode
In bridge mode, containers connected to the same bridge can communicate with each other. (For security, you can also disable communication between them by adding --icc=false to the daemon options in DOCKER_OPTS, in which case two containers can only communicate when explicitly connected with --link.)
By default, Docker enables communication between containers (--icc=true), meaning all containers on a host can talk to each other without restriction, which can expose them to denial-of-service attacks. Beyond this, Docker controls communication between containers, and between containers and the outside world, through the --ip-forward and --iptables daemon options.
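For example, a minimal sketch of locking this down at the daemon level and selectively re-opening a single path (the container names are illustrative; on older installations the daemon is started via DOCKER_OPTS rather than dockerd):

# start the daemon with inter-container communication disabled
$ dockerd --icc=false
# db is now reachable from web only through the explicit link
$ docker run --name=db -d redis
$ docker run --name=web --link db:db -d nginx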
Containers can also communicate with the outside world. If we look at the iptables rules on the host, we can see why:
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
This rule performs source NAT on packets whose source address is in 172.17.0.0/16 (that is, packets generated by Docker containers) and that are not leaving through docker0, rewriting the source address to the address of the host's network card. That may not be easy to follow, so here is an example. Suppose the host has a network card eth0 with IP address 10.10.101.105/24 and gateway 10.10.101.254, and we ping Baidu (180.76.3.151) from a container on the host whose IP is 172.17.0.2. The IP packet is first sent from the container to its default gateway, docker0; once the packet arrives at docker0, it is on the host. The host's routing table is then consulted, and the packet should go out through the host's eth0 to the host's gateway 10.10.101.254. The packet is forwarded to eth0 and sent out (IP forwarding must be enabled on the host for this to happen). At this point the iptables rule above takes effect: the packet is SNATed and its source address is changed to eth0's address. To the outside world, the packet therefore appears to come from 10.10.101.105, and the Docker container is invisible to the outside.
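Two things on the host make this work and are easy to check: IP forwarding must be enabled, and the MASQUERADE rule must be present in the nat table.

# verify that IP forwarding is enabled on the host
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
# the SNAT (MASQUERADE) rule lives in the POSTROUTING chain of the nat table
$ iptables -t nat -nL POSTROUTING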
So how does an outside machine access a service in a Docker container? Let's first create a container running a web application with the following command, mapping port 80 of the container to port 80 of the host.
$ docker run --name=nginx_bridge --net=bridge -p 80:80 -d nginx
Then look at the iptables rules again; we find one more rule:
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
This rule performs destination NAT on TCP traffic received on the host's eth0 with destination port 80, sending it to 172.17.0.2:80, which is the Docker container we created above. So outside machines only need to visit 10.10.101.105:80 to reach the service in the container.
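To see these rules, or to ask Docker directly about a published port, you can run the following (the output line shows the expected format for the container created above):

# list the DNAT rules Docker added for published ports
$ iptables -t nat -nL DOCKER
# or query the mapping for a specific container
$ docker port nginx_bridge
80/tcp -> 0.0.0.0:80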
In addition, we can customize the IP address, DNS and other information used by Docker, or even use our own defined bridge, but it still works the same way.
Custom network
It is recommended that you use custom bridge networks to control which containers can communicate with each other; on a custom network, container names are automatically resolved to IP addresses via DNS. Docker provides default network drivers for creating these networks: you can create a new bridge, overlay, or Macvlan network, or create a network plugin or remote network for complete customization and control.
You can create as many networks as you need, and you can connect a container to zero or more of them at any time. In addition, you can connect and disconnect a running container from a network without restarting the container. When a container is connected to multiple networks, its external connectivity is provided through the first non-internal network, in lexical order.
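For example (the network and container names here are illustrative):

# attach a running container to a second network, then detach it,
# all without restarting the container
$ docker network connect my_net my_container
$ docker network disconnect my_net my_container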
Next, let's look at Docker's built-in network drivers.
Bridge
A bridge network is the most commonly used network type in Docker. A user-defined bridge network is similar to the default bridge network, but adds some new features and removes some old ones. The following example creates a bridge network and runs a few experiments with containers on it.
$ docker network create --driver bridge new_bridge
After creating the network, you can see that a new bridge has been added (172.18.0.1).
$ ifconfig
br-f677ada3003c: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:2f:c1:db:5a  txqueuelen 0  (Ethernet)
        RX packets 4001976  bytes 526995216 (502.5 MiB)
        RX errors 0  dropped 35  overruns 0  frame 0
        TX packets 1424063  bytes 186928741 (178.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:5fff:fe21:c208  prefixlen 64  scopeid 0x20<link>
        ether 02:42:5f:21:c2:08  txqueuelen 0  (Ethernet)
        RX packets 12  bytes 2132 (2.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24  bytes 2633 (2.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
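Unlike the default bridge, a user-defined bridge gives containers automatic DNS resolution of each other's names. A quick experiment on the new_bridge network created above (the container name web1 is illustrative):

# start a named container on the new bridge
$ docker run --name=web1 --net=new_bridge -d nginx
# a second container on the same bridge can reach it by name
$ docker run --net=new_bridge -it --rm alpine ping -c 2 web1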
Macvlan
Macvlan is a newer approach and something of a turning point in real network virtualization. The Linux implementation is extremely lightweight because, instead of using a traditional Linux bridge for isolation, it simply associates with a Linux Ethernet interface or subinterface to separate networks and connect to the physical network.
Macvlan provides a number of unique features and leaves plenty of room for further innovation with its various modes. Two high-level advantages of this approach are the performance gained by bypassing the Linux bridge and the simplicity of having fewer moving parts. Removing the bridge that traditionally sits between the Docker host NIC and the container interface leaves a very simple setup: container interfaces attach directly to the Docker host interface. Because there is no port mapping involved, external services are easy to reach.
Example usage of Macvlan bridge mode
In Macvlan bridge mode, each container has a unique MAC address, which is used to track the MAC-to-port mapping on the Docker host. A Macvlan network is attached to a parent interface on the Docker host; examples are a physical interface such as eth0, a subinterface such as eth0.10 for 802.1q VLAN tagging (the .10 represents VLAN 10), or even a bonded host adapter that bundles two Ethernet interfaces into a single logical interface. The specified gateway is external to the host and provided by the network infrastructure. Each Macvlan bridge mode Docker network is isolated from the others, and only one network can be attached to a given parent interface at a time; there is a theoretical limit on how many Docker networks a single host adapter can support. Any container on the same subnet can communicate with any other container on the same network without going through a gateway, since macvlan bridges them locally. The usual docker network commands all apply to the macvlan driver. In Macvlan mode, containers on separate networks cannot reach each other unless an external router routes between the two networks/subnets; the same applies to multiple subnets within the same Docker network.
In the following example, the Docker host's eth0 is on the 172.16.86.0/24 network and its default gateway is the external router 172.16.86.1.
Note that for Macvlan bridge mode, the subnet values need to match those of the Docker host's NIC: use the same subnet and gateway as the host Ethernet interface specified with the -o parent= option.
The parent interface used in this example is eth0, on subnet 172.16.86.0/24, and the containers on the Docker network must be on the same subnet as the parent given by -o parent=. The gateway is an external router on the network, not IP masquerading or any other local proxy.
The driver is specified with the -d driver_name option, in this case -d macvlan.
The parent interface -o parent=eth0 is configured as follows:
$ ip addr show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 172.16.86.250/24 brd 172.16.86.255 scope global eth0
Create a macvlan network and run several additional containers:
# Macvlan (-o macvlan_mode= defaults to bridge mode if not specified)
$ docker network create -d macvlan \
    --subnet=172.16.86.0/24 \
    --gateway=172.16.86.1 \
    -o parent=eth0 pub_net

# Run a container on the new network specifying the --ip address
$ docker run --net=pub_net --ip=172.16.86.10 -itd alpine /bin/sh

# Start a second container and ping the first
$ docker run --net=pub_net -it --rm alpine /bin/sh
/ # ping -c 4 172.16.86.10
Look at the container ip and routing table:
ip a show eth0
eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 46:b2:6b:26:2f:69 brd ff:ff:ff:ff:ff:ff
    inet 172.16.86.2/24 scope global eth0

ip route
default via 172.16.86.1 dev eth0
172.16.86.0/24 dev eth0 src 172.16.86.2

# NOTE: the containers can NOT ping the underlying host interfaces as
# they are intentionally filtered by Linux for additional isolation.
# In this case the containers cannot ping the -o parent=172.16.86.250
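If you do need the host to reach the containers, a common workaround outside of Docker itself (shown only as a hedged sketch; 172.16.86.200 is an unused address on the same subnet, chosen for illustration) is to give the host its own macvlan subinterface on the same parent:

# create a macvlan interface on the host, attached to the same parent as the containers
$ ip link add macvlan0 link eth0 type macvlan mode bridge
$ ip addr add 172.16.86.200/32 dev macvlan0
$ ip link set macvlan0 up
# route traffic for a container (172.16.86.10 above) through that interface
$ ip route add 172.16.86.10/32 dev macvlan0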
Example usage of Macvlan 802.1q trunk bridge mode
VLANs (Virtual Local Area Networks) have long been a primary means of virtualizing data center networks, and they are still the main way of isolating broadcast domains in almost all existing networks.
The most common way to assign VLANs is by switch port. Although port-based VLANs are simple to configure, they only suit environments where end devices stay in fixed physical locations. With the spread of mobile working, end devices may no longer connect to the switch through fixed ports, which increases the network management workload. For example, a user may plug into port 1 of the switch this time and port 2 next time; if port 1 and port 2 belong to different VLANs, the administrator must reconfigure the switch for the user to stay in the original VLAN. Clearly this approach is unsuitable for networks whose topology changes frequently. MAC-based VLANs solve this problem effectively: the VLAN is assigned according to the MAC address of the end device, so even if the user changes access port, they remain in the original VLAN.
MAC-based VLANs do not partition by switch port, so a single switch port can accept traffic from multiple MAC addresses; if a switch port has to carry traffic for multiple VLANs, it must be set to trunk mode.
It is very common to run multiple virtual networks on a host at the same time. Linux networking has long supported VLAN tagging (the 802.1q standard) to keep traffic between networks isolated. The Ethernet link to the Docker host can be configured to support 802.1q VLAN IDs by creating Linux subinterfaces, each dedicated to a unique VLAN ID.
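For reference, such a subinterface can also be created by hand with iproute2; a small sketch using VLAN ID 10, matching the example below (Docker's macvlan driver will create the subinterface automatically if it does not already exist when -o parent=eth0.10 is specified):

# create an 802.1q subinterface of eth0 tagged with VLAN ID 10 and bring it up
$ ip link add link eth0 name eth0.10 type vlan id 10
$ ip link set eth0.10 up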
Create a Macvlan network
VLAN ID 10
$ docker network create \
    --driver macvlan \
    --subnet=10.10.0.0/24 \
    --gateway=10.10.0.253 \
    -o parent=eth0.10 macvlan10
Start a container attached to this Macvlan network:
$ docker run --net=macvlan10 -it --name macvlan_test1 --rm alpine /bin/sh
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
21: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:0a:0a:00:01 brd ff:ff:ff:ff:ff:ff
    inet 10.10.0.1/24 scope global eth0
       valid_lft forever preferred_lft forever
You can see that the address 10.10.0.1 has been assigned. Now look at the routing table:
/ # ip route
default via 10.10.0.253 dev eth0
10.10.0.0/24 dev eth0 src 10.10.0.1
Then start another container on the same Macvlan network:
$ docker run --net=macvlan10 -it --name macvlan_test2 --rm alpine /bin/sh
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
22: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:0a:0a:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.10.0.2/24 scope global eth0
       valid_lft forever preferred_lft forever
You can see that the address 10.10.0.2 has been assigned, and the two containers can now ping each other:
/ # ping 10.10.0.1
PING 10.10.0.1 (10.10.0.1): 56 data bytes
64 bytes from 10.10.0.1: seq=0 ttl=64 time=0.094 ms
64 bytes from 10.10.0.1: seq=1 ttl=64 time=0.057 ms
From the creation of these two containers you can see that container IPs are allocated in ascending order from the subnet specified when the network was created.
Of course, when creating a container we can also use --ip to assign an IP address to it manually, as follows:
$ docker run --net=macvlan10 -it --name macvlan_test3 --ip=10.10.0.189 --rm alpine /bin/sh
/ # ip addr show eth0
24: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:0a:0a:00:bd brd ff:ff:ff:ff:ff:ff
    inet 10.10.0.189/24 scope global eth0
       valid_lft forever preferred_lft forever
VLAN ID 20
You can then create a second VLAN network, tagged and isolated by the Docker host; macvlan_mode defaults to macvlan_mode=bridge:
$ docker network create \
    --driver macvlan \
    --subnet=192.10.0.0/24 \
    --gateway=192.10.0.253 \
    -o parent=eth0.20 \
    -o macvlan_mode=bridge macvlan20
After we have created the Macvlan network, we can see the relevant subinterfaces on the docker host, as follows:
$ ifconfig
eth0.10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:16:01:8b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 804 (804.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0.20: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:16:01:8b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
In the /proc/net/vlan/config file you can also see the relevant VLAN information:
$ cat /proc/net/vlan/config
VLAN Dev name    | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
eth0.10        | 10  | eth0
eth0.20        | 20  | eth0
This is the end of "What are the network models of Docker". Thank you for reading. If you want to learn more, you can follow this site; the editor will keep publishing practical articles for you.