
Docker Series 4: Docker Network Virtualization Foundation


I. Introduction to Network Virtualization Technology

1. What is a virtualized network

A virtualized network is a network emulated by the Linux kernel. In fact, the Linux kernel can simulate a variety of network devices:

Simulated network cable (a veth pair): these devices appear in pairs; one end is used inside the container, and the other end is plugged into the switch.

Simulated switch (a bridge): containers are connected to this switch, and if their IPs are in the same network segment, they can communicate with each other.
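As a hedged illustration of these two simulated devices (the names br0, veth0, and veth1 are arbitrary choices for this sketch, not from the original setup), both can be created by hand with iproute2:

[root@host1 ~]# ip link add br0 type bridge
[root@host1 ~]# ip link set br0 up
[root@host1 ~]# ip link add veth0 type veth peer name veth1
[root@host1 ~]# ip link set veth0 master br0
[root@host1 ~]# ip link set veth0 up
[root@host1 ~]# ip link set veth1 up

The veth1 end could then be moved into a container's network namespace (for example with ip link set veth1 netns <pid>), giving the container one end of the simulated cable while the other end stays plugged into the bridge.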

There are many solutions for network virtualization, such as:

OVS (Open vSwitch)

SDN (software-defined networking)

There can be multiple virtual switches on the physical machine, and containers can be connected to different virtual switches. If two virtual switches are not in the same network segment, how do they communicate?

Forwarding is required at this point.

It can be forwarded by the kernel (kernel routing).

It can also be forwarded with the help of iptables, as in the sketch below.
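For instance, a minimal sketch of kernel forwarding between two bridge subnets (the subnets 10.0.1.0/24 and 10.0.2.0/24 are assumptions for illustration):

[root@host1 ~]# sysctl -w net.ipv4.ip_forward=1
[root@host1 ~]# iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.0/24 -j ACCEPT
[root@host1 ~]# iptables -A FORWARD -s 10.0.2.0/24 -d 10.0.1.0/24 -j ACCEPT

The first command turns on IP forwarding in the kernel; the two iptables rules allow the forwarded traffic to pass between the segments.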

2. How do containers on two different physical machines communicate?

Method 1: bridged mode (like VMware's bridged networking)

In bridged mode, the physical NIC is used as a switch.

All virtual machines are connected to the switch simulated by this physical network card.

A network card is then simulated for the physical machine itself, and this simulated network card is also connected to the switch.

When a packet arrives, the physical NIC checks the destination MAC address:

If it belongs to vm1, the packet is forwarded to vm1.

If it is the physical machine's own MAC address, the packet is forwarded to the internal simulated network card.

Containers on different hosts can communicate by bridging, that is, using a physical network card as a switch.

Note:

Bridged mode is costly: in a large-scale deployment it puts too many hosts on the same layer-2 network, which inevitably leads to broadcast storms.

Method 2: NAT mode

In NAT mode, each container is still connected to a virtual switch, and the container's gateway must point to the address of this switch.

When the container generates a packet, the data is sent to the virtual switch.

The virtual switch is simulated by the kernel, so the packet is received by the kernel.

The kernel checks whether the destination is itself; if not, it sends the packet out through the physical network card, completing the outbound delivery.

Note: the problem at this time is that although the data can be sent out, it can't come back.

The source address of the packet sent by container C1 is C1's private address; this address is hidden behind the switch, so other hosts on the network cannot route replies back to it.

For replies to work, host H1 must rewrite the packet's source address to H1's own address when sending it out, and record this translation.

The reply is then sent back to host H1, which looks up its address translation table and knows the packet must be forwarded to C1.
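This source-address rewrite plus translation table is exactly what an iptables MASQUERADE (SNAT) rule provides. A minimal sketch on host H1, assuming the containers sit on a 172.17.0.0/16 subnet behind a docker0 bridge:

[root@host1 ~]# iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

This is essentially the rule Docker installs automatically, as shown in the nat table output later in this article.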

Both of the above methods have their own problems.

Bridged mode: containers are directly exposed to the network; with too many containers, broadcast storms result.

NAT mode: containers communicate through NAT, which requires two address translations and is therefore inefficient.

Method 3: overlay network, which is based on tunneling

In tunnel mode, a switch is also simulated, and the container is likewise connected to that switch.

When sending data, the packet's original source IP and source MAC are not modified; instead, an outer IP and MAC header is encapsulated around the original packet (this is what tunneling protocols such as VXLAN do).
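As a hedged sketch of such encapsulation, a VXLAN tunnel interface can be created with iproute2 and attached to a bridge (the VNI 42, multicast group 239.1.1.1, and ens33 uplink are illustrative assumptions; br0 is the bridge from the earlier sketch):

[root@host1 ~]# ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev ens33 dstport 4789
[root@host1 ~]# ip link set vxlan0 master br0
[root@host1 ~]# ip link set vxlan0 up

Frames leaving the bridge through vxlan0 keep their original IP and MAC headers and are wrapped in an outer UDP/IP packet addressed to the remote tunnel endpoint.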

II. Detailed Explanation of Docker Networking

1. The three default networks of Docker

[root@host1 ~]# docker network ls
NETWORK ID      NAME      DRIVER    SCOPE
591c75b7dbea    bridge    bridge    local
386d8dc4beb8    host      host      local
eb7b7cf29f29    none      null      local

2. Bridge mode

This bridge does not bridge a physical network card; it is a software-only switch named docker0.

You can see it with ip addr or ip link show.

[root@host1 ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    ...
2: ens33: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    ...
3: ens37: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    ...
4: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a4:e8:44:11 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@host1 ~]# ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:3f:bf:cf brd ff:ff:ff:ff:ff:ff
3: ens37: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:3f:bf:d9 brd ff:ff:ff:ff:ff:ff
4: docker0: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:a4:e8:44:11 brd ff:ff:ff:ff:ff:ff
6: veth7c1728b@if5: mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 9a:be:3b:60:d7:2e brd ff:ff:ff:ff:ff:ff link-netnsid 0

Start a container

[root@host1 ~]# docker run --rm --name vh2 -it busybox
/ #

At this point, the physical machine has an extra network card device.

[root@host1 ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    ...
2: ens33: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    ...
3: ens37: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    ...
4: docker0: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:a4:e8:44:11 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a4ff:fee8:4411/64 scope link
       valid_lft forever preferred_lft forever
6: veth7c1728b@if5: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 9a:be:3b:60:d7:2e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::98be:3bff:fe60:d72e/64 scope link
       valid_lft forever preferred_lft forever

This veth7c1728b@if5 is the host-side half of the generated veth pair, attached to the docker0 bridge.

In other words, you can think of it as the container plugging a network cable into the switch.

The part before the @ (veth7c1728b) is the host-side end, connected to the virtual switch.

The part after the @ (if5) refers to interface index 5, the container-side end of the pair.

Check the NIC information inside the container:

/ # ip addr
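To confirm which host-side veth a container interface is paired with, one common trick (a sketch, assuming the container interface is named eth0) is to compare interface indexes:

/ # cat /sys/class/net/eth0/iflink
6

The number printed is the interface index of the peer on the host; here it would match entry 6, veth7c1728b, in the ip link show output above.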

With brctl you can manage the bridge and view which host-side veth interfaces are attached to it.

[root@host1 ~]# yum install bridge-utils -y
[root@host1 ~]# brctl show
bridge name    bridge id            STP enabled    interfaces
docker0        8000.0242a4e84411    no             veth7c1728b

In fact, when the docker0 bridge is used, the system automatically generates iptables rules, including a POSTROUTING rule in the nat table:

[root@host1 ~]# iptables -L -n --line-numbers -t nat
Chain PREROUTING (policy ACCEPT)
num  target      prot opt source         destination
1    DOCKER      all  --  0.0.0.0/0      0.0.0.0/0       ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
num  target      prot opt source         destination

Chain OUTPUT (policy ACCEPT)
num  target      prot opt source         destination
1    DOCKER      all  --  0.0.0.0/0      !127.0.0.0/8    ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
num  target      prot opt source         destination
1    MASQUERADE  all  --  172.17.0.0/16  0.0.0.0/0

Chain DOCKER (2 references)
num  target      prot opt source         destination
1    RETURN      all  --  0.0.0.0/0      0.0.0.0/0

There are three kinds of clients that may access a Docker container:

The host itself: for example, accessing a website served by a container from the host.

Other containers on the same host: two containers connected to the same switch on the same physical machine can naturally communicate.

Other hosts, or containers on other hosts: in this case, communication must go through NAT or an overlay network (see the port-publishing sketch below).
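For that third case on the default bridge network, the usual mechanism is publishing a port, which installs a DNAT rule so external clients reach the container through the host's address. A sketch (the nginx image and host port 8080 are illustrative assumptions):

[root@host1 ~]# docker run -d --rm --name web -p 8080:80 nginx
[root@host1 ~]# iptables -t nat -nL DOCKER

The second command would show the DNAT rule mapping host port 8080 to port 80 of the container.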

3. Host mode

A container isolates six types of namespaces:

User

Mount

Pid

Uts

Net

Ipc

Think about it: suppose there are three containers that isolate only three namespaces (user, mount, pid) while sharing the other three (uts, net, ipc). What would that look like?

In that case, each container still has its own file system, user information, and process information, which do not interfere with each other.

But the hostname, network cards, and protocol stack of the containers are shared; that is, they all have the same hostname and addresses.

In this way, one container can access resources on another container simply through 127.0.0.1.

For example, if one container runs apache, another runs mysql, and a third runs php, the three can communicate with each other over the 127.0.0.1 loopback address, because they share the same protocol stack.
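Docker exposes this shared-protocol-stack arrangement as the joined model described later: a container can join another container's network namespace with --network container:<name>. A minimal sketch reusing the vh2 container started earlier (the name vh3 is an assumption):

[root@host1 ~]# docker run --rm --name vh3 --network container:vh2 -it busybox
/ # ip addr

Inside vh3, ip addr would show exactly the same interfaces and addresses as vh2.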

A container can also be made to share the namespaces of the physical machine itself.

If the container shares the physical machine's network namespace, then modifying the NIC inside the container modifies the physical machine's NIC.

This pattern is the second type of Docker network: host, which lets the container use the host's network namespace directly.
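A minimal sketch of host mode (the container name vh4 is an assumption):

[root@host1 ~]# docker run --rm --name vh4 --network host -it busybox
/ # ip addr

Run inside this container, ip addr would list the physical machine's own interfaces (ens33, docker0, and so on).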

4. NULL mode

The third kind of Docker network: null (none).

If a container's network is set to the null network, the container has no network at all. This is useful when you occasionally need a container that does not communicate with external hosts.
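A minimal sketch of the null model:

[root@host1 ~]# docker run --rm --network none -it busybox
/ # ip addr

Only the loopback interface would appear inside such a container.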

5. Classification of container network models

Closed model container (closed)

Bridged model container (bridged): this bridge is a NAT bridge, not a physical bridge, and this mode is the default

Joined model container (joined)

Open model container (open)

6. Viewing a container's network information

View the bridge network model's information:

[root@host1 ~]# docker network inspect bridge
...
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
...
"com.docker.network.bridge.name": "docker0",
...

View the network type of the container

[root@host1 ~]# docker container inspect vh2
...
"NetworkSettings": {
    "Bridge": "",
    ...
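To pull out a single field rather than reading the full JSON, docker inspect also accepts a Go-template filter; for example, to print vh2's IP address on the default bridge network:

[root@host1 ~]# docker container inspect -f '{{.NetworkSettings.IPAddress}}' vh2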
