Docker network model

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

When we use docker run to create a Docker container, we can use the --net option to specify the container's network mode. Docker has the following five network modes:

1. host mode, specified with --net=host.

2. container mode, specified with --net=container:NAME_or_ID.

3. none mode, specified with --net=none.

4. bridge mode, specified with --net=bridge; this is the default.

5. overlay mode, specified with --net=overlay (an overlay network, for cross-host communication).
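A quick way to see what a default Docker installation provides (bridge, host and none appear as built-in networks; container mode reuses another container's namespace, so it is never listed):

```shell
# List the built-in networks; the DRIVER column shows the mode each uses.
docker network ls
# If --net is omitted, docker run attaches the container to the
# default bridge network.
```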

I. host mode

In host mode the container uses the same network as the host: the NIC and IP seen inside the container are those of the host.

As we all know, Docker uses Linux Namespaces to isolate resources: a PID namespace isolates processes, a Mount namespace isolates the file system, a Network namespace isolates the network, and so on. A Network Namespace provides an independent network environment; its network cards, routes, iptables rules, etc. are isolated from other Network Namespaces. A Docker container is normally allocated its own Network Namespace. However, if you start the container in host mode, the container does not get a separate Network Namespace but shares one with the host. The container does not virtualize its own network card or configure its own IP; instead it uses the host's IP and ports.

docker run -it --net=host centos /bin/bash

You can see that the NIC in the container directly reuses the host network. This mode disables the Docker container's network isolation.
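A minimal check that host mode really reuses the host's stack (assuming a centos image is available locally):

```shell
# Record the host's interfaces, then compare with the container's view.
ip addr show > /tmp/host_ifaces.txt
docker run --rm --net=host centos ip addr show > /tmp/container_ifaces.txt
# The two listings should be identical: same NICs, same IPs.
diff /tmp/host_ifaces.txt /tmp/container_ifaces.txt && echo "same network stack"
```

Because the port space is shared too, a service started inside the container binds directly on the host's IP, with no -p mapping needed.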

II. Container mode

Multiple containers share one network and see the same IP.

After understanding host mode, this mode is easy to understand. It makes the newly created container share a Network Namespace with an existing container, rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the IP, port range, and so on of the specified container. Apart from the network, the two containers remain isolated in file system, process list, and so on. The processes of the two containers can communicate through the lo device.

Example:

docker run -it --net=container:169c875f4ba0 centos /bin/bash   # reuse the network of container 169c875f4ba0

You can see that it has the same network environment as container 169c875f4ba0.
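A sketch of two containers sharing one namespace (the container name web is hypothetical):

```shell
# The first container owns the network namespace.
docker run -d --name web centos sleep 3600
# The second joins web's namespace instead of getting its own;
# it reports the same eth0 address as web.
docker run --rm --net=container:web centos ip addr show
# Processes in the two containers can also reach each other on 127.0.0.1.
```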

III. None mode

No networks are configured in this mode.

This mode differs from the previous two. In none mode the Docker container has its own Network Namespace, but no network configuration is done for it: the container has no NIC, IP, routes, and so on. We need to add a network card, configure an IP, etc. for the container ourselves.

Example:

docker run -it --net=none centos /bin/bash

As you can see, no NIC information is configured, so it must be configured manually.
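A sketch of what manual configuration involves (run as root on the host; container name, interface names and subnet are illustrative):

```shell
# Only the loopback device exists inside a none-mode container.
docker run -d --name bare --net=none centos sleep 3600
docker exec bare ip addr show        # shows lo only
# Find the container's PID, then push one end of a veth pair into
# its namespace and configure it from the host side.
pid=$(docker inspect -f '{{.State.Pid}}' bare)
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns "$pid"
nsenter -t "$pid" -n ip addr add 172.18.0.2/24 dev veth-cont  # hypothetical subnet
```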

IV. Bridge mode

Bridge mode is the default network setting for Docker. It assigns a Network Namespace to each container, sets an IP, and so on, and connects the Docker containers on a host to a virtual bridge.

All containers on the same host can communicate with each other under the same network segment.

Example:

docker run -it --net=bridge centos /bin/bash

Comparing with the host's bridge, you can see that the NIC in the container and the host bridge are on the same network segment.

Containers communicate with the outside world through the bridge.
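On a default installation the bridge is docker0, and the pieces can be inspected from the host (the addresses shown are typical defaults):

```shell
# docker0 is the virtual bridge all bridge-mode containers attach to.
ip addr show docker0              # typically 172.17.0.1/16
brctl show docker0                # one veth* port per running container
# Outbound traffic is NATed by a MASQUERADE rule Docker installs:
iptables -t nat -S POSTROUTING | grep MASQUERADE
```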

V. Overlay network mode (cross-host communication)

When containers on two different hosts communicate, the overlay network mode is used. (If host mode were used across hosts, the physical IP addresses could be used to communicate directly.) Overlay virtualizes a network on top of the physical one; in this overlay network there is an address similar to a service gateway, packets are forwarded to the address of the physical server, and finally routed and switched to the IP address of the other server.

How is overlay implemented in the docker container?

Example:

Hosts: 172.16.1.56 and 172.16.1.57. Modify on both:

vim /lib/systemd/system/docker.service

Modify the startup parameters:

--cluster-store specifies the address of consul.

--cluster-advertise tells consul this host's connection address.
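A sketch of the modified ExecStart line, assuming consul will listen on 172.16.1.57:8500 and the host's NIC is named ens33 (both are assumptions; adjust to your environment):

```shell
# /lib/systemd/system/docker.service (fragment, not a complete unit file)
ExecStart=/usr/bin/dockerd -H fd:// \
    --cluster-store=consul://172.16.1.57:8500 \
    --cluster-advertise=ens33:2376
```

The same settings can also be placed in /etc/docker/daemon.json as the cluster-store and cluster-advertise keys.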

Then restart docker:

systemctl daemon-reload

systemctl restart docker.service

Deploy the service-discovery tool consul with docker:

docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h consul progrium/consul -server -bootstrap -ui-dir /ui

Visit http://172.16.1.57:8500/

You can see services that already exist

View the current network mode

Create an overlay network

-d overlay: specifies that driver is overlay

Notice that the scope of ov_net1 is global

Check the network mode on the other machine.

You can see that the ov_net1 network exists there as well.

View network details
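The creation and inspection steps above, as commands (run on either host once consul is up):

```shell
# -d overlay selects the overlay driver; consul replicates the
# definition, so ov_net1 appears on every host with scope "global".
docker network create -d overlay ov_net1
docker network ls
docker network inspect ov_net1    # subnet, gateway, attached containers
```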

Create a container

docker run -it --net=ov_net1 centos /bin/bash

The containers on the two different hosts can ping each other.

Restore netns Namespace

Execute the following command to get the container process number

docker inspect adaea943f075 | grep Pid

ln -s /proc/26398/ns/net /var/run/netns/adaea943f075   # adaea943f075 is the container ID; create the /var/run/netns directory first if it does not exist

View the network namespaces

Restore docker Container netns

ln -s /var/run/docker/netns /var/run/netns

View the network of the specified network namespace

ip netns exec 1-46663fb66b ip addr

View the bridge

ip netns exec 1-46663fb66b brctl show

Implementation steps

From the perspective of this communication process, the steps in the cross-host communication process are as follows:

The network namespace of the container is connected with the network namespace of the overlay network through a veth pair. When the container communicates with the outside world, the veth pair acts as a network cable, sending traffic into the overlay network's namespace.

The opposite end of the container's veth pair, eth3, is bridged with the vxlan device through the Linux bridge br0; br0 acts as a virtual switch on the host. If the target address is on the same host, the containers communicate directly; if not, communication crosses hosts through the vxlan device vxlan1.

When the vxlan1 device is created, the docker daemon assigns it a vxlan tunnel ID, which provides network isolation.

The docker hosts in the cluster store and share data through the key/value store, and learn which containers are running on each host through the gossip protocol on port 7946. The daemon generates a static MAC forwarding table on the vxlan1 device from this data.

According to the setting of the static MAC forwarding table, the traffic is forwarded to the network card of the host through UDP port 4789.

Based on the vxlan tunnel ID in the traffic packet, the traffic is forwarded to the network namespace of the overlay network of the opposite host.

The br0 bridge in the network namespace of the host overlay network acts as a virtual switch, forwarding traffic to the corresponding container according to the MAC address.
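The devices named in the steps above can be observed from the overlay network's namespace (the namespace name 1-46663fb66b is taken from the earlier example and will differ per system):

```shell
# The vxlan device shows the tunnel ID (VNI) and UDP port 4789.
ip netns exec 1-46663fb66b ip -d link show vxlan1
# The static MAC forwarding table programmed by the docker daemon.
ip netns exec 1-46663fb66b bridge fdb show dev vxlan1
```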

Although the above network communication model can realize the cross-host communication of the container, there are still some defects, which cause inconvenience in practical use, such as:

Because the vxlan network and the host network are not in the same network environment by default, docker solves host-to-container communication by adding an extra NIC, eth2, to each container on the overlay network as a channel between host and container. As a result, when consuming a container's service you must choose different NIC addresses depending on where the access comes from, which is inconvenient.

Containers can only expose services through port binding; the outside world cannot simply use a container's IP to access its services.

Judging from the above communication process, the native overlay network communication must rely on docker daemon and key/value storage to achieve network communication. There are many constraints, and the container may not be able to communicate across hosts for a period of time after startup, which is unreliable for some sensitive applications.

How can a container that is not on the overlay network namespace communicate with containers that are?

Use docker network connect:

docker network connect ov_net1 containId   # container name or ID; the container in the other network namespace
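A sketch with a hypothetical container name:

```shell
# Attach an existing container to the overlay network; it gains an
# extra interface with an address on ov_net1's subnet.
docker network connect ov_net1 web
docker exec web ip addr show      # now shows an additional ethN device
```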
