
Tips for Using Flannel in a Docker Cluster


When you combine multiple server nodes into a Docker cluster, you need to set up the cluster network; by default, containers on different hosts cannot reach each other. Let's first analyze why.

Container interconnection across hosts

The following figure describes a simple cluster network. In this cluster, there are two servers, A and B, and each server has two network cards connected to the public network and the private network, respectively. The two servers can reach each other over the private network. Docker is installed on both server nodes, and four containers, A/B/C/D, are running.

Each server node has a docker0 bridge, a virtual device created when Docker starts. Every container is attached to the docker0 bridge, and each container's IP is assigned automatically by Docker.
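To see this default behavior for yourself, here is a minimal sketch using the Docker SDK for Python (assuming a local Docker daemon and the `docker` package are installed): it starts a throwaway container on the default bridge network and prints the IP Docker assigned to it.

```python
import docker  # pip install docker; assumes a local Docker daemon is running

client = docker.from_env()

# Start a throwaway container on the default bridge network (docker0)
container = client.containers.run("alpine", "sleep 60", detach=True)
container.reload()  # refresh attributes so NetworkSettings is populated

# The IP Docker assigned automatically from the docker0 bridge's subnet
print(container.attrs["NetworkSettings"]["IPAddress"])

container.remove(force=True)  # clean up
```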

However, this default network setup does not support container interconnection across hosts, for two reasons.

First, there is no valid route for reaching a container on another host.

For example, when container A wants to access container D at 192.168.1.4, host A has no idea which network device to send that packet to; it does not even know that container D exists inside host B.

Second, container network segments conflict across nodes.

By default, Docker assigns an IP segment to the docker0 bridge when it starts. Without coordination, the container networks on multiple nodes may conflict. For example, in the figure above, both hosts use the 192.168.1.1/24 segment, so container IP conflicts occur, such as between containers B and C.

Once these two problems are solved, cross-host container interconnection becomes possible.

Setting up the cluster network with Pulse Cloud

Cluster network setup is easy with Pulse Cloud: when adding a cluster, you only need to set the cluster's network type to Flannel.

Flannel is software dedicated to container network interconnection. Pulse Cloud automatically deploys Flannel on your server nodes to implement it.

When you set up Flannel, you can specify the container LAN segment and subnet mask. As shown in the figure above, if you select the LAN segment 172.16.0.0/12 with subnet mask 255.255.240.0, then 256 subnets can be allocated in the cluster network, with IP segments 172.16.0.0/20, 172.16.16.0/20, 172.16.32.0/20, and so on, and 4096 IPs can be allocated within each subnet. The docker0 bridge of each node uses one subnet, and each container uses an IP within that subnet, so we can build the network shown in the following figure.
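As a quick sanity check of that arithmetic, here is a small Python sketch using only the standard-library ipaddress module, with the addresses from the example above:

```python
import ipaddress

# Container LAN segment and per-host subnet size from the example above
cluster_net = ipaddress.ip_network("172.16.0.0/12")

# Split the /12 into /20 subnets (mask 255.255.240.0), one per host's docker0 bridge
subnets = list(cluster_net.subnets(new_prefix=20))

print(len(subnets))                   # 256 subnets in the cluster network
print(subnets[0], subnets[1])         # 172.16.0.0/20 172.16.16.0/20
print(subnets[0].num_addresses)       # 4096 addresses per subnet
print(len(list(subnets[0].hosts())))  # 4094 usable host addresses
```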

In the figure, the docker0 of host A is assigned the 172.16.0.1/20 subnet, and the docker0 of host B is assigned the 172.16.16.1/20 subnet. Both subnets belong to the virtual network managed by Flannel, 172.16.0.0/12, represented by dotted lines in the figure.

At this point, under the coordination of Flannel, the Docker subnet IPs on the hosts no longer conflict. In addition, Flannel maintains the routing rules of the container network, so container A can access container D via 172.16.16.3, achieving cross-host container interconnection.
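Conceptually, the routing information Flannel maintains is just a mapping from each host's container subnet to that host's address. The following is a simplified Python sketch of that lookup, not Flannel's actual data structures or API; the host IPs 10.0.0.2 and 10.0.0.3 are illustrative:

```python
import ipaddress

# Conceptual sketch: the mapping from container subnet to the host that owns it
subnet_to_host = {
    ipaddress.ip_network("172.16.0.0/20"):  "10.0.0.2",  # host A (example networking IP)
    ipaddress.ip_network("172.16.16.0/20"): "10.0.0.3",  # host B (example networking IP)
}

def next_hop(container_ip: str) -> str:
    """Find which host a container IP lives on, i.e. where to forward the packet."""
    ip = ipaddress.ip_address(container_ip)
    for subnet, host in subnet_to_host.items():
        if ip in subnet:
            return host
    raise LookupError(f"no route for {container_ip}")

print(next_hop("172.16.16.3"))  # 10.0.0.3 -> traffic for container D goes to host B
```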

The container network maintained by Flannel is a virtual network, and the dotted line in the diagram is only an abstraction. If you are interested in how Flannel is implemented, consult the official Flannel documentation.

Some notes

To keep things simple, the bridge IP and the subnet segment were not described separately above. In the figure, the subnet segment assigned to host A is 172.16.0.0/20, and the first IP in the segment, 172.16.0.1, is used as the IP of the bridge device.

Because the first IP in a segment is used as the bridge device IP and the last IP is used as the broadcast IP, 4096 IPs can theoretically be allocated in a subnet, but in practice only 4094 are available.

When setting up a Pulse Cloud cluster network, the chosen cluster network segment should not conflict with existing networks. For example, if the 10.0.0.0/8 network already exists in the target cluster, choose 172.16.0.0/12 or 192.168.0.0/16 as the container network.
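To verify that a candidate container segment does not overlap an existing network, a small check with Python's standard ipaddress module can help. The first two candidates below are from the example above; 10.1.0.0/16 is a hypothetical bad choice shown for contrast:

```python
import ipaddress

existing = ipaddress.ip_network("10.0.0.0/8")  # network already present in the target cluster

for candidate in ["172.16.0.0/12", "192.168.0.0/16", "10.1.0.0/16"]:
    net = ipaddress.ip_network(candidate)
    print(candidate, "conflicts" if net.overlaps(existing) else "ok")
```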

Networking IP

In the schematic diagram of the Flannel network above, there are three networks: the public network 0.0.0.0/0, the private network 10.0.0.0/8, and the virtual container network 172.16.0.0/12. The container network is emphasized as a virtual network because data on this network must be carried by the other networks; it is an overlay on top of them.

For example, when container A on host A sends data to container D on host B, the data is first routed to the docker0 bridge, then sent by Flannel through host A's real network card to host B's network card. The Flannel running on host B forwards the data on to host B's docker0 bridge, and it finally reaches container D.

So if a host has multiple network cards, as in the figure where one is connected to the public network and the other to the private network, we need to specify which network card/IP Flannel should use to send data. This IP is called the networking IP: it tells the Flannel running on host A which network card/IP to use to reach host B.

Clusters built with Pulse Cloud use each node's public network IP as the networking IP by default. In that case, data communication between nodes travels over the public network. Unless the nodes are interconnected across data centers, we generally want them to transmit data over the internal network to improve performance and reduce costs.

After adding hosts to the cluster, open the Host Settings page and select a networking IP for each host node.

Cluster behind a NAT device

NAT stands for network address translation; common routers are NAT devices. In a network topology with NAT devices, hosts in the LAN have only private network IPs, not public network IPs. The network looks like this:

In this network model, each server node connects to Pulse Cloud through the router at 8.8.8.8, so the only public IP Pulse Cloud can see for every server is 8.8.8.8. As mentioned above, Pulse Cloud uses the public IP 8.8.8.8 as Flannel's networking IP by default; in this case Flannel networking fails, and Flannel may not even start, because no network card on the host actually has the IP 8.8.8.8.
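A quick way to confirm this on a host is to test whether a given IP can actually be bound locally. The following standard-library Python sketch (the IPs are the examples from this article) shows why Flannel cannot use 8.8.8.8 behind NAT:

```python
import socket

def ip_is_local(ip: str) -> bool:
    """Return True if `ip` is assigned to a local network card (so Flannel could bind to it)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind((ip, 0))
        return True
    except OSError:
        return False
    finally:
        s.close()

print(ip_is_local("8.8.8.8"))   # False behind NAT: the public IP lives on the router, not the host
print(ip_is_local("10.0.0.2"))  # True if this private IP is on one of the host's NICs (example IP)
```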

To solve this problem, simply set the networking IP of each node manually.

Hosts at some cloud service providers also sit behind NAT devices. For example, with Aliyun's VPC network, even if a server is bound to the public IP 8.8.8.8, no network card on the host carries that public IP; there is only one private network NIC, because a NAT device sits in between. In this case, you also need to specify the private network IP as the networking IP.
