
How Flannel-UDP works in kubernetes

2025-04-05 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report

Many readers are unsure how Flannel-UDP works in Kubernetes. This article walks through the mechanism step by step; I hope it answers the question for you.

Kubernetes is an excellent orchestration tool for managing containerized applications at scale. However, as you probably know, using Kubernetes is not easy, especially on the networking side. I ran into many problems there, and it took me a long time to figure out how it works.

In this article I want to use the simplest implementation as an example to explain how Kubernetes networking works.

Kubernetes network model

The following figure shows a simple image of a kubernetes cluster:

Pod in kubernetes

Kubernetes manages a cluster of Linux machines (cloud VMs or physical servers, such as ECS instances). On each host, Kubernetes runs any number of Pods, and each Pod can contain any number of containers. The user's application runs in one of these containers.

For Kubernetes, the Pod is the smallest deployable unit. All containers in a Pod share the same network namespace, which means they have the same network interface and can reach each other via *localhost*.

According to the official documentation, the Kubernetes network model requires that:

All containers can communicate with all other containers without NAT; all nodes can communicate with all containers (and vice versa) without NAT; and the IP address a container sees itself as is the same address that others see it as.

We can replace "containers" with "Pods" in the requirements above, because containers share their Pod's network.

Basically, this means that all Pods should be able to communicate freely with any other Pod in the cluster, even when they are on different hosts, and that they identify each other by their own IP addresses, as if the underlying hosts did not exist. In addition, the host should be able to communicate with any Pod using the Pod's IP address, without any address translation.

Kubernetes does not provide a default network implementation; it only defines the model and leaves the implementation to other tools. There are many implementations today, and Flannel is one of them, and one of the simplest. In the following sections, I will explain Flannel's UDP-mode implementation.

The Overlay Network

Flannel was created by CoreOS for Kubernetes networking, and it can also be used as a general-purpose software-defined networking solution for other purposes.

To meet the network requirements of Kubernetes, flannel's idea is simple: create another flat network that runs on top of the host network, the so-called overlay network. All containers (Pods) are assigned an IP address in this overlay network, and they communicate with each other by calling each other's IP address directly.

To help explain, I set up a small test Kubernetes cluster on AWS with three nodes. The network looks like this:

Flannel network

There are three networks in this cluster:

AWS VPC network: all instances are in one VPC subnet, 172.20.32.0/19. They have been assigned IP addresses in this range, and all hosts can connect to each other because they are in the same LAN.

Flannel overlay network: flannel creates another network, 100.96.0.0/16. This is a larger network that can hold 2^16 (65536) addresses, and it spans all Kubernetes nodes. Each Pod will be assigned one address in this network; we will see how flannel does this later.

Intra-host docker network: within each host, flannel assigns a 100.96.x.0/24 subnet to all Pods on that host, which can hold 256 addresses. The docker bridge interface docker0 will use this network to create new containers.

With this design, each container has its own IP address, belonging to the overlay subnet 100.96.0.0/16. Containers within the same host can communicate with each other through the docker bridge interface docker0; this is simple, so I will skip it in this article. To communicate across hosts with containers on other hosts in the overlay network, flannel uses the kernel routing table and UDP encapsulation, as described in the following sections.
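To make the three-level addressing concrete, here is a small Python sketch (illustrative only, not Flannel's actual allocation code) that carves per-host /24 subnets out of the 100.96.0.0/16 overlay network:

```python
# Illustrative only: carve per-host /24 pod subnets out of the
# 100.96.0.0/16 overlay network, mirroring the layout above.
import ipaddress

OVERLAY = ipaddress.ip_network("100.96.0.0/16")

def host_subnet(node_index: int) -> ipaddress.IPv4Network:
    """Return the /24 subnet assigned to the node_index-th host."""
    # subnets(new_prefix=24) yields 100.96.0.0/24, 100.96.1.0/24, ...
    return list(OVERLAY.subnets(new_prefix=24))[node_index]

print(host_subnet(1))  # 100.96.1.0/24 -- node 1's pod subnet
print(host_subnet(2))  # 100.96.2.0/24 -- node 2's pod subnet
```

Real flanneld leases these subnets through etcd rather than picking them by index, but the arithmetic is the same: up to 256 per-host /24 subnets, each holding 256 addresses.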

Communication across host containers

Suppose the container on node 1 with IP address 100.96.1.2 (call it container-1) wants to connect to the container on node 2 with IP address 100.96.2.3 (call it container-2). Let's take a look at how the overlay network enables the packet to get there.

Cross-host communication

First, container-1 creates an IP packet with src: 100.96.1.2 -> dst: 100.96.2.3. Because docker0 is the container's gateway, the packet enters the docker0 bridge.

On each host, flannel runs a daemon called flanneld, which installs routing rules into the kernel's routing table. Here is what node 1's routing table looks like:

admin@ip-172-20-33-102:~$ ip route
default via 172.20.32.1 dev eth0
100.96.0.0/16 dev flannel0 proto kernel scope link src 100.96.1.0
100.96.1.0/24 dev docker0 proto kernel scope link src 100.96.1.1
172.20.32.0/19 dev eth0 proto kernel scope link src 172.20.33.102

As we can see, the destination address of the packet, 100.96.2.3, lies in the larger overlay network, so it matches the second rule. The kernel now knows the packet should be sent to the flannel0 device.
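The rule selection the kernel performs here is longest-prefix matching. A minimal Python simulation of node 1's table (a sketch, not kernel code) shows why 100.96.2.3 goes to flannel0 while a pod address local to this host would go to docker0:

```python
# A minimal simulation of the kernel's longest-prefix-match route
# lookup for node 1's routing table shown above (not kernel code).
import ipaddress

# (destination network, outgoing device), mirroring `ip route` output
routes = [
    ("0.0.0.0/0",      "eth0"),      # default via 172.20.32.1
    ("100.96.0.0/16",  "flannel0"),
    ("100.96.1.0/24",  "docker0"),
    ("172.20.32.0/19", "eth0"),
]

def lookup(dst: str) -> str:
    """Pick the matching route with the longest prefix."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(net), dev)
               for net, dev in routes
               if dst_ip in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("100.96.2.3"))   # flannel0 -- cross-host pod traffic
print(lookup("100.96.1.5"))   # docker0  -- same-host pod traffic
```

The /24 docker0 rule is more specific than the /16 flannel0 rule, so only pods outside this host's own subnet are steered into the TUN device.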

flannel0 is a TUN device created by the flanneld daemon. TUN is a software interface implemented in the Linux kernel that can pass raw IP packets between a user-space program and the kernel. It works in two directions:

When a user program writes an IP packet to the flannel0 device, the packet goes straight into the kernel, and the kernel routes it according to its routing table. Conversely, when an IP packet reaches the kernel and the routing table says it should be routed to the flannel0 device, the kernel sends the packet directly to the process that created the device, the flanneld daemon.

So when the kernel sends our packet to the TUN device, it goes straight into the flanneld process, which sees that the destination address is 100.96.2.3. From the figure you can see that this address belongs to a container running on node 2, but how does flanneld know that?

Flannel stores some information in etcd, a key-value store service, which should not surprise you if you know Kubernetes. For flannel's purposes, we can think of etcd as a regular key-value store.

Flannel stores the subnet-to-host mapping in etcd, which we can view with the etcdctl command:

admin@ip-172-20-33-102:~$ etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/100.96.1.0-24
/coreos.com/network/subnets/100.96.2.0-24
/coreos.com/network/subnets/100.96.3.0-24
admin@ip-172-20-33-102:~$ etcdctl get /coreos.com/network/subnets/100.96.2.0-24
{"PublicIP": "172.20.54.98"}

Each flanneld process queries etcd to learn which host each subnet belongs to, and compares the destination IP address against all the subnet keys stored in etcd. In this example, the address 100.96.2.3 matches the subnet key 100.96.2.0-24, and as we can see, the value stored under that key tells us the node's IP is 172.20.54.98.
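A hypothetical sketch of this lookup, with the subnet map inlined as a plain dict standing in for a real etcd client (node 3's PublicIP is an assumed value, since the etcdctl output above does not show it):

```python
# Hypothetical sketch of flanneld's destination lookup: match the
# destination pod IP against the subnet keys stored in etcd.
# A dict stands in for a real etcd client here.
import ipaddress

subnet_map = {
    "100.96.1.0/24": "172.20.33.102",
    "100.96.2.0/24": "172.20.54.98",
    "100.96.3.0/24": "172.20.37.65",   # assumed value for node 3
}

def node_for(pod_ip: str) -> str:
    """Return the public IP of the host owning pod_ip's subnet."""
    ip = ipaddress.ip_address(pod_ip)
    for subnet, public_ip in subnet_map.items():
        if ip in ipaddress.ip_network(subnet):
            return public_ip
    raise LookupError(f"no subnet registered for {pod_ip}")

print(node_for("100.96.2.3"))  # 172.20.54.98 -- node 2
```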

Now that flanneld knows the destination host, it wraps the original IP packet in a UDP packet, with its own host's IP as the source address and the destination host's IP as the destination address. On each host, the flanneld process listens on a default UDP port: 8285. So the sender only needs to set the destination port of the UDP packet to 8285 and send it over the host network.
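As a toy demonstration of the encapsulation step, the following runs two UDP sockets on loopback: one standing in for flanneld listening on port 8285 on the destination host, the other for the sending flanneld. The "inner IP packet" is just a byte string here; real flanneld reads actual raw IP packets from the TUN device:

```python
# Toy loopback demo of flanneld's UDP encapsulation: the inner IP
# packet travels as the payload of a UDP datagram to port 8285.
import socket

PORT = 8285  # flanneld's default UDP port
inner_packet = b"src:100.96.1.2 dst:100.96.2.3 payload"

# Stand-in for flanneld on the destination host
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", PORT))
receiver.settimeout(2)

# Stand-in for flanneld on the source host: wrap and send
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(inner_packet, ("127.0.0.1", PORT))

data, _ = receiver.recvfrom(4096)
print(data == inner_packet)  # the inner packet arrives intact
sender.close()
receiver.close()
```

The key point the demo illustrates is that the encapsulated packet is, to the host network, just ordinary UDP traffic between two node IPs.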

When the UDP packet arrives at the destination host, the kernel's IP stack delivers it to the flanneld process, since that is the user process listening on UDP port 8285. flanneld then extracts the payload of the UDP packet, which is the original IP packet generated by the source container, and simply writes it to the TUN device flannel0, from where it passes directly into the kernel. This is how TUN works, as described above.

Just as on node 1, the routing table determines the destination of this packet. Let's take a look at node 2's routing table:

admin@ip-172-20-54-98:~$ ip route
default via 172.20.32.1 dev eth0
100.96.0.0/16 dev flannel0 proto kernel scope link src 100.96.2.0
100.96.2.0/24 dev docker0 proto kernel scope link src 100.96.2.1
172.20.32.0/19 dev eth0 proto kernel scope link src 172.20.54.98

The destination address of the IP packet is 100.96.2.3, and the kernel picks the most specific match, which is the third rule, so the packet is sent to the docker0 device. docker0 is a bridge device to which all containers on this host are connected, so container-2, the final destination, will receive the packet.

Finally, our packet completes its journey to the destination. When container-2 sends a packet back to container-1, the reverse route works in exactly the same way. This is how cross-host container communication works.

Configuring the Docker network

The explanation above omits one point: how do we configure docker to use the smaller subnet 100.96.x.0/24?

It turns out that flanneld writes its subnet information to a file on the host:

admin@ip-172-20-33-102:~$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=100.96.0.0/16
FLANNEL_SUBNET=100.96.1.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=true

This information is used to configure the docker daemon's options, so that docker uses FLANNEL_SUBNET as its bridge network, and the intra-host container network then works:

dockerd --bip=$FLANNEL_SUBNET --mtu=$FLANNEL_MTU
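A small sketch of that glue step, with the subnet.env contents from above inlined as a string (a real setup reads the file flanneld wrote and passes the flags through the docker systemd unit or similar):

```python
# Sketch: derive the dockerd flags from flanneld's subnet.env.
# The file contents are inlined here for illustration.
subnet_env = """\
FLANNEL_NETWORK=100.96.0.0/16
FLANNEL_SUBNET=100.96.1.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=true
"""

# Parse KEY=VALUE lines into a dict
env = dict(line.split("=", 1) for line in subnet_env.splitlines())

flags = f"dockerd --bip={env['FLANNEL_SUBNET']} --mtu={env['FLANNEL_MTU']}"
print(flags)  # dockerd --bip=100.96.1.1/24 --mtu=8973
```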

Packet replication and performance

Newer versions of flannel do not recommend UDP encapsulation for production; it should be used only for debugging and testing. One of the reasons is performance.

Although the flannel0 TUN device provides an easy way to get packets into and out of the kernel, it degrades performance: packets must be copied back and forth between user space and kernel space:

Packet replication

As shown above, on its way from the source container a packet must be copied three times between user space and kernel space, which significantly increases network overhead, so UDP mode should be avoided in production if possible.

Flannel is one of the simplest implementations of the Kubernetes network model. It uses the existing Docker bridge network plus an extra TUN device, with a daemon performing UDP encapsulation. I explained the details of the core part, cross-host container communication, and briefly covered the performance cost.
