A Comparison of Kubernetes CNI Network Plug-ins


Many people who are new to Kubernetes are unsure how to compare the available CNI network plug-ins. This article surveys the most popular options, how they work, and where each one fits best, so that you can choose the plug-in that suits your environment.

Introduction

Network architecture is one of the more complex aspects of Kubernetes and a frequent headache for users. The Kubernetes network model itself imposes certain requirements on specific network functions, but leaves room for flexibility in how they are implemented. As a result, the industry has produced many different network solutions tailored to specific environments and requirements.

CNI, the Container Network Interface, is a standard designed to make it easier to configure container networking when containers are created or destroyed. In this article we will explore and compare the most popular CNI plug-ins: Flannel, Calico, Weave, and Canal (technically a combination of multiple plug-ins). These plug-ins not only ensure that Kubernetes' network requirements are met, but also provide cluster administrators with the specific network features they need.

Background

A container network is the mechanism by which a container connects to other containers, to the host, and to external networks such as the Internet. Container runtimes offer a variety of network modes, each of which produces a different experience. For example, Docker can configure the following networks for a container by default (a short sketch follows below):

None: adds the container to a container-specific network stack with no external connectivity.

Host: adds the container to the host's network stack, with no isolation.

Default bridge: the default network mode; containers can reach one another by IP address.

Custom bridge: a user-defined bridge network that offers more flexibility, better isolation, and other convenient features.

Docker also allows users to configure more advanced networks (including multi-host overlay networks) through other drivers and plug-ins.
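To make these modes concrete, here is a small sketch, written in Go for consistency with the later examples, that simply shells out to the docker CLI (assumed to be installed, with the daemon running) and prints the interfaces each mode exposes. The image and network names are illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a docker CLI command and prints its combined output.
// This sketch assumes the docker CLI is available and the daemon is running.
func run(args ...string) {
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("$ docker %v\n%s\n", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// None: isolated stack, only a loopback interface is visible.
	run("run", "--rm", "--network", "none", "alpine", "ip", "addr")

	// Host: the container sees the host's interfaces directly.
	run("run", "--rm", "--network", "host", "alpine", "ip", "addr")

	// Default bridge: an eth0 with an address from the docker0 bridge subnet.
	run("run", "--rm", "alpine", "ip", "addr")

	// Custom bridge: a user-defined network with better isolation and embedded DNS.
	run("network", "create", "--driver", "bridge", "custom-net")
	run("run", "--rm", "--network", "custom-net", "alpine", "ip", "addr")
}
```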

CNI was created to provide a framework for dynamically setting up the appropriate network configuration and resources when containers are created or destroyed. The CNI specification, linked below, outlines the plug-in interface used to configure the network, which lets container runtimes coordinate with plug-ins:

https://github.com/containernetworking/cni/blob/master/SPEC.md

The plug-in is responsible for configuring and managing IP addresses on the interface, and typically handles IP management, per-container IP allocation, and multi-host connectivity. The container runtime invokes the plug-in when a container starts, so that it can allocate an IP address and configure the network, and calls it again when the container is deleted so those resources can be cleaned up.
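To make this flow concrete, here is a minimal sketch of how a runtime-like process could invoke a CNI plug-in binary by hand. Per the CNI specification, the operation goes in the CNI_COMMAND environment variable, the network configuration arrives as JSON on stdin, and the plug-in replies with a JSON result on stdout. The bridge and host-local plug-ins referenced here ship with the standard CNI plug-ins; the paths, subnet, and names are illustrative defaults, not requirements.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// netConf is a minimal CNI network configuration. The "bridge" plug-in and
// its "host-local" IPAM delegate are standard CNI plug-ins; the subnet and
// names below are only illustrative.
const netConf = `{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}`

// invoke runs a CNI plug-in binary the way a runtime would: the operation
// (ADD or DEL) and container details go in CNI_* environment variables and
// the network config goes to stdin. The plug-in prints a JSON result.
func invoke(command, containerID, netnsPath string) (string, error) {
	cmd := exec.Command("/opt/cni/bin/bridge") // conventional plug-in directory
	cmd.Env = []string{
		"CNI_COMMAND=" + command, // ADD, DEL, or CHECK
		"CNI_CONTAINERID=" + containerID,
		"CNI_NETNS=" + netnsPath, // e.g. /var/run/netns/<id>
		"CNI_IFNAME=eth0",        // interface name inside the container
		"CNI_PATH=/opt/cni/bin",  // where delegate plug-ins (host-local) live
	}
	cmd.Stdin = bytes.NewBufferString(netConf)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// ADD when the container starts; DEL with the same arguments to clean up.
	result, err := invoke("ADD", "demo-container", "/var/run/netns/demo")
	if err != nil {
		fmt.Println("plug-in returned an error:", err)
	}
	fmt.Println(result)
}
```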

The runtime or orchestrator decides which network a container should join and which plug-in to call. The plug-in adds an interface to the container's network namespace as one end of a veth pair, then makes the corresponding changes on the host, such as attaching the other end of the veth to a bridge. Finally, it allocates an IP address and sets up routes by calling a separate IPAM (IP Address Management) plug-in.
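Seen from the plug-in's side, the same protocol looks like the sketch below. It only demonstrates the calling convention: the steps a real plug-in would perform (creating the veth pair, attaching it to a bridge, delegating to the IPAM plug-in) are indicated as comments rather than implemented, since a production plug-in would typically use netlink and the reference CNI helper libraries for that work.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// conf mirrors the fields of the network configuration the runtime sends on
// stdin. Only the fields used by this sketch are declared.
type conf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
	IPAM       struct {
		Type   string `json:"type"`
		Subnet string `json:"subnet"`
	} `json:"ipam"`
}

func main() {
	var c conf
	if err := json.NewDecoder(os.Stdin).Decode(&c); err != nil {
		fmt.Fprintln(os.Stderr, "bad network config:", err)
		os.Exit(1)
	}

	netns := os.Getenv("CNI_NETNS")   // container network namespace path
	ifname := os.Getenv("CNI_IFNAME") // interface to create, e.g. eth0

	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		// A real plug-in would now:
		//   1. create a veth pair and move one end into netns as ifname,
		//   2. attach the host end to a bridge,
		//   3. exec the IPAM plug-in named in c.IPAM.Type to get an address,
		//   4. configure the address and routes inside the namespace.
		// Here we only print a placeholder result in the spec's JSON shape.
		fmt.Printf(`{"cniVersion": %q, "interfaces": [{"name": %q, "sandbox": %q}]}`,
			c.CNIVersion, ifname, netns)
	case "DEL":
		// A real plug-in would release the IPAM allocation and delete the veth.
	default:
		fmt.Fprintln(os.Stderr, "unsupported CNI_COMMAND")
		os.Exit(1)
	}
}
```

By default, Kubernetes looks for plug-in configuration under /etc/cni/net.d and for plug-in binaries under /opt/cni/bin.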

In Kubernetes, the kubelet invokes the plug-ins it finds at the appropriate time to automatically configure networking for the pods it launches.

Terminology

Before comparing the CNI plug-ins, it helps to go over the terms that come up repeatedly when discussing networking. Whether you are reading this article or encounter other CNI-related material later, these common terms are useful to know.

Some of the most common terms include:

Layer 2 network: the "data link" layer of the OSI (Open Systems Interconnection) network model. A layer 2 network handles the delivery of frames between two adjacent nodes on a network. Ethernet is a noteworthy example of a layer 2 network, with MAC addressing handled as a sublayer.

Layer 3 network: the network layer of the OSI network model. The main concern of a layer 3 network is to route packets between hosts above a layer 2 connection. IPv4, IPv6, and ICMP are examples of layer 3 network protocols.

VXLAN: stands for "Virtual Extensible LAN". VXLAN helps enable large cloud deployments by encapsulating layer 2 Ethernet frames inside UDP datagrams. VXLAN virtualization is similar to VLAN but offers greater flexibility and capacity (VLANs are limited to 4096 network IDs, while VXLAN's identifier is 24 bits wide). VXLAN is an encapsulation and overlay protocol that runs on top of an existing network; a small sketch of this encapsulation appears after this list of terms.

Overlay network: Overlay network is a virtual logical network based on the existing network. Overlay networks are often used to provide useful abstractions on top of existing networks and to separate and protect different logical networks.

Encapsulation: the process of wrapping network packets in an additional layer to provide extra context and information. In overlay networks, encapsulation is used to translate from the virtual network to the underlying address space so that packets can be routed to a different location, where they are de-encapsulated and continue on to their destination.

Mesh network: a mesh network (Mesh network) is a network in which each node is connected to many other nodes for collaborative routing and greater connectivity. Mesh networks allow routing through multiple paths, providing a more reliable network. The disadvantage of mesh is that each additional node adds a lot of overhead.

BGP: stands for Border Gateway Protocol and is used to manage the routing of packets between edge routers. BGP helps figure out how to send packets from one network to another by considering available paths, routing rules, and specific network policies. BGP is sometimes used as a routing mechanism in CNI plug-ins rather than an encapsulated overlay network.
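As promised above, here is a small, self-contained sketch of the VXLAN encapsulation idea: an 8-byte VXLAN header carrying a 24-bit VNI is prepended to the inner Ethernet frame, and the result travels as the payload of a UDP datagram (conventionally on port 4789) across the underlay network. The inner frame bytes are placeholders; the point is the header layout.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// vxlanEncap prepends the 8-byte VXLAN header to an inner Ethernet frame.
// The result is what would be carried as the payload of a UDP datagram
// (conventionally destined to port 4789) on the underlay network.
func vxlanEncap(vni uint32, innerFrame []byte) []byte {
	header := make([]byte, 8)
	header[0] = 0x08 // flags: the "I" bit marks the VNI field as valid
	// The VNI is a 24-bit value stored in bytes 4-6; byte 7 is reserved.
	binary.BigEndian.PutUint32(header[4:8], vni<<8)
	return append(header, innerFrame...)
}

func main() {
	// A placeholder "Ethernet frame"; a real one would start with the
	// destination and source MAC addresses of the inner, virtual network.
	inner := []byte{0xde, 0xad, 0xbe, 0xef}

	// 24 bits of VNI allow roughly 16 million virtual networks, versus 4096 VLAN IDs.
	packet := vxlanEncap(5000, inner)
	fmt.Printf("outer payload: % x\n", packet)
}
```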

Now that you are familiar with the terminology and the technologies that underpin the various plug-ins, we can explore some of the most popular CNI plug-ins.

CNI comparison

Flannel

Link: https://github.com/coreos/flannel

Flannel, developed by CoreOS, is probably the most straightforward and popular CNI plug-in. It is one of the most mature networking options in the container orchestration space, designed to provide better networking between containers and between hosts. As the CNI concept took hold, the Flannel CNI plug-in was an early entrant.

Compared with the other options, Flannel is relatively easy to install and configure. It is packaged as a single binary, flanneld, and many common Kubernetes cluster deployment tools and distributions can install Flannel by default. Flannel can store its state via the Kubernetes API in the cluster's existing etcd, so it does not need a dedicated datastore.

Flannel configures a layer 3 IPv4 overlay network. It creates a large internal network that spans every node in the cluster. Within this overlay, each node is given a subnet from which it assigns IP addresses internally. As pods are created, the Docker bridge interface on each node assigns an address to each new container. Pods on the same host communicate over the Docker bridge, while traffic between pods on different hosts is encapsulated by flanneld in UDP packets and routed to the appropriate destination.

Flannel has several different types of backends available for encapsulation and routing. The default and recommended method is to use VXLAN because VXLAN performs better and requires less manual intervention.
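For context, Flannel's own configuration is a small JSON document (commonly called net-conf.json and provided via etcd or, in kube-flannel deployments, via a ConfigMap) that names the cluster-wide pod network and the backend. The sketch below, written in Go only to stay consistent with the other examples in this article, reproduces the shape of that document; the 10.244.0.0/16 network is the value commonly seen in example manifests, not a requirement.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// netConf mirrors the structure of flannel's net-conf.json: one cluster-wide
// network from which each node is given a subnet, and a backend choice.
type netConf struct {
	Network string  `json:"Network"`
	Backend backend `json:"Backend"`
}

type backend struct {
	Type string `json:"Type"` // "vxlan" is the default and recommended backend
}

func main() {
	conf := netConf{
		Network: "10.244.0.0/16", // pod network; each node gets a /24 slice by default
		Backend: backend{Type: "vxlan"},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}
```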

Overall, Flannel is a good choice for most users. From an administrative point of view, it provides a simple network model, and users only need some basic knowledge to set up an environment suitable for most use cases. Generally speaking, using Flannel at an early stage is a safe choice until you start to need something it can't provide.

Calico

Link: https://github.com/projectcalico/cni-plugin

Calico is another popular networking choice in the Kubernetes ecosystem. While Flannel is generally regarded as the simplest option, Calico is known for its performance and flexibility. Calico's feature set is more comprehensive, providing not only network connectivity between hosts and pods but also network security and policy management. The Calico CNI plug-in wraps Calico's functionality within the CNI framework.

On a freshly provisioned Kubernetes cluster that meets the system requirements, users can quickly deploy Calico by applying a single manifest file. If you are interested in Calico's optional network policy features, you can apply additional manifests to the cluster to enable them.

Although the steps needed to deploy Calico look fairly simple, the network environment it creates has both simple and complex characteristics. Unlike Flannel, Calico does not use an overlay network. Instead, Calico configures a layer 3 network that uses the BGP routing protocol to route packets between hosts. This means packets do not need to be wrapped in an extra layer of encapsulation as they move between hosts; BGP routing directs them natively, without any additional packaging of traffic.

Beyond the performance benefits, this also lets users troubleshoot with more conventional methods when network problems occur. While encapsulation with technologies such as VXLAN works well, the extra processing makes packets harder to trace. With Calico, standard debugging tools have access to the same information they would in a simpler environment, making it easier for developers and administrators to understand network behavior.

In addition to basic network connectivity, Calico is well known for its advanced network features. Network policy is one of its most sought-after capabilities. Calico can also integrate with the Istio service mesh to interpret and enforce policy for workloads in the cluster at both the service mesh layer and the network infrastructure layer. This means users can configure powerful rules describing how pods should be able to send and receive traffic, improving security and control over the network environment.
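As an illustration of what such rules look like in practice, the following sketch uses the standard Kubernetes NetworkPolicy API, which Calico enforces, to allow ingress to pods labeled app=db only from pods labeled app=api. It assumes a kubeconfig at the default location and a namespace called demo, both of which are placeholders; note that Calico also offers its own richer policy resources, which are not shown here.

```go
package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; inside a cluster, rest.InClusterConfig() would be used instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Allow ingress to app=db pods only from app=api pods in the same namespace.
	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "db-allow-api", Namespace: "demo"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "db"}},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "api"}},
				}},
			}},
		},
	}

	created, err := client.NetworkingV1().NetworkPolicies("demo").Create(context.TODO(), policy, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created NetworkPolicy:", created.Name)
}
```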

If network policy support matters for your environment, and you also have demands on performance and features, Calico is an ideal choice. Commercial support is available for Calico as well, should you want paid technical support now or in the future. In general, Calico is a good choice when you expect to manage your network actively over time, rather than configuring it once and forgetting about it.

Canal

Link: https://github.com/projectcalico/canal

Canal is also an interesting choice for a number of reasons.

First, Canal is the name of a project that set out to integrate the networking layer provided by Flannel with the network policy capabilities of Calico. As the contributors worked through the details, however, it became apparent that a full integration was not necessary as long as both projects kept standardizing and remained flexible. As a result, the official project was more or less abandoned, though the intended outcome of deploying the two technologies together was achieved. For this reason, even though the project itself no longer exists, the industry habitually refers to the combination of Flannel and Calico as "Canal".

Because Canal is a combination of Flannel and Calico, its advantages lie in the union of the two technologies. The networking layer uses the simple overlay provided by Flannel, which runs in many different deployment environments without extra configuration. For network policy, Calico's powerful rule evaluation supplements the basic network, providing much more security and control.

After making sure the cluster meets the necessary system requirements (https://docs.projectcalico.org/v3.6/getting-started/kubernetes/requirements), users apply two manifests to deploy Canal, which makes its configuration somewhat more involved than either project on its own. Canal is also a good fit for enterprise IT teams that plan to change their networking setup and want to experiment with network policy and build experience before committing to the change.

In general, if you like the network model Flannel provides but find some of Calico's features attractive, Canal is worth trying. From a security perspective, the ability to define network policy rules is a huge advantage and, in many ways, Calico's killer feature. Being able to apply that capability on top of a familiar networking layer means you get a more capable environment while skipping much of the transition effort.

Weave

Link: https://www.weave.works/oss/net/

Weave, from Weaveworks, is a CNI network option for Kubernetes whose model differs from all of the solutions discussed so far. Weave creates a mesh overlay network between every node in the cluster, allowing flexible routing between participants. This, combined with several other unique features, lets Weave route intelligently in situations that might otherwise cause problems.

To create the network, Weave relies on a routing component installed on each host in the network. These routers exchange topology information to maintain an up-to-date view of the available network environment. When traffic needs to reach a pod on a different node, the Weave routing component automatically decides whether to send it over the "fast data path" or fall back to the "sleeve" packet-forwarding method.

The fast data path relies on the kernel's native Open vSwitch datapath module to forward packets to the appropriate pod without moving in and out of user space multiple times. The Weave router updates the Open vSwitch configuration so the kernel layer has accurate information about how to route incoming packets. Sleeve mode, by contrast, serves as a fallback when the network topology is unsuited to fast data path routing: it is a slower, encapsulating mode that can route packets when the fast data path lacks the necessary routing information or connectivity. As traffic passes through the routers, they learn which peers are associated with which MAC addresses, allowing them to route subsequent traffic more intelligently with fewer hops. The same mechanism also helps each node correct itself when a network change alters the available routes.

Like Calico, Weave also provides network policy capabilities for Kubernetes clusters. Network policy is installed and configured automatically when you set up Weave, so apart from adding network rules there is nothing else to configure. One thing Weave offers that the other options do not is simple encryption of the entire network. Although this adds considerable network overhead, Weave can automatically encrypt all routed traffic: it uses NaCl encryption (http://nacl.cr.yp.to) for sleeve traffic, and, because fast data path traffic has to be encrypted in the kernel as VXLAN traffic, it uses IPsec ESP for the fast data path.

Weave is a good choice for those who want a feature-rich network without a lot of added complexity or administrative burden. It is relatively easy to set up, provides many built-in, automatically configured features, and routes intelligently in scenarios where other solutions might fail. The mesh topology does place a limit on how large the network can reasonably grow, but for most users this is not an issue. Weave also offers paid support, providing services such as troubleshooting for enterprise users.

The CNI standard adopted by Kubernetes allows network solutions in the Kubernetes ecosystem to blossom. A greater variety of options means that most users will be able to find CNI plug-ins that suit their current needs and deployment environment, as well as new solutions when the environment changes. The operational requirements of different enterprises vary greatly, so having a series of mature solutions with different complexity and feature richness will greatly help Kubernetes to provide a consistent user experience while meeting the unique needs of different users.

