How to Understand CNI and CNI Plug-ins in K8s
This article explains in detail how to understand CNI and CNI plug-ins in K8s. Hopefully you will come away with a solid grasp of the relevant concepts after reading it.
Network architecture is one of the more complex aspects of K8s. The K8s network model places certain requirements on the network implementation, so the industry has produced many network solutions to fit specific environments and requirements. CNI stands for Container Network Interface, and it is designed to make it easy to configure a container's network when containers are created or destroyed. In this article, the author walks through how typical network plug-ins work and how to use CNI plug-ins.
I. What is CNI?
First of all, let's introduce what CNI is. Its full name is Container Network Interface: the API interface for container networking.
It is the standard interface through which K8s invokes network implementations. The kubelet uses this standard API to call different network plug-ins, and thereby apply different network configurations. A CNI plug-in is a program that implements this set of CNI API operations. Common CNI plug-ins include Calico, Flannel, Terway, Weave Net, and Contiv.
II. How to use CNI in Kubernetes
K8s decides which CNI plug-in to use through CNI configuration files.
The basic usage is:
First, place a CNI configuration file on each node (/etc/cni/net.d/xxnet.conf, where xxnet.conf is the name of a network configuration file; see the sketch after these steps);
Then, install the binary plug-in that the configuration file refers to;
After a Pod is created on the node, the kubelet executes the CNI plug-in installed in the previous two steps according to the CNI configuration file;
After that step, the Pod's network configuration is complete.
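To make the configuration file concrete, here is a minimal sketch in Go of the common fields such a file contains and how a consumer might parse it. The network name and plug-in type shown are illustrative values, not taken from the article.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NetConf mirrors the minimal common fields of a CNI configuration file
// such as /etc/cni/net.d/xxnet.conf; real plug-ins add their own fields.
type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"` // name of the plug-in binary to execute
}

func main() {
	// Illustrative contents of a config file (values are hypothetical).
	raw := []byte(`{
	  "cniVersion": "0.3.1",
	  "name": "mynet",
	  "type": "flannel"
	}`)

	var conf NetConf
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	fmt.Printf("kubelet would execute the %q binary for network %q\n",
		conf.Type, conf.Name)
}
```

The key field is "type": it names the plug-in binary that the kubelet looks up and executes when setting up a Pod's network.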
The specific process is as follows:
When a Pod is created in the cluster, its configuration is first written through the apiserver, and the Scheduler assigns the Pod to a specific node. Once the kubelet on that node sees that the Pod has been scheduled to it, it performs the creation operations locally. When it reaches the network-setup step, it first reads the configuration file in the directory mentioned above, which declares which plug-in to use; it then executes that CNI plug-in binary, and the plug-in enters the Pod's network namespace to configure the Pod's network. Once that is done, the kubelet completes the Pod creation process and the Pod comes online.
You might feel that the above process has many steps (configuring CNI configuration files, installing binary plug-ins, and so on) and seems complicated.
But it is much easier if we only use CNI plug-ins as users, because many CNI plug-ins already provide one-click installation. Take the commonly used Flannel as an example: a single kubectl apply of Flannel's deployment manifest automatically installs the configuration file and binary on every node.
Once that completes, the CNI plug-in is installed for the entire cluster.
Therefore, if we only need to use a CNI plug-in, many of them already ship one-click installation scripts, and we do not need to care about how Kubernetes configures them internally or how it calls the API.
III. Which CNI plug-in is right for me?
There are many CNI plug-ins in the community, such as Calico, Flannel, and Terway. So which CNI plug-in should we choose in a real production environment?
The answer starts from CNI's several implementation modes: we choose an implementation mode according to the scenario, and then pick a specific plug-in within that mode.
Generally speaking, CNI plug-ins fall into three implementation modes: Overlay, Routing, and Underlay.
The typical characteristic of Overlay mode is that the container IP segment is independent of the host IP segment. Cross-host communication goes through tunnels built between hosts, with packets from the container segment encapsulated inside packets exchanged over the underlying physical network. The advantage of this approach is that it does not depend on the underlying network;
In Routing mode, hosts and containers also belong to different segments. The main difference from Overlay mode is that cross-host communication is done via routing, with no tunnel encapsulation between hosts. Routing does depend partly on the underlying network, however; for example, it may require Layer 2 reachability between hosts;
In Underlay mode, containers sit in the same network layer as the hosts and have equal status. Container-to-container networking relies directly on the underlying network, so this mode depends strongly on the underlying capabilities.
Having understood these three common implementation modes, judge which mode is feasible in your environment and for your requirements, and then look for CNI plug-ins within that mode. With so many plug-ins in the community, though, which mode does each belong to, and how do you choose the one that fits you? We can consider it from the following three aspects.
1. Environmental restrictions
The underlying capabilities supported in different environments are different.
Virtualized environments (such as OpenStack) often impose many network restrictions, for example forbidding direct Layer 2 communication between machines, requiring Layer 3 forwarding with IP addresses, or restricting a machine to certain IPs. Under such a heavily restricted underlying network, only Overlay plug-ins can be chosen, such as Flannel-vxlan, Calico-ipip, or Weave;
In a physical-machine environment there are fewer restrictions on the underlying network; for example, machines under the same switch can communicate directly at Layer 2. For this kind of cluster environment we can choose either Underlay or Routing-mode plug-ins. Underlay means plugging multiple NICs into a physical machine, or doing hardware virtualization on some NICs; Routing mode relies on Linux routing to connect the containers. Both avoid the performance loss of VXLAN encapsulation. Optional plug-ins for this environment include Calico-bgp, Flannel-hostgw, sriov, etc.
Public cloud environments are also virtualized, so the underlying network is more restricted. However, every public cloud vendor tries to adapt containers to improve their performance, so each may provide APIs to configure additional NICs or routing capabilities. On public clouds, try to choose the CNI plug-in provided by the cloud vendor for the best compatibility and performance. Alibaba Cloud, for example, provides the high-performance Terway plug-in.
After considering the environmental restrictions, we should have a shortlist in mind of which plug-ins can and cannot be used. On that basis, we consider the functional requirements.
2. Functional requirements
First, security requirements;
K8s supports NetworkPolicy, meaning that rules declared in a NetworkPolicy can control which Pods may access each other. But not every CNI plug-in enforces NetworkPolicy declarations. If you have this requirement, choose a plug-in that supports it, such as Calico or Weave. A sketch of such a policy follows.
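For illustration, here is a minimal deny-all-ingress NetworkPolicy built with the standard Kubernetes Go API types. The policy name and the app=db label are hypothetical, and the policy only takes effect if the installed CNI plug-in enforces NetworkPolicy.

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Deny all ingress to Pods labeled app=db: an empty Ingress list
	// combined with PolicyTypes=[Ingress] means no inbound traffic is allowed.
	policy := networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "db-deny-ingress"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "db"},
			},
			PolicyTypes: []networkingv1.PolicyType{
				networkingv1.PolicyTypeIngress,
			},
		},
	}
	fmt.Println("would apply policy:", policy.Name)
}
```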
The second is whether resources outside the cluster need to interconnect with resources inside the cluster;
Applications usually start out on virtual machines or physical machines, and after containerization they cannot all be migrated at once, so the traditional VMs or physical machines need to communicate with container IP addresses. To achieve this interoperability, the two sides must either be connected somehow or sit on the same network layer. Here you can choose an Underlay network such as sriov, which puts containers on the same layer as the existing VMs or physical machines. You can also use Calico-bgp: although containers then live in a different segment, Calico can exchange BGP routes with your existing routers, which likewise connects VMs and containers.
The last consideration is K8s's capabilities for service discovery and load balancing.
Service discovery and load balancing in K8s are the K8s Services introduced earlier, but not every CNI plug-in can deliver these two capabilities. For example, in many Underlay-mode plug-ins the Pod's NIC is an Underlay hardware device, or is inserted into the container through hardware virtualization; the Pod's traffic therefore does not pass through the host's network namespace, so the kube-proxy rules configured on the host cannot apply to it.
In that case the plug-in cannot integrate with K8s service discovery. So if you need service discovery and load balancing, check whether your Underlay plug-in supports these two capabilities.
After filtering by functional requirements, very few plug-ins remain. If, after applying the environmental restrictions and functional requirements, there are still three or four candidates left, you can then weigh performance requirements.
3. Performance requirements
We can measure the performance of different plug-ins from the speed of pod creation and the network performance of pods.
Pod creation speed
When we create a batch of Pods, for example during an urgent scale-out at a traffic peak, scaling out 1000 Pods requires the CNI plug-in to create and configure 1000 sets of network resources. Overlay and Routing modes are very fast in this case: the virtualization happens inside the machine, so only kernel interface calls are needed. Underlay mode, by contrast, is relatively slow at Pod creation because it must create underlying network resources. So for scenarios with frequent urgent scale-outs or large batches of Pod creations, prefer Overlay or Routing-mode network plug-ins.
Pod network performance
Pod network performance is mainly reflected in metrics such as network forwarding, bandwidth, PPS, and latency between two Pods. Overlay mode performs worse because it adds a layer of virtualization on the node and must encapsulate and decapsulate packets, which costs CPU and degrades throughput. If you have high network-performance requirements, for scenarios such as machine learning or big data, Overlay mode is not suitable; in those cases we usually choose Underlay or Routing-mode CNI plug-ins.
After going through these three steps of selection, you should be able to find the network plug-in that suits you.
IV. How to develop your own CNI plug-in
Sometimes the community plug-ins cannot meet a specific need. For example, on Alibaba Cloud only Overlay plug-ins such as VXLAN-based ones could originally be used, and the relatively poor performance of Overlay plug-ins could not satisfy some business needs there, so Alibaba Cloud developed the Terway plug-in.
If your environment is special and no suitable network plug-in exists in the community, you can develop your own CNI plug-in.
A CNI plug-in implementation typically consists of two parts (a minimal skeleton of the first part is sketched after this list):
A binary CNI plug-in that configures the Pod's NIC and IP address. Once this step is done, it is as if a network cable has been plugged into the Pod: the Pod has its own IP address and NIC;
A daemon process that manages network connectivity between Pods. This step actually connects the Pod to the network, allowing Pods to communicate with each other.
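As a starting point, here is a minimal sketch of the binary part using the community containernetworking/cni library's skel package. The exact PluginMain signature varies between library versions, and the ADD/DEL/CHECK bodies are stubs; this is an outline under those assumptions, not a complete plug-in.

```go
package main

import (
	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	types100 "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

// cmdAdd is invoked when a Pod is created. args.Netns is the Pod's
// network namespace path and args.StdinData is the JSON config from
// /etc/cni/net.d; a real plug-in would create the veth pair, assign
// the IP address, and set up routes here.
func cmdAdd(args *skel.CmdArgs) error {
	result := &types100.Result{CNIVersion: "1.0.0"}
	return types.PrintResult(result, "1.0.0")
}

// cmdDel tears down what cmdAdd created and releases the Pod's IP.
func cmdDel(args *skel.CmdArgs) error {
	return nil
}

// cmdCheck verifies that the Pod's network is still as expected.
func cmdCheck(args *skel.CmdArgs) error {
	return nil
}

func main() {
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "example CNI plug-in")
}
```

The kubelet (via the container runtime) executes this binary with the operation in the CNI_COMMAND environment variable; the skel package dispatches ADD, DEL, and CHECK to the functions above.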
Plug the Pod into the network
So how is the first step done, plugging the Pod in? It usually proceeds in the following steps:
1. Prepare a network card for the Pod
Usually we use a virtual NIC pair such as veth, with one end placed in the Pod's network namespace and the other end placed in the host's network namespace, thereby connecting the Pod's namespace with the host's.
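A sketch of this step, assuming the third-party github.com/vishvananda/netlink library; the interface names and the way the namespace file descriptor is obtained are illustrative.

```go
package main

import "github.com/vishvananda/netlink"

// setupVeth creates a veth pair and moves one end into the Pod's network
// namespace, identified by an already-open namespace file descriptor.
func setupVeth(hostName, podName string, netnsFd int) error {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: hostName}, // host-side end, e.g. veth2
		PeerName:  podName,                           // Pod-side end, usually eth0
	}
	if err := netlink.LinkAdd(veth); err != nil {
		return err
	}
	peer, err := netlink.LinkByName(podName)
	if err != nil {
		return err
	}
	// Move the Pod-side end into the Pod's namespace; the host side stays put.
	return netlink.LinkSetNsFd(peer, netnsFd)
}

func main() {
	// Requires root privileges and a real netns fd; shown for illustration.
	_ = setupVeth("veth2", "eth0", -1)
}
```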
2. Assign IP addresses to pods
This IP address has one requirement, which we also mentioned when introducing the network model: it must be unique within the cluster. How do we guarantee that each Pod is assigned a cluster-unique IP address?
Generally, when the cluster is created we specify a large Pod segment and then allocate a node segment from it to each node. For example, suppose the cluster Pod segment is 172.16.0.0/16; we then allocate a /24 to each node, so the segments on different nodes are guaranteed not to conflict. Each Pod is then assigned a specific IP address sequentially from its own node's segment; for example, Pod1 gets 172.16.0.1 and Pod2 gets 172.16.0.2. Since IP allocation within a node does not conflict, and different nodes use different segments, no conflicts occur.
In this way every Pod is assigned an IP address that is unique within the cluster. A sketch of this per-node subnetting follows.
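A sketch of the per-node subnetting, using only the Go standard library. The 172.16.0.0/16 cluster segment matches the example above; the node index is a stand-in for however the controller numbers nodes.

```go
package main

import (
	"fmt"
	"net"
)

// nodeSubnet carves the nodeIndex-th /24 out of a /16 cluster Pod segment,
// so each node's Pods draw from a non-overlapping range.
// (Sketch: supports up to 256 nodes.)
func nodeSubnet(clusterCIDR string, nodeIndex int) (*net.IPNet, error) {
	_, cidr, err := net.ParseCIDR(clusterCIDR)
	if err != nil {
		return nil, err
	}
	base := cidr.IP.To4()
	return &net.IPNet{
		IP:   net.IPv4(base[0], base[1], byte(nodeIndex), 0),
		Mask: net.CIDRMask(24, 32),
	}, nil
}

func main() {
	for i := 0; i < 3; i++ {
		s, _ := nodeSubnet("172.16.0.0/16", i)
		fmt.Printf("node%d -> %s\n", i, s) // node0 -> 172.16.0.0/24, ...
	}
}
```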
3. Configure the Pod's IP and routes
Step 1: configure the allocated IP address on the Pod's virtual NIC;
Step 2: configure routes for the cluster segment on the Pod's NIC, so that access traffic goes out through this NIC; the default route is also pointed at this NIC, so traffic bound for the public network leaves through it as well;
Finally, configure a route to the Pod's IP address on the host, pointing at the host-side peer veth (veth2 in this example). This enables routing from the Pod to the host, and also lets traffic from the host for that IP be routed to the peer of the corresponding Pod's NIC. A sketch of this host-side route follows.
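A sketch of the final host-side route, again assuming github.com/vishvananda/netlink; the veth name and Pod IP are the illustrative values used above.

```go
package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

// addPodRoute installs a /32 route on the host so that traffic for the
// Pod's IP is delivered to the host-side end of that Pod's veth pair.
func addPodRoute(hostVeth string, podIP net.IP) error {
	link, err := netlink.LinkByName(hostVeth)
	if err != nil {
		return err
	}
	return netlink.RouteAdd(&netlink.Route{
		LinkIndex: link.Attrs().Index,
		Dst:       &net.IPNet{IP: podIP, Mask: net.CIDRMask(32, 32)},
	})
}

func main() {
	// Route 172.16.0.1/32 via the host-side veth2 (illustrative values).
	_ = addPodRoute("veth2", net.ParseIP("172.16.0.1"))
}
```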
Connect the Pod to the network
We have just plugged the Pod in, that is, assigned it an IP address and routes. How do Pods then communicate with each other? In other words, how is each Pod's IP address made reachable from anywhere in the cluster?
This is usually done in the CNI daemon process, in the following steps:
First, the CNI daemon running on each node learns the IP addresses of all Pods in the cluster along with the nodes they run on. It usually learns this by watching the K8s API Server, obtaining the IPs and nodes of existing Pods, and being notified whenever new Pods and nodes are created. A sketch of such a watch follows.
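A minimal sketch of such a watcher, assuming the standard client-go library and that the daemon runs in-cluster with a suitable service account; a real daemon would feed these events into its tunnel and route programming instead of printing them.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Runs inside the cluster, e.g. as a DaemonSet Pod.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			// A real CNI daemon would associate pod.Status.PodIP with the
			// tunnel or route toward pod.Spec.NodeName here.
			fmt.Printf("pod %s on node %s has IP %s\n",
				pod.Name, pod.Spec.NodeName, pod.Status.PodIP)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching for new Pods and Nodes
}
```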
After obtaining the Pod and Node information, the daemon configures the network to connect them.
First, the daemon creates tunnels to all other nodes in the cluster. "Tunnel" here is an abstraction; the concrete implementation is generally an Overlay tunnel, a VPC routing table on Alibaba Cloud, or BGP routing in your own data center;
Second, it associates the IP addresses of all Pods with the tunnels created in the previous step. "Association" is also an abstraction; the concrete implementation is usually a Linux route, an FDB forwarding entry, or an OVS flow table. A Linux route configures which node an IP address should be routed to; an FDB (forwarding database) entry forwards a Pod's IP to the tunnel endpoint of a node (for Overlay networks); and an OVS flow table, implemented by Open vSwitch, forwards a Pod's IP to the corresponding node. A routing-mode sketch of the association step follows.
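For the Linux-routing flavor of this association (the Flannel-hostgw style mentioned earlier), here is a sketch assuming github.com/vishvananda/netlink; the segment and node IP are illustrative.

```go
package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

// routeSegmentViaNode associates a remote node's whole Pod /24 with that
// node by routing the segment through the node's host IP (routing mode).
func routeSegmentViaNode(podCIDR string, nodeIP net.IP) error {
	_, dst, err := net.ParseCIDR(podCIDR)
	if err != nil {
		return err
	}
	return netlink.RouteAdd(&netlink.Route{Dst: dst, Gw: nodeIP})
}

func main() {
	// Pods in 172.16.1.0/24 live on the node whose host IP is 192.168.0.2.
	_ = routeSegmentViaNode("172.16.1.0/24", net.ParseIP("192.168.0.2"))
}
```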
That concludes this look at how to understand CNI and CNI plug-ins in K8s; hopefully the content above has been helpful.