This article introduces how the Kubernetes network works and how to use it, walking through the situations you are most likely to run into in practice.
Many Kubernetes deployment guides include instructions for deploying a CNI network as part of the K8S deployment. But if your K8S cluster is already running and no network has been deployed yet, deploying one is as simple as applying the provided configuration file to the cluster (for most networks and basic use cases). For example, to deploy flannel:
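A sketch of what this looks like (the manifest has moved between the coreos/flannel and flannel-io/flannel repositories over time, so check the flannel project for the current URL):

```bash
# Apply the upstream flannel manifest to the already-running cluster.
# Verify the URL against the current flannel repository before running it.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Wait until the flannel pods are running (the namespace differs by manifest version).
kubectl get pods --all-namespaces -l app=flannel
```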
In this way, from a network point of view, K8S is ready to use. To test that everything works, we create two pods.
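A minimal way to do this (pod names and image are just placeholders for illustration):

```bash
# Create two simple test pods; any image with a shell and ping will do.
kubectl run pod1 --image=busybox --restart=Never -- sleep 3600
kubectl run pod2 --image=busybox --restart=Never -- sleep 3600

# Show the pod IPs that the network assigned.
kubectl get pods -o wide
```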
This creates two pods that use our network driver. Looking at one of the containers, we find that the network uses the IP address range 10.42.0.0/24.
A quick ping test from one pod to the other shows that the network is working properly.
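For example, assuming the two test pods above (the address is looked up dynamically rather than hard-coded):

```bash
# Look up the IP of pod2 and ping it from inside pod1.
POD2_IP=$(kubectl get pod pod2 -o jsonpath='{.status.podIP}')
kubectl exec pod1 -- ping -c 3 "$POD2_IP"
```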
How does the Kubernetes network work compared with the Docker network?
Kubernetes manages its network through CNI on top of Docker and attaches the devices to Docker. Although Docker with Docker Swarm has its own networking capabilities (such as overlay, macvlan, and bridge networks), CNI provides similar types of functionality.
It is also important to note that K8S does not use docker0, Docker's default bridge, but creates its own bridge, named cbr0, which has to be distinguished from docker0.
Why do we need Overlay networks?
Overlay networks such as VXLAN or IPSec encapsulate a packet inside another packet. This makes an endpoint addressable beyond the boundaries of the machine it actually lives on. Alternatives to overlay networks include L2 solutions such as macvtap (macvlan) and even L3 solutions such as ipvtap (ipvlan), but these come with limitations.
Any L2 or L3 solution makes a pod addressable on the network. This means a pod can be reached not only inside the Docker network but also directly from outside it, using a public or private IP address.
However, operating at L2 is troublesome, and your experience will vary depending on your network equipment. Some switches need some time before they register your MAC address and it can actually reach the rest of the network. You may also run into trouble because the neighbor (ARP) tables of the other hosts in the system still rely on outdated caches, and you always have to run with DHCP instead of host-local IPAM to avoid IP conflicts between hosts. The MAC address and neighbor table problems are the reason solutions such as ipvlan exist: instead of registering new MAC addresses, they route traffic over the existing address (although these have their own problems).
Therefore, my recommendation is that for most users an overlay network as the default solution is sufficient. However, once your workloads become more advanced and have more specific requirements, you will want to consider other solutions such as BGP and direct routing.
How does the Kubernetes network work?
The first thing to understand in Kubernetes is that a pod is not actually equivalent to a container; it is a collection of containers. The containers in this collection share one network stack. Kubernetes manages this by setting up the network on the pause container, which you will find in every pod you create. All other containers attach to the network of the pause container, which itself does nothing but provide the network. This is also why a container can talk to a service in a different container of the same pod via localhost.
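You can see these pause containers directly on a node. This is only an illustrative check and assumes Docker as the container runtime; with a CRI runtime such as containerd, the sandboxes are visible via crictl pods instead:

```bash
# Every pod shows up with one extra "pause" container on the node,
# named k8s_POD_<pod>_<namespace>_..., which only holds the network namespace.
docker ps | grep -i pause
```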
Apart from this local communication, communication between pods looks much the same as container-to-container traffic in a Docker network.
Kubernetes traffic routing
I'll use two scenarios as examples to explain in detail how traffic is routed between pods.
1. Route traffic on the same host:
In both of the following cases the traffic never leaves the host: either the called service runs on the same node, or it runs in the same container collection, i.e. the same pod.
If container 1 in the first pod calls localhost:80 and the service is running in container 2, the traffic passes through the shared network device and the packet is forwarded to its destination. In this case the route is very short.
The route is a bit longer when we communicate with another pod. The traffic first passes to cbr0, which notices that we are communicating on the same subnet and therefore forwards the traffic directly to the destination pod.
2. Route traffic across hosts:
Things get more complex when we leave the node. cbr0 passes the traffic to the next node, whose configuration is managed by the CNI. These are essentially just subnet routes with the target host as the gateway. The target host can then proceed with its own cbr0 and forward the traffic to the destination container.
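On a node, these routes show up as ordinary entries in the routing table. The addresses below are purely illustrative of the idea (with a VXLAN backend the route points at the flannel device rather than directly at the peer node):

```bash
# The local pod subnet goes to the local bridge; the other node's pod subnet
# is routed with that node as the gateway (illustrative output).
ip route
# 10.42.0.0/24 dev cbr0 proto kernel scope link src 10.42.0.1
# 10.42.1.0/24 via 192.168.1.12 dev eth0
```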
What on earth is CNI?
CNI stands for Container Network Interface. It is basically a well-defined external interface that Kubernetes calls to provide networking functionality.
You can find the maintained reference plugins, which cover most of the important cases, in the official containernetworking repo:
https://github.com/containernetworking/plugins
The CNI specification (version 0.3.1 at the time) is not very complicated. It defines three required operations, ADD, DEL, and VERSION, which manage the network. The spec describes in more detail what each operation receives and should return:
https://github.com/containernetworking/cni/blob/master/SPEC.md
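For reference, the network configuration a runtime hands to a CNI plugin along with the ADD or DEL operation is just a small JSON document; a minimal example using the reference bridge and host-local plugins looks roughly like this (names and subnet are illustrative):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cbr0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.42.0.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```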
Differences between CNIs
Here are some of the most popular CNIs:
Flannel
Flannel is a simple network and, as an overlay network, is the easiest option to set up. For most users it is very easy to deploy (it is also the default network under Canal), and it even has some native, non-overlay capabilities such as the host gateway (host-gw) backend. However, Flannel has some limitations, including the lack of support for network security policies and the lack of multi-network capability.
Calico
Calico takes a different approach from Flannel. Technically it is not an overlay network but a system that configures routing between all of the systems involved. To do this, Calico uses the Border Gateway Protocol (BGP), which is also used on the Internet, in a process called peering: each peer exchanges traffic and participates in the BGP network. The BGP protocol itself propagates routes under its ASN, with the difference that these are private and do not need to be registered with RIPE.
However, in some scenarios Calico can be used together with an overlay, namely IP-in-IP, which is used when nodes sit on different networks, to enable the exchange of traffic between those hosts.
Canal
Canal is based on Flannel but adds some of Calico's components, such as felix (the host agent), which makes it possible to use network security policies. These are normally missing in Flannel. So Canal basically extends Flannel by adding security policies.
Multus
Multus is a CNI, but it is not really a network plugin in the usual sense: it orchestrates multiple interfaces without configuring an actual network itself, so a pod cannot communicate using Multus alone. In effect, Multus is an enabler for multi-device and multi-subnet networks. It works by having kubelet call Multus, which in turn calls the real CNIs and passes their results back to kubelet.
Kube-Router
Also worth mentioning is kube-router, which, like Calico, works with BGP and routing rather than an overlay network. And like Calico, it can fall back to IP-in-IP when necessary. It can also use IPVS for load balancing.
Set up a multi-network K8S cluster
If you need to use multiple networks, you may need Multus.
Set up Multus
The first thing we need to do is set up Multus. We use essentially the configuration from the examples in the Multus repository, but with a few important adjustments. See the example below.
The first adjustment is to the ConfigMap. Because we plan to create the default network with Flannel, we define its configuration in the delegates array of the Multus configuration. The key setting here is "masterplugin": true, which marks the delegate that provides the bridge for the Flannel network itself; you will see why we need this in the later steps. Apart from that, you need to add a mount definition for the ConfigMap; the rest does not need to be adjusted, because, for some reason, the example does not finish this part.
Another important thing about this ConfigMap is that everything defined in it is the default network, which is attached to every container automatically without any further declaration. Also, if you want to edit this file, note that you either need to kill and re-create the daemonset's containers or restart the node for the changes to take effect.
Example yaml file:
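The original file is not reproduced here, so the following is only a sketch of what such a ConfigMap can look like with the older, delegates-style Multus configuration described above; names and paths are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: multus-cni-config
  namespace: kube-system
data:
  # Written to /etc/cni/net.d/ by the Multus daemonset.
  cni-conf.json: |
    {
      "name": "multus-cni-network",
      "type": "multus",
      "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
      "delegates": [
        {
          "type": "flannel",
          "masterplugin": true,
          "delegate": {
            "isDefaultGateway": true
          }
        }
      ]
    }
```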
Set up the primary Flannel overlay network
For the primary Flannel network the setup is very simple. We can take the example from the Multus repository and deploy it. The adjustments made here are the CNI installation, the tolerations, and some changes to Flannel's CNI settings: adding "forceAddress": true and removing "hairpinMode": true.
This was tested on a cluster set up with RKE, but it should work on other clusters as long as the CNI plugins are correctly installed on the hosts (in this case under /opt/cni/bin).
Multus itself was not changed much; only the initcontainer configuration was commented out, which you can simply delete. This is done because Multus builds its delegates and acts as the primary "CNI".
This is the modified Flannel daemonset:
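The full daemonset is too long to reproduce, but the part that carries the adjustments described above is its ConfigMap. A sketch with "forceAddress" added and "hairpinMode" removed (subnet and names follow the common kube-flannel example and are illustrative):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "forceAddress": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.42.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```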
After deploying these examples, most of the work is done, and it is time to assign an IP address to a pod. Let's test it:
As you can see, we successfully deployed a pod and it was assigned IP 10.42.2.43 on eth0, which is the default interface. All additional interfaces will show up as netX, i.e. net1.
Set up a secondary network
The secondary network needs a few adjustments as well; these assume you want to deploy VXLAN. To actually serve a secondary overlay, we need to change the VXLAN identifier (VNI), which is 1 by default and is already used by our first overlay network. We can change this by configuring the network on the etcd server. We use the cluster's own etcd here (assuming the job runs on a host with access to the etcd client) and load the client keys from the local host, where in our case they are stored in the /etc/kubernetes/ssl folder.
An example of this configuration (the original packaged it as a small job run against the cluster etcd):
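The original YAML is not reproduced here; the core of the job is the following etcdctl call. Endpoint, prefix, certificate file names, and subnet are illustrative (RKE-style file names under /etc/kubernetes/ssl are assumed), and depending on your flannel version it reads the etcd v2 or v3 keyspace, so use the matching ETCDCTL_API:

```bash
# Write the configuration of the second flannel network into the cluster etcd,
# under its own prefix and with a different VXLAN ID (VNI 2).
ETCDCTL_API=2 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --ca-file=/etc/kubernetes/ssl/kube-ca.pem \
  --cert-file=/etc/kubernetes/ssl/kube-node.pem \
  --key-file=/etc/kubernetes/ssl/kube-node-key.pem \
  set /flannel2/network/config \
  '{"Network":"10.5.0.0/16","Backend":{"Type":"vxlan","VNI":2}}'
```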
Next, we can actually deploy the secondary network. The setup is almost the same as for the primary network, but with some key differences. The most obvious is that we change the subnet, but a few other things need to change as well.
First, we use a different dataDir, /var/lib/cni/flannel2, and a different subnetFile, /run/flannel/flannel2.env, because the defaults are already occupied by our primary network. Next, we need to adjust the bridge, because the primary Flannel overlay network already uses kbr0.
The remaining changes consist of pointing Flannel at the etcd server we configured earlier. For the primary network this is done through a direct connection to the K8S API via the --kube-subnet-mgr flag, but we cannot do that here because we also need to change the prefix it reads from. So we configure the connection to the cluster etcd and the etcd prefix, specify the subnet file again, and finally add the network definition. The rest is identical to our primary network configuration.
For the above steps, see the sample configuration file:
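Only the parts that differ from the primary daemonset are sketched below: the flanneld arguments that point at the cluster etcd, the custom prefix from the step above, and the second subnet file. Image version, endpoint, and certificate paths are illustrative:

```yaml
# Abridged: flanneld container of the secondary daemonset.
      containers:
      - name: kube-flannel2
        image: quay.io/coreos/flannel:v0.11.0   # version illustrative
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --etcd-endpoints=https://127.0.0.1:2379
        - --etcd-prefix=/flannel2/network
        - --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem
        - --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem
        - --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem
        - --subnet-file=/run/flannel/flannel2.env
```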
Once this is done, we are ready for the secondary network.
Assign the additional network
Now that the secondary network is ready, we need to assign it. To do this, we define a NetworkAttachmentDefinition, which we can then use to attach the network to a container. Basically, this is the dynamic replacement for the ConfigMap we set up earlier when initializing Multus: the required networks can be attached on demand. In this definition we need to specify the network type, in our case Flannel, and the necessary configuration. This includes the subnetFile, dataDir, and bridge name mentioned earlier.
The last thing we need to decide is the name of the network; we call it flannel2.
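A sketch of such a definition, using the values discussed above (the bridge name kbr1 is an illustrative choice; it just has to differ from the one used by the primary network):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: flannel2
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "flannel",
      "name": "flannel2",
      "subnetFile": "/run/flannel/flannel2.env",
      "dataDir": "/var/lib/cni/flannel2",
      "delegate": {
        "bridge": "kbr1"
      }
    }'
```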
Now we can finally create the first pod that uses our secondary network; the additional networks are attached to it as additional network interfaces.
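The additional network is requested through the Multus annotation; a minimal sketch (name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-two-networks
  annotations:
    # Attach the NetworkAttachmentDefinition named flannel2 as an extra interface.
    k8s.v1.cni.cncf.io/networks: flannel2
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
```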
Success: the secondary network assigned 10.5.22.4 as its IP address.
Troubleshooting
If the example does not work properly, you need to look at the kubelet log.
A common problem is a missing CNI plugin. In my first test I was missing the CNI bridge plugin, because RKE does not deploy it, but this is easy to resolve.
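For example, assuming a systemd-based node and the standard CNI binary directory:

```bash
# Follow the kubelet log and filter for CNI-related errors.
journalctl -u kubelet -f | grep -i cni

# A missing reference plugin such as "bridge" can simply be copied into the
# CNI binary directory from a containernetworking/plugins release.
ls /opt/cni/bin
```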
External connections and load balancing
Now that the network is up and running, the next step is to make our applications reachable from outside and to configure them to be highly available and scalable. High availability and scalability are not achieved through load balancing alone, but a load balancer is one of the key components we need.
Kubernetes has four concepts for making an application externally available.
Use a load balancer
Ingress
An ingress is basically a load balancer with Layer 7 capabilities, specifically HTTP(S). The most commonly used ingress controller is the NGINX ingress controller, but it really depends on your needs and your use case; you could, for example, also choose Traefik or HAProxy.
Configuring an ingress is very simple. In the following example you can see an ingress linked to a service: the basic configuration points at the service, the TLS section links the SSL certificate (which must be installed beforehand), and a few of the NGINX ingress detail settings are tuned via annotations.
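The original example is not reproduced here, so the following is a sketch; host name, secret, service name, and annotation values are illustrative:

```yaml
apiVersion: networking.k8s.io/v1   # older clusters use extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls          # the certificate installed beforehand
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
```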
Layer 4 load balancer
In Kubernetes, a Layer 4 load balancer is defined with type: LoadBalancer. It relies on the load-balancing solution of the provider: on-premise you will most likely use HAProxy or a routing solution, while cloud providers use their own solutions or dedicated hardware, or likewise fall back on HAProxy or a routing solution.
The biggest difference is that a Layer 4 load balancer does not understand higher-level application protocols (Layer 7) and only forwards traffic. Most load balancers at this level also support SSL termination, but this usually has to be configured through annotations and is not standardized.
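A minimal sketch of such a service (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer   # the provider's L4 solution allocates the external address
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 443
    targetPort: 8443
```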
Use {host,node} ports
A {host,node}Port is basically the equivalent of docker -p port:port, especially the hostPort. Unlike the hostPort, the nodePort is available on all nodes, not only on the nodes running the pod. For a nodePort, Kubernetes first creates a clusterIP and then load balances traffic over the port; the nodePort itself is just an iptables rule that forwards traffic on that port to the clusterIP.
A nodePort is rarely used except for quick tests; you only need it in production if you want every node to expose the port, i.e. for monitoring. Most of the time you will want to use a Layer 4 load balancer instead. The hostPort is only used for testing, or in the rare case where a pod is pinned to a specific node and published under a specific IP address pointing to that node.
For example, hostPort is defined in the container specification, as follows:
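This is only an illustrative sketch; image and port values are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080   # exposed only on the node this pod is scheduled to
```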
What is ClusterIP?
The ClusterIP is the internally reachable IP for the Kubernetes cluster and all the services within it. The IP itself load balances traffic to all pods that match the service's selector. In many cases the clusterIP is generated automatically, for example when you specify a type: LoadBalancer service or set up a nodePort; the reason is that all the load balancing happens through the clusterIP.
The clusterIP as a concept was created to solve the problem of multiple addressable hosts and keeping them efficiently up to date: having a single immutable IP is much easier than constantly re-fetching all of a service's endpoints via service discovery. Although service discovery is sometimes more appropriate, the clusterIP is the way to go if you want explicit control, for example in some microservice environments.
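For completeness, a plain ClusterIP service (the default type) looks like this; names and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-internal
spec:
  # type: ClusterIP is the default and could be omitted.
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```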
Common faults
If you use a public cloud environment and set up the hosts manually, your cluster may be missing firewall rules. For example, on AWS you will need to adjust the security groups to allow inter-cluster communication as well as ingress and egress. Otherwise the cluster simply won't run. Make sure the necessary ports between master and worker nodes are always open. The same applies to ports opened directly on the hosts, i.e. hostPort or nodePort.
Network security
Now that we have set up all of our Kubernetes networking, we also need to make sure it has some security. The minimum principle for security is to give an application only the access it needs to run. This ensures, to some extent, that even in the event of a breach it will be hard for an attacker to dig deeper into your network. While it does not fully guarantee your security, it undoubtedly makes an attack harder and more time-consuming, which matters because it gives you more time to react and to prevent further damage. A typical example is the combination of different exploits/vulnerabilities of different applications, which an attacker can only chain together if they can reach each attack surface in the first place (e.g. network, container, host).
The options here are either to use network policies or to look to third-party security solutions for container network security. Network policies give us a solid foundation to ensure that traffic only flows where it should, but they only work with a few CNIs, for example Calico and kube-router. Flannel does not support them, but fortunately you can move to Canal, which makes Calico's network policy feature usable together with Flannel. For most other CNIs there is no support, and none planned at this time.
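As a sketch of what such a policy looks like with a CNI that enforces it (e.g. Calico or Canal), the following allows the backend pods to receive traffic only from pods labeled app: frontend on port 8080; labels and ports are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```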
But that is not the only problem. The problem is that network policy rules are only firewall rules for specific ports, and very simple ones at that. This means you cannot apply any advanced settings: for example, you cannot block a container on demand if it looks suspicious. Furthermore, network policy rules do not understand the traffic, so you do not know where the traffic is actually going, and you are limited to creating rules at Layers 3 and 4. Finally, they cannot detect network-based threats or attacks such as DDoS, DNS or SQL injection and other damaging attacks that can occur even over trusted IP addresses and ports.
Therefore, we need a dedicated container network security solution that provides the required security for critical applications, such as financial or compliance-driven workloads. Personally, I like NeuVector: it is the container firewall solution I deployed at Arvato/Bertelsmann, and it provides the Layer 7 visibility and protection we need.
It should be noted that any network security solution must be cloud-native and able to scale and adjust automatically. When you deploy a new application or scale pods, you should not have to inspect iptables rules or update anything by hand. You might manage a simple application stack on a few nodes manually, but for any enterprise, deploying security must not slow down the CI/CD pipeline.
Beyond security and visibility, I have found that having connection- and packet-level container network tools helps when debugging applications during testing and staging. With a Kubernetes network you never really know for certain where all the packets are going and which pods they are routed to unless you can see the traffic.
Some suggestions for choosing Network CNI
Now that I have introduced Kubernetes networking and CNIs, one big question remains: which CNI solution should you choose? I will try to offer some advice on how to make that decision.
First, define the problem
The first thing in any project is to define, in as much detail as possible, the problem you need to solve. You will want to know which applications you are going to deploy and what kind of load they will generate. Some questions you might ask yourself:
My application:
Is it heavy on the network?
Is it sensitive to latency?
Is it a monolith?
Or a microservice architecture?
Does it need to run on multiple networks?
Can it tolerate downtime, even minimal downtime?
This is a very important question, because you need to decide up front: if you choose one solution now and switch later, you will have to re-set up the network and redeploy all of your containers. Unless you already have something like Multus in place and can run multiple networks, this will mean downtime for your services. In most cases it is less serious if you have a planned maintenance window, but as applications iterate, zero downtime becomes more and more important.
My application runs on multiple networks
This scenario is very common for on-premise installations. In fact, if you only want to separate traffic between a private and a public network, it already requires you to set up multiple networks or to have smart routing.
Do I need specific features from the CNI?
Another thing that influences your decision is whether you need specific features that are available in some CNIs and not in others. For example, you may want to use Weave, or you may want more mature load balancing via IPVS.
What network performance is required?
If your application is latency-sensitive or the network is heavily used, you need to avoid any overlay network. Overlays cost performance, and even more so at scale. In that case the only way to improve network performance is to avoid the overlay and use networking utilities such as routing instead. When you are looking for network performance, you have several options, for example:
Ipvlan: it has good performance, but note that you cannot use macv{tap,lan} on the same host at the same time.
Calico: this CNI is not the most user-friendly, but it gives you better performance than VXLAN and can scale without worry.
Kube-Router: it offers better performance by using BGP and routing, as well as support for LVS/IPVS, similar to Calico, but Calico is the more mature of the two.
Cloud provider solutions: some cloud providers offer their own networking solutions, which have to be evaluated case by case and cannot be generalized here. It is worth mentioning Rancher's open source project Submariner: it supports cross-cluster network connections between multiple Kubernetes clusters, creating the necessary tunnels and routes so that microservices deployed in different Kubernetes clusters can communicate with each other.
This is the end of "How to use the Kubernetes network". Thank you for reading.