What are the four scenarios of the Kubernetes network?


This article explains the four scenarios of the Kubernetes network. The explanation is straightforward and easy to follow; read on to learn what the four scenarios are.

In real business scenarios, the relationships between components are complex, especially now that microservices have made application deployment more fine-grained and flexible. To support communication among application components, the Kubernetes network is designed to address the following scenarios:

(1) Direct communication between tightly coupled containers

(2) Communication between abstract Pods

(3) Communication between a Pod and a Service

(4) Communication between external clients and components inside the cluster

1. Container-to-container communication

Containers in the same Pod (containers in a Pod never span hosts) share the same network namespace and the same Linux protocol stack. For all network operations they behave as if they were on the same machine; they can even reach each other's ports via the localhost address. The result is simplicity, security, and efficiency, and it also reduces the difficulty of porting existing programs from physical machines or VMs into containers.

The shaded part of the figure below is a Pod instance running on a Node. Container 1 and container 2 share one network namespace, and the result is as if they ran on the same machine: the ports they open must not conflict with each other, they can communicate directly via Linux local IPC, and they only need localhost to reach each other.

Container-to-container communication
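As a minimal illustration of this (a hypothetical manifest, not from the original text), the Pod below runs two containers in one network namespace; the sidecar reaches the web server simply via localhost:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers        # hypothetical example name
spec:
  containers:
    - name: web               # container 1: serves on port 80
      image: nginx
    - name: sidecar           # container 2: reaches container 1 over localhost
      image: busybox
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80/; sleep 10; done"]

If the two containers tried to open the same port, one of them would fail to start, precisely because the port space is shared.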

2. Pod-to-Pod communication

Each Pod has a real, globally unique IP address. Pods on the same Node can communicate directly using each other's IP addresses, without any discovery mechanism such as DNS, Consul, or etcd. Pods may run on the same Node or on different Nodes, so communication falls into two cases: between Pods on the same Node, and between Pods on different Nodes.

1) Communication between Pods on the same Node

As the figure shows, Pod1 and Pod2 are both connected to the same docker0 bridge through veth pairs. Their IP addresses, IP1 and IP2, are allocated automatically from docker0's subnet, so they are on the same subnet as the bridge's own address, IP3. In addition, the default route on the Linux stacks of Pod1 and Pod2 is the docker0 address: all non-local traffic is sent to the docker0 bridge by default and forwarded directly by it, so the two Pods can communicate directly.

Pod relationship within the same Node

2) Communication between Pods on different Nodes

A Pod's address is on the same subnet as docker0, and we know that the docker0 subnet and the host NIC are two entirely different IP subnets; different Nodes can only communicate through the hosts' physical NICs. Therefore, to let Pod containers on different Nodes communicate, we must find a way to address and route through the hosts' IP addresses. At the same time, these dynamically assigned, "private" IP addresses hidden behind docker0 must remain discoverable: Kubernetes records the IP allocation of every running Pod and saves this information in etcd (as the Endpoints of Services). This private IP information matters for Pod-to-Pod communication because our network model requires Pods to talk to each other using these private IPs. As mentioned earlier, the Kubernetes network is flat and addresses Pods directly, so the IP planning for these Pods is also very important and must not conflict. To sum up, supporting communication between Pods on different Nodes requires two conditions:

(1) Pod IP allocation is planned across the entire Kubernetes cluster without conflicts;

(2) there is a way to associate a Pod's IP with the IP of the Node it runs on, so that Pods can reach each other.

For condition 1, when deploying Kubernetes we must plan the docker0 IP subnets so that they do not conflict across Nodes. We can assign them to each Node manually according to a plan, or define an allocation rule that the installer applies automatically; for example, Flannel, an open-source network add-on for Kubernetes, can manage the allocation from an address pool.
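As a hedged sketch of such an allocation rule: a typical Flannel deployment keeps its network configuration in a ConfigMap like the one below (the CIDR and namespace are assumptions for illustration); Flannel then leases a per-Node subnet out of this pool, so the bridge subnets on different Nodes never collide:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg      # name used by the stock Flannel manifests
  namespace: kube-system
data:
  # "Network" is the cluster-wide Pod address pool (value assumed here);
  # each Node leases a smaller subnet from it for its local bridge.
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }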

For condition 2, when a Pod sends data, there must be a mechanism that knows which Node the destination Pod's IP address lives on. That is, the data is first routed to the IP address of the corresponding Node's host NIC, and the host then forwards it to the right docker0 bridge. Once the data reaches the host, the docker0 bridge on that Node knows how to deliver it to the Pod.

The details are shown in the following figure.

Pod communication across Node

In the figure, IP1 corresponds to Pod1 and IP2 to Pod2. When Pod1 accesses Pod2, the data is first sent out through the source Node's eth0 and finds its way to Node2's eth0; that is, first from IP3 to IP4, and then from IP4 on to IP2.

3. Communication between Pods and Services

To support horizontal scaling and high availability, Kubernetes abstracts the concept of a Service. A Service is an abstraction over a set of Pods; access to it is distributed across those Pods according to a load-balancing (LB) policy.

When a Service is created, Kubernetes assigns it a virtual IP address. Clients access the Service through this virtual IP, and the Service forwards the requests to backend Pods. This resembles a reverse proxy, with two differences from an ordinary one: first, its IP address is virtual, and accessing it from outside takes some extra work; second, its deployment, start, and stop are managed automatically by Kubernetes.

In many cases a Service is just a concept; what actually does the work is the kube-proxy service process behind it. A kube-proxy process runs on every Node of the Kubernetes cluster. It can be thought of as a transparent proxy and load balancer for Services, and its core function is to forward requests addressed to a Service on to multiple backend Pod instances. For each TCP-type Kubernetes Service, kube-proxy establishes a SocketServer on the local Node to receive requests and then distributes them evenly to the port of one of the backend Pods, using the Round Robin load-balancing algorithm by default. kube-proxy communicates with backend Pods in exactly the same way as standard Pod-to-Pod communication. Kubernetes also offers session affinity for directed forwarding through the service.spec.sessionAffinity parameter: if its value is set to "ClientIP", all requests from the same client IP are forwarded to the same backend Pod.

In addition, the ClusterIP and NodePort concepts of a Service are realized by kube-proxy through iptables and NAT translation. At runtime, kube-proxy dynamically creates iptables rules for each Service; these rules redirect traffic addressed to the ClusterIP or NodePort to the proxy port of the corresponding Service on the kube-proxy process. Because this iptables mechanism targets the local kube-proxy port, a Pod that needs to access a Service depends on the kube-proxy running on its own Node; this is why the kube-proxy component must run on every Node of the cluster. Within the cluster, a Service's Cluster IP and Port can be accessed from any Node, because the kube-proxy on every Node installs the same forwarding rules for that Service.
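To illustrate the sessionAffinity setting just described, here is a minimal Service manifest (the names, selector, and ports are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: web-service           # hypothetical Service name
spec:
  selector:
    app: web                  # the set of Pods this Service abstracts
  sessionAffinity: ClientIP   # all requests from one client IP go to one Pod
  ports:
    - port: 80                # the virtual (cluster) port
      targetPort: 8080        # the port on the backend Pods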

In summary, thanks to kube-proxy, a client calling a Service does not need to care how many Pods stand behind it; the communication, load balancing, and failure recovery in between are all transparent, as shown in the following figure.

Load balancing forwarding of Service

A request to a Service, whether via Cluster IP + Port or via Node IP + NodePort, is redirected by the node's iptables rules to the proxy port on which kube-proxy listens for that Service. When kube-proxy receives a request for a Service, how does it choose the backend Pod?

First, the load balancing: kube-proxy currently supports only the Round Robin algorithm, which picks members from the backend list one by one and starts over from the beginning when a cycle completes. On top of Round Robin, kube-proxy's load balancer also supports session affinity. If the Service definition specifies session persistence, upon receiving a request kube-proxy looks in local memory for an affinityState object for the requesting IP. If such an object exists and the session has not timed out, kube-proxy redirects the request to the backend Pod that the affinityState points to. If there is no affinityState object for the requesting IP, kube-proxy records the request's IP and the Endpoint it is directed to; subsequent requests from that IP then stick to the affinityState object just created. This implements client-IP-based session affinity.

Next, let's look at kube-proxy's implementation details. The kube-proxy process establishes a "service proxy object" for each Service. A service proxy object is a data structure internal to the kube-proxy program; it includes a SocketServer that listens for requests to this Service on a randomly chosen free local port. kube-proxy also contains an internal "load balancer component", which implements load balancing and session affinity across the connections from the SocketServer to the multiple backend Pods.

kube-proxy implements its main functions by querying and watching changes to Services and Endpoints in the API Server. For each newly created Service it opens a local proxy object (a data structure internal to the kube-proxy program; each Service port gets one proxy object, including the SocketServer that listens for service requests) and receives requests, processing the changed Service list entry by entry. The specific flow is as follows:

(1) If the Service has no cluster IP (ClusterIP) set, do nothing; otherwise, fetch the list of all port definitions of the Service (the spec.ports field).

(2) Read the entries of the service port definition list one by one and, by port name, Service name, and Namespace, check whether a corresponding service proxy object already exists locally. If it does not, create one. If it exists but the Service port has been modified, delete the iptables rules related to the Service, close the service proxy object, and then go through the creation flow again: allocate a service proxy object for the Service port and create the related iptables rules for the Service.

(3) Update the forwarding address table for the Service in the load balancer component, and determine the session affinity policy for a newly created Service.

(4) Clean up deleted Services.

Interaction between kube-proxy and the API Server

4. External to internal access

As the basic resource object, a Pod can be accessed not only by other Pods inside the cluster but also by the outside world. A Service abstracts a set of Pods with the same function, and it is the most appropriate granularity at which to expose them as a unit.

Because a Service's IP is assigned from the Cluster IP Range pool, it can only be accessed from inside the cluster: all Pods can reach it, but external clients cannot. If such a Service is to serve clients outside the cluster as a front end, it must be made visible externally.

For this, Kubernetes supports two Service Type definitions for external exposure: NodePort and LoadBalancer.

(1) NodePort

When a Service is defined with spec.type=NodePort and a value for spec.ports.nodePort, the system opens a real port with that number on the host of every Node in the Kubernetes cluster. Any client that can reach a Node can then access the internal Service through that port number.
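A minimal NodePort sketch (the names and port numbers are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # hypothetical Service name
spec:
  type: NodePort              # open a real port on every Node's host
  selector:
    app: web
  ports:
    - port: 80                # cluster-internal virtual port
      targetPort: 8080        # backend Pod port
      nodePort: 30080         # assumed value; must fall in the NodePort range (30000-32767 by default)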

(2) LoadBalancer

If the cloud provider supports an external load balancer, the Service can be defined with spec.type=LoadBalancer, in which case the IP address of the load balancer must also be specified. Using this type requires the Service's NodePort and ClusterIP as well.

Requests to such a Service are forwarded to the backend Pods through the LoadBalancer; how the load is distributed depends on the cloud provider's implementation of the LoadBalancer mechanism.
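A hedged LoadBalancer sketch (the address and names are assumptions; on many providers the external address is allocated automatically if the load balancer IP is not specified):

apiVersion: v1
kind: Service
metadata:
  name: web-lb                  # hypothetical Service name
spec:
  type: LoadBalancer            # request an external load balancer from the cloud provider
  loadBalancerIP: 203.0.113.10  # requested LB address (assumed, provider-dependent)
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080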

(3) How external access to an internal Service works

Access from outside the cluster ultimately lands on a specific Pod. The NodePort approach relies on kube-proxy: iptables rules are installed for the Service's NodePort, redirecting external access to kube-proxy, which then reaches the backend set of Pods in the same way internal Pods access the Service. In this mode kube-proxy serves as the load balancer that handles external access to the Service and onward to the Pods. The external load balancer mode is more commonly used: a typical implementation places an external load balancer in front of all the cluster's Nodes; when traffic arrives at the LoadBalancer address, the balancer recognizes which Service it belongs to and routes it to an appropriate backend Pod.

There are thus several combinations for accessing internal Pod resources from outside:

No external load balancer; access the internal Pods directly.

No external load balancer; access the Pods by going through the internal load balancer.

An external load balancer that accesses the internal Pods directly.

An external load balancer that accesses the internal Pods by going through the internal load balancer.

The first case is very rare and needed only on special occasions. For example, in a real production project we may need to reach each started Pod in turn to send it a refresh instruction; only then is this approach used. It requires writing an extra program that reads the Service's Endpoints list and communicates with each Pod individually. This kind of communication is usually best avoided, for instance by having each Pod pull commands from a data source rather than having commands pushed to it. Because each Pod starts and stops dynamically, depending on specific Pods effectively bypasses the Kubernetes Service mechanism; it can be made to work, but it is not ideal.
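For reference, the Endpoints list such a program would read is itself an API object, maintained automatically by Kubernetes for any Service with a selector; a rough sketch (the Pod IPs are assumptions for illustration):

apiVersion: v1
kind: Endpoints
metadata:
  name: web-service           # matches the Service name
subsets:
  - addresses:                # the current Pod IPs behind the Service
      - ip: 10.244.1.5        # assumed Pod IP
      - ip: 10.244.2.7        # assumed Pod IP
    ports:
      - port: 8080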

The second case is the NodePort approach: external applications access the Service's NodePort directly and reach the internal Pods through kube-proxy, which acts as the load balancer.

The third case is the LoadBalancer mode. The external LoadBalancer here is a Kubernetes-aware load balancer: it watches Service creation and thereby learns of backend Pod starts and stops, so it is able to communicate with the backend Pods. One point to note is that the load balancer needs a way to talk to the Pods directly; in other words, this external load balancer must use the same communication mechanism as Pod-to-Pod traffic.

The fourth case is rarely used, because it passes through two tiers of load balancing, and after two rounds of random balancing a network call becomes hard to trace. When something goes wrong in a production environment, it is difficult to track how the network data flowed.

(4) External hardware load balancer mode

In many real production environments, Kubernetes clusters are deployed in private clouds, where traditional load balancers are not Service-aware. In fact, only two problems need to be solved to turn such a balancer into a Service-aware one, and this is an ideal mode for external access into a Kubernetes cluster in a real system:

Write a program that watches for Service changes and, through the load balancer's management interface, writes those changes into the balancer as rules.

Give the load balancer a way to reach the Pods directly.

This process is illustrated in the figure below.

Custom external load balancer accesses Service

Here a Service Agent provides the awareness of Service changes. The Agent can watch Service and Endpoints changes directly from etcd or through the API Server's interfaces, and write the changes to the external hardware load balancer.

Meanwhile, each Node runs software speaking a route discovery protocol, responsible for advertising all addresses on that Node to the other hosts on the network, including the hardware load balancer. This lets the hardware load balancer learn which Node each Pod instance's IP address lives on. With these two steps in place, a hardware-based, externally facing, Service-aware load balancer is established.

Thank you for reading. The above covers the four scenarios of the Kubernetes network; after studying this article you should have a deeper understanding of them, and the specific usage still needs to be verified in practice.
