2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 06/01 Report--
This article walks through how a cluster IP Service is implemented with iptables in Kubernetes. The explanation is concise and example-driven; I hope you take something away from the detailed walkthrough below.
Here we focus on load balancing inside the cluster. Services in a Kubernetes cluster need to reach each other, so we usually create a Service for each application, typically of type ClusterIP. A cluster IP is associated with multiple endpoints, that is, the actual pods backing the Service; accessing the cluster IP means reaching one of those endpoints. Traffic between the cluster IP and its endpoints is load-balanced by either iptables or IPVS, both introduced in previous articles; here we walk through the iptables implementation with a practical example. Note also that a cluster IP is a virtual IP: it is not bound to any device, so commands such as ping or traceroute aimed at it will get no reply.
Looking at the Service for the deployed nginx application, we can see:
The Service is of type ClusterIP
The cluster IP is 10.254.226.173
The cluster IP is associated with 2 endpoints: 10.1.27.4:80 and 10.1.79.3:80
kubectl describe service service-nginx-app -n default
View the NAT table of the host network namespace's iptables:
iptables -nvL -t nat
In the PREROUTING chain, all traffic jumps to the KUBE-SERVICES target. Note that the PREROUTING chain is the first chain a packet hits after it arrives. Imagine running the command curl http://10.254.226.173 in a pod. According to the container routing table described in the previous article, the packet flows like this:
In the pod, the routing table shows that the cluster IP (10.254.226.173) matches no connected subnet, so it takes the default route and is sent toward the default gateway.
In the pod, the default gateway's IP address is that of docker0 in the host network namespace, and the default gateway is a directly connected route.
In the pod, the routing table says to send the data out the eth0 device. As covered in the previous article, eth0 is essentially one end of a veth pair whose other end is attached to the docker0 bridge in the host network namespace.
As the veth pair introduced in the previous article works, data sent from the end in the pod network namespace emerges from the other end, attached to the docker0 bridge.
After the docker0 bridge receives the data, the packet naturally arrives at the PREROUTING chain of the host network namespace.
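The in-pod routing decision above boils down to a longest-prefix match. Here is a minimal Python sketch of that lookup, assuming a hypothetical pod routing table with the docker0 subnet 10.1.27.0/24 and gateway 10.1.27.1 (illustrative values, not taken from this cluster):

```python
import ipaddress

# Hypothetical pod routing table: (destination network, gateway).
# gateway=None marks a directly connected route; values are illustrative.
ROUTES = [
    (ipaddress.ip_network("10.1.27.0/24"), None),      # docker0 subnet, direct
    (ipaddress.ip_network("0.0.0.0/0"), "10.1.27.1"),  # default via docker0
]

def lookup(dst):
    """Longest-prefix match, as the kernel does for each outgoing packet."""
    d = ipaddress.ip_address(dst)
    matches = [(net, gw) for net, gw in ROUTES if d in net]
    return max(matches, key=lambda m: m[0].prefixlen)

# The cluster IP matches no connected subnet, so it takes the default route:
print(lookup("10.254.226.173"))  # (IPv4Network('0.0.0.0/0'), '10.1.27.1')
# A pod on the same bridge is directly connected:
print(lookup("10.1.27.9"))       # (IPv4Network('10.1.27.0/24'), None)
```

This is why traffic to the cluster IP always leaves the pod via eth0 toward docker0: no more specific route can match a virtual IP.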
View the KUBE-SERVICES target:
iptables -nvL -t nat | grep KUBE-SVC
In the KUBE-SERVICES target, we can see that traffic whose destination address is the cluster IP 10.254.226.173 matches the target KUBE-SVC-ETZVW7ENORYJBYB4.
View the KUBE-SVC-ETZVW7ENORYJBYB4 target:
iptables -nvL -t nat
In the KUBE-SVC-ETZVW7ENORYJBYB4 target, we can see:
There are two targets, KUBE-SEP-L6A5J2X5SCQGCA7Z and KUBE-SEP-U7JQ3R4SRIDMQ4UH
The KUBE-SEP-L6A5J2X5SCQGCA7Z rule carries "statistic mode random probability 0.5"
"statistic mode random probability 0.5" uses the iptables statistic kernel module in random mode, with a ratio of 0.5, i.e. 50%.
So the rule above means that a random half of the traffic enters the KUBE-SEP-L6A5J2X5SCQGCA7Z target; the remaining traffic then falls through to the other target, which therefore also receives half.
Thus even load balancing is achieved through the random module.
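The split generalizes to n endpoints: rule i (0-based) fires with probability 1/(n-i), so every endpoint ends up with 1/n of the traffic overall. A small Python simulation of this rule-walking, as an illustration of the statistic module's behavior (not actual kube-proxy code):

```python
import random

def pick_endpoint(endpoints, rng):
    """Walk the KUBE-SEP rules in order; rule i matches with probability
    1/(n-i). With 2 endpoints the first rule has the 0.5 seen above, and
    the last rule always matches, giving a uniform 1/n choice overall."""
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if rng.random() < 1.0 / (n - i):
            return ep
    return endpoints[-1]  # unreachable: the last probability is 1.0

rng = random.Random(0)
counts = {"10.1.27.4:80": 0, "10.1.79.3:80": 0}
for _ in range(100_000):
    counts[pick_endpoint(list(counts), rng)] += 1
print(counts)  # roughly 50,000 each
```

With three endpoints the probabilities would be 0.333, then 0.5 of the remainder, then 1.0, which is exactly the sequence kube-proxy generates for larger endpoint sets.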
View the KUBE-SEP-L6A5J2X5SCQGCA7Z and KUBE-SEP-U7JQ3R4SRIDMQ4UH targets:
iptables -nvL -t nat
In these two targets, we can see:
A MASQUERADE operation is present in each, but it applies only to outbound egress traffic (matched by source IP), not to our inbound ingress traffic.
A DNAT operation is performed: the original destination, the cluster IP, is DNAT-ed to the pod IPs 10.1.27.4 and 10.1.79.3, and the original port is translated to port 80.
After this series of iptables targets, our original request to 10.254.226.173:80 becomes a request to 10.1.27.4:80 or 10.1.79.3:80, each with 50% probability.
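The rewrite can be pictured as a pure function over a packet's destination. A hedged sketch, where the packet dict and the `dnat` helper are hypothetical illustrations rather than kube-proxy internals:

```python
import random

CLUSTER_IP, SERVICE_PORT = "10.254.226.173", 80
ENDPOINTS = [("10.1.27.4", 80), ("10.1.79.3", 80)]

def dnat(packet, rng=random):
    """Rewrite the destination the way the KUBE-SEP-* targets do.
    Only the destination changes; the source of in-cluster pod traffic
    is left alone (the MASQUERADE rule does not match it)."""
    if (packet["dst"], packet["dport"]) == (CLUSTER_IP, SERVICE_PORT):
        ip, port = rng.choice(ENDPOINTS)
        packet = dict(packet, dst=ip, dport=port)
    return packet

pkt = {"src": "10.1.27.5", "sport": 40000, "dst": CLUSTER_IP, "dport": 80}
out = dnat(pkt)
print(out["dst"], out["dport"])  # one of the two pod IPs, port 80
```

Because the source address survives untouched, the backing pod replies directly to the caller, and conntrack reverses the DNAT on the way back.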
After the PREROUTING chain, the kernel finds that the DNAT-ed destination 10.1.27.4 or 10.1.79.3 is not a local IP (certainly not: these are pod IPs, which do not live in the host's network namespace). So the packet goes to the FORWARD chain, and the next-hop address is decided by the routing table of the host network namespace.
To sum up the example above, a ClusterIP-type Service in an iptables-mode Kubernetes cluster works as follows:
Traffic goes from the pod network namespace to docker0 in the host network namespace.
It passes through a series of targets in the PREROUTING chain of the host network namespace.
Among these targets, an endpoint target is matched using the random mode of the iptables statistic kernel module, with the ratios arranged so that traffic is spread evenly, achieving uniform load balancing.
DNAT is performed in the endpoint target: the destination address is translated from the cluster IP to the IP of an actual pod.
Cluster ip is a virtual ip and is not bound to any device.
Load balancing is implemented in the kernel and is uniform; there is no way to plug in a custom load-balancing algorithm.
Host is required to enable route forwarding (net.ipv4.ip_forward = 1).
After the packet is DNAT-ed in the host network namespace, the next-hop address is determined by the host network namespace's routing table.
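The steps above can be chained into one end-to-end sketch: DNAT in PREROUTING, then a next-hop lookup in the host routing table. The host routes below are hypothetical (a peer-node gateway 192.168.0.12 is assumed for pods living on other hosts):

```python
import random
import ipaddress

CLUSTER_IP = "10.254.226.173"
ENDPOINTS = ["10.1.27.4", "10.1.79.3"]

def prerouting_dnat(dst, rng=random):
    """KUBE-SERVICES -> KUBE-SVC-* -> KUBE-SEP-*: match the cluster IP and
    DNAT it to one endpoint, chosen uniformly by the statistic module."""
    return rng.choice(ENDPOINTS) if dst == CLUSTER_IP else dst

def next_hop(dst, host_routes):
    """After DNAT the destination is a pod IP, not a local address, so the
    packet is forwarded; pick the next hop by longest-prefix match."""
    d = ipaddress.ip_address(dst)
    net, hop = max(((n, h) for n, h in host_routes if d in n),
                   key=lambda m: m[0].prefixlen)
    return hop

# Hypothetical host routing table: local bridge plus a route to a peer node.
host_routes = [
    (ipaddress.ip_network("10.1.27.0/24"), "docker0"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.0.12"),
]

dst = prerouting_dnat(CLUSTER_IP)
print(dst, "->", next_hop(dst, host_routes))
```

Note that the forwarding step only happens because net.ipv4.ip_forward = 1 on the host, as the summary points out.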
That is the principle behind the iptables implementation of a cluster IP Service in Kubernetes. I hope you picked up something useful from it.