How does kube-proxy work in kubernetes?

2025-02-28 Update From: SLTechnology News&Howtos



This article explains how kube-proxy works in Kubernetes. The material is straightforward and practical, so let's walk through it step by step.

## Necessary background on kube-proxy & Service

When it comes to kube-proxy, we have to mention service in k8s. Here's a brief description of both of them:

Kube-proxy is effectively the access entry point for Services: it handles both access from Pods to Services inside the cluster and access to Services from outside the cluster.

Kube-proxy manages the Endpoints of each Service. A Service exposes a virtual IP, known as the Cluster IP. By accessing Cluster IP:Port inside the cluster, you reach the Pods behind the corresponding Service.

A Service is an abstraction over a set of Pods selected by a Selector, in effect a micro-service. It provides load balancing and reverse proxying for those Pods, and kube-proxy's main role is to implement the Service.

Another important role of a Service: the Pods backing it come and go, and their IPs change as they do. The Service provides a fixed IP for clients, hiding changes in the backend Endpoints.
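As a minimal illustration of this fixed-IP abstraction, a Selector-based Service might look like the following (the name, label, and ports here are hypothetical, not taken from the examples below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web            # hypothetical Service name
spec:
  selector:
    app: my-web           # Pods carrying this label become the Endpoints
  ports:
    - port: 80            # stable ClusterIP port clients use
      targetPort: 8080    # port the backend Pods actually listen on
```

Clients keep using ClusterIP:80 even as the Pods behind the selector are replaced.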

## Service discovery

K8s provides two ways to do service discovery:

Environment variables: when kubelet starts a Pod, it injects environment variables for all existing Services in the cluster into that Pod. Note that a Service's variables are only injected into Pods created after the Service itself; this ordering requirement makes the approach nearly unusable for service discovery.

For example, for a Service named redis-master whose ClusterIP:Port is 10.0.0.11:6379, the corresponding environment variables are:

```
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
```
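The injected names follow a fixed convention, which the following Python sketch reproduces (a hand-rolled illustration of the naming scheme, not kubelet's actual code; the function name is mine):

```python
def service_env_vars(service_name, cluster_ip, port, proto="tcp"):
    """Derive the Docker-link-style variables kubelet injects for a Service."""
    prefix = service_name.upper().replace("-", "_")
    url = f"{proto}://{cluster_ip}:{port}"
    port_prefix = f"{prefix}_PORT_{port}_{proto.upper()}"
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
        f"{prefix}_PORT": url,
        port_prefix: url,
        f"{port_prefix}_PROTO": proto,
        f"{port_prefix}_PORT": str(port),
        f"{port_prefix}_ADDR": cluster_ip,
    }

env = service_env_vars("redis-master", "10.0.0.11", 6379)
```

Applied to the redis-master example above, this yields exactly the variable set shown.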

DNS: this is the way K8s officially and strongly recommends. You can easily deploy KubeDNS via a cluster add-on to discover Services in the cluster. For more on KubeDNS, see my blog post Kubernetes DNS Service Technology Research; I won't repeat it here.
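With KubeDNS in place, each Service gets a predictable in-cluster DNS name of the form `<service>.<namespace>.svc.<cluster-domain>`. A one-line sketch of that scheme (the helper name is mine):

```python
def service_dns_name(service, namespace="default", cluster_domain="cluster.local"):
    # In-cluster DNS name KubeDNS serves for a Service's ClusterIP.
    return f"{service}.{namespace}.svc.{cluster_domain}"

name = service_dns_name("redis-master")  # redis-master.default.svc.cluster.local
```

Unlike environment variables, this works regardless of whether the Pod or the Service was created first.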

## Publishing (exposing) services

In K8s, a Service's ServiceType determines how the Service is published.

ClusterIP: the default ServiceType in K8s. The Service is published inside the cluster through its ClusterIP.

NodePort: commonly used to expose a Service outside the cluster. Accessing any NodeIP:NodePort in the cluster reaches the Endpoints behind the corresponding Service.

LoadBalancer: also used to expose services outside the cluster, but it requires support from a Cloud Provider, such as AWS.

ExternalName: used to publish an external service inside the cluster. It requires KubeDNS (version >= 1.7): KubeDNS maps the Service to the ExternalName and returns a CNAME record.
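For instance, an ExternalName Service might look like the following (the Service name and external host are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db             # in-cluster name Pods resolve
spec:
  type: ExternalName
  externalName: db.example.com  # KubeDNS answers with a CNAME to this host
```

No ClusterIP is allocated; resolution happens purely in DNS.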

## Internal principle of kube-proxy

Kube-proxy currently implements two proxyModes: userspace and iptables. userspace mode was the default in v1.0 and earlier; iptables mode was added in v1.1 and officially replaced userspace as the default in v1.2.

### userspace mode

In userspace mode, kube-proxy proxies Service traffic from user space. The principle works as follows.

The biggest problem with this mode is that a request to a Service first passes from user space into the kernel's iptables, then returns to user space, where kube-proxy selects a backend Endpoint and proxies the request. The performance cost of traffic crossing between user space and the kernel is unacceptable. This was the biggest criticism of kube-proxy in K8s v1.0 and earlier, and it is why the community began work on iptables mode.
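To make the user-space step concrete, here is a minimal sketch of the Endpoint selection kube-proxy performs there (round-robin, the userspace proxier's default policy; the class name and endpoint addresses are my own illustration):

```python
import itertools

class RoundRobinEndpoints:
    """Pick backend Endpoints in turn, as the userspace proxier does by default."""

    def __init__(self, endpoints):
        # itertools.cycle repeats the endpoint list indefinitely.
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        return next(self._cycle)

lb = RoundRobinEndpoints(["10.1.0.1:8080", "10.1.0.2:8080"])
picks = [lb.pick() for _ in range(4)]
```

Each new connection gets the next Endpoint in rotation; the actual byte copying between the client and the chosen Pod is what incurs the user-space/kernel crossings described above.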

#### Example

```
$ kubectl get service
NAME          LABELS                                    SELECTOR           IP(S)            PORT(S)
kubernetes    component=apiserver,provider=kubernetes                      10.254.0.1       443/TCP
ssh-service1  name=ssh,role=service                     ssh-service=true   10.254.132.107   2222/TCP

$ kubectl describe service ssh-service1
Name:             ssh-service1
Namespace:        default
Labels:           name=ssh,role=service
Selector:         ssh-service=true
Type:             LoadBalancer
IP:               10.254.132.107
Port:             2222/TCP
NodePort:         30239/TCP
Endpoints:
Session Affinity: None
No events.
```

NodePort works in much the same way as ClusterIP. A request sent to NodeIP:NodePort is redirected by iptables to a port kube-proxy listens on (a random port on the Node), and kube-proxy then forwards the request to one of the backend Pod:TargetPort.

Here, if the Node's IP is 10.0.0.5, the corresponding iptables rules are as follows:

```
$ sudo iptables -S -t nat
...
-A KUBE-NODEPORT-CONTAINER -p tcp -m comment --comment "default/ssh-service1:" -m tcp --dport 30239 -j REDIRECT --to-ports 36463
-A KUBE-NODEPORT-HOST -p tcp -m comment --comment "default/ssh-service1:" -m tcp --dport 30239 -j DNAT --to-destination 10.0.0.5:36463
-A KUBE-PORTALS-CONTAINER -d 10.254.132.107/32 -p tcp -m comment --comment "default/ssh-service1:" -m tcp --dport 2222 -j REDIRECT --to-ports 36463
-A KUBE-PORTALS-HOST -d 10.254.132.107/32 -p tcp -m comment --comment "default/ssh-service1:" -m tcp --dport 2222 -j DNAT --to-destination 10.0.0.5:36463
```

As you can see, access to 10.0.0.5:30239 is forwarded to port 36463 on the node (a random listening port), and access to port 2222 of the ClusterIP 10.254.132.107 is likewise forwarded to local port 36463. Port 36463 is actually listened on by kube-proxy, which proxies the traffic to a backend pod.

### iptables mode

The other mode is iptables mode, which makes full use of kernel iptables to implement the Service proxy and load balancing. It has been the default mode since v1.2. The principle works as follows.

Because iptables mode relies on iptables NAT for forwarding, it too carries a non-negligible performance cost. Moreover, if the cluster has tens of thousands of Services/Endpoints, the iptables rule set on each Node becomes very large and performance degrades further.

As a result, most enterprises running K8s in production today do not use kube-proxy directly as their service proxy; instead they replace it with in-house solutions, or integrate HAProxy or Nginx through an Ingress Controller.

#### Example

iptables mode is implemented using the NAT forwarding of Linux iptables.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
    role: service
  name: mysql-service
spec:
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30964
  type: NodePort
  selector:
    mysql-service: "true"
```

The nodePort exposed by mysql-service is 30964, the port of the corresponding cluster IP (10.254.162.44) is 3306, and the backend pods also listen on 3306.

The two pods proxied by the mysql-service backend have IPs 192.168.125.129 and 192.168.125.131. Let's look at the iptables rules first.

```
$ iptables -S -t nat
...
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mysql-service:" -m tcp --dport 30964 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mysql-service:" -m tcp --dport 30964 -j KUBE-SVC-67RL4FN6JRUPOJYM
-A KUBE-SEP-ID6YWIT3F6WNZ47P -s 192.168.125.129/32 -m comment --comment "default/mysql-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ID6YWIT3F6WNZ47P -p tcp -m comment --comment "default/mysql-service:" -m tcp -j DNAT --to-destination 192.168.125.129:3306
-A KUBE-SEP-IN2YML2VIFH5RO2T -s 192.168.125.131/32 -m comment --comment "default/mysql-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IN2YML2VIFH5RO2T -p tcp -m comment --comment "default/mysql-service:" -m tcp -j DNAT --to-destination 192.168.125.131:3306
-A KUBE-SERVICES -d 10.254.162.44/32 -p tcp -m comment --comment "default/mysql-service: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-67RL4FN6JRUPOJYM
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-67RL4FN6JRUPOJYM -m comment --comment "default/mysql-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ID6YWIT3F6WNZ47P
-A KUBE-SVC-67RL4FN6JRUPOJYM -m comment --comment "default/mysql-service:" -j KUBE-SEP-IN2YML2VIFH5RO2T
```

First, a request arriving through port 30964 on a node enters the following chain:

```
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mysql-service:" -m tcp --dport 30964 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mysql-service:" -m tcp --dport 30964 -j KUBE-SVC-67RL4FN6JRUPOJYM
```

It then jumps to the KUBE-SVC-67RL4FN6JRUPOJYM chain:

```
-A KUBE-SVC-67RL4FN6JRUPOJYM -m comment --comment "default/mysql-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ID6YWIT3F6WNZ47P
-A KUBE-SVC-67RL4FN6JRUPOJYM -m comment --comment "default/mysql-service:" -j KUBE-SEP-IN2YML2VIFH5RO2T
```

Using the --probability feature of the iptables statistic match, a connection has a 50% chance of entering the KUBE-SEP-ID6YWIT3F6WNZ47P chain; otherwise it falls through to the unconditional second rule and enters KUBE-SEP-IN2YML2VIFH5RO2T.
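This fall-through behaviour is easy to simulate: the first rule matches with probability 0.5, and every connection it does not catch reaches the second, unconditional rule. A quick sketch (the function and counting loop are my illustration, not kube-proxy code):

```python
import random

def pick_chain(rng):
    # Mirrors: rule 1 matches with --probability 0.5, else fall through to rule 2.
    if rng.random() < 0.5:
        return "KUBE-SEP-ID6YWIT3F6WNZ47P"
    return "KUBE-SEP-IN2YML2VIFH5RO2T"

rng = random.Random(0)  # seeded for reproducibility
counts = {"KUBE-SEP-ID6YWIT3F6WNZ47P": 0, "KUBE-SEP-IN2YML2VIFH5RO2T": 0}
for _ in range(10_000):
    counts[pick_chain(rng)] += 1
# Over many connections, both chains receive roughly half of the traffic.
```

With more than two Endpoints, kube-proxy generates a cascade of such rules with probabilities 1/n, 1/(n-1), and so on, so each Endpoint still ends up equally likely.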

The specific purpose of the KUBE-SEP-ID6YWIT3F6WNZ47P chain is to DNAT the request to 192.168.125.129:3306.

```
-A KUBE-SEP-ID6YWIT3F6WNZ47P -s 192.168.125.129/32 -m comment --comment "default/mysql-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ID6YWIT3F6WNZ47P -p tcp -m comment --comment "default/mysql-service:" -m tcp -j DNAT --to-destination 192.168.125.129:3306
```

In the same way, KUBE-SEP-IN2YML2VIFH5RO2T DNATs the request to 192.168.125.131:3306.

```
-A KUBE-SEP-IN2YML2VIFH5RO2T -s 192.168.125.131/32 -m comment --comment "default/mysql-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IN2YML2VIFH5RO2T -p tcp -m comment --comment "default/mysql-service:" -m tcp -j DNAT --to-destination 192.168.125.131:3306
```

Having analyzed how nodePort works, let's look at clusterIP access. A request to port 3306 of the cluster IP (10.254.162.44) jumps directly to KUBE-SVC-67RL4FN6JRUPOJYM:

```
-A KUBE-SERVICES -d 10.254.162.44/32 -p tcp -m comment --comment "default/mysql-service: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-67RL4FN6JRUPOJYM
```

From there, forwarding proceeds exactly as in the NodePort case.

At this point you should have a clearer picture of how kube-proxy works in Kubernetes. Trying these commands on a real cluster is the best way to consolidate the ideas.
