2025-04-04 Update From: SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 05/31 Report --
How do you use LB direct access to Pod on Tencent Kubernetes Engine (TKE)? This article analyzes the question in detail and walks through a solution, in the hope of helping readers who want to solve this problem find a simple and practical approach.
What is LB direct access to Pod?
Kubernetes provides the NodePort Service type, which opens the same port on every node to expose a Service. The traditional implementation of the LoadBalancer Service type on many clouds is also based on NodePort: the LB binds each node's NodePort as a backend and receives external traffic, forwards it to the NodePort of one of the nodes, and the in-cluster load balancing (iptables or ipvs) then forwards it to a Pod:
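As context for the NodePort path just described, a minimal NodePort Service looks like the following sketch (the names and port number are illustrative, not from the original article; in the traditional scheme the cloud LB binds each node's IP plus this nodePort as its backend):

```yaml
# Minimal NodePort Service: every node in the cluster opens port 30080,
# and traffic arriving at <nodeIP>:30080 is load-balanced to matching Pods
# by iptables/ipvs inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport     # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80               # Service (cluster) port
    targetPort: 80         # container port
    nodePort: 30080        # port opened on every node (default range 30000-32767)
    protocol: TCP
```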
TKE's default LoadBalancer Service and default Ingress are implemented the same way, but TKE also supports LB direct access to Pod, in which the Pod IP and port are bound directly to the LB backend, without binding the node's NodePort:
Why do I need LB direct access to Pod?
Binding the LB directly to NodePorts is the simplest and most common way to implement an Ingress or a LoadBalancer-type Service in the cloud. So why is this implementation not enough? Why build a direct LB-to-Pod mode?
First, let's examine some problems with the traditional NodePort-based implementation:
After traffic is forwarded from the LB to a NodePort, it must be SNATed before being forwarded to a Pod, which adds some performance overhead.
If traffic is concentrated on a few NodePorts (for example, when a gateway is pinned to fixed nodes with nodeSelector), source ports may be exhausted or conntrack insertions may conflict.
The NodePort itself also acts as a load balancer. If the LB binds the NodePorts of many nodes, the load-balancing state is scattered across two layers, which can lead to global load imbalance.
With LB direct access to Pod, all of the above problems disappear, and there are additional benefits:
Since there is no SNAT, externalTrafficPolicy: Local is no longer required to obtain the client source IP.
Session persistence is easier to implement: simply enable session persistence on the CLB, with no need to set sessionAffinity on the Service.
Therefore, typical scenarios for LB direct access to Pod include:
Obtaining the client's real source IP at layer 4 without relying on externalTrafficPolicy: Local.
Further improving network performance.
Making session persistence easier.
Avoiding the global load imbalance introduced by two layers of connection scheduling.
What are the prerequisites?
To use LB direct access to Pod, the following prerequisites must be met:
The Kubernetes cluster version must be 1.12 or higher. When the LB binds Pods directly, deciding whether a Pod is Ready involves more than checking that it is Running and passes its readinessProbe: the LB's health check of the Pod must also pass. This relies on the ReadinessGate feature, which was not supported until Kubernetes 1.12.
The cluster network mode must have VPC-CNI (elastic network interface, ENI) enabled, because the current implementation of LB direct access to Pod is based on ENIs. The ordinary (Global Router) network mode is not supported yet; support is planned for the future.
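To illustrate the ReadinessGate mechanism mentioned in the first prerequisite, a Pod can declare additional readiness conditions in its spec; the Pod only becomes Ready once an external controller sets each named condition to "True" in the Pod's status. The condition type below is a placeholder, not the actual one the TKE controller injects; it only shows the shape of the feature:

```yaml
# Sketch of a Pod readinessGate. Besides Running + readinessProbe,
# the Pod stays NotReady until a controller patches the condition
# "example.com/lb-health-check" (placeholder name) to "True" in status.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-gate-demo   # illustrative name
spec:
  readinessGates:
  - conditionType: "example.com/lb-health-check"   # placeholder, set by an external controller
  containers:
  - name: nginx
    image: nginx
```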
How do I use it?
Since LB direct access to Pod relies on VPC-CNI, you need to make sure your Pods use elastic network interfaces (ENIs):
If the VPC-CNI network plug-in was selected when the cluster was created, Pods use ENIs by default.
If the Global Router network plug-in was selected at cluster creation and VPC-CNI support was enabled later (that is, the two modes are mixed), Pods do not use ENIs by default. In that case, create the workload with YAML: add the annotation tke.cloud.tencent.com/networks: tke-route-eni to the Pod to declare that it uses an ENI, and add tke.cloud.tencent.com/eni-ip: "1" to the requests and limits of one of its containers. Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment-eni
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      annotations:
        tke.cloud.tencent.com/networks: tke-route-eni
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            tke.cloud.tencent.com/eni-ip: "1"
          limits:
            tke.cloud.tencent.com/eni-ip: "1"
When exposing a service with a LoadBalancer-type Service, you need to declare direct access mode:
If you create the Service through the console, select the "LB direct access to Pod" mode:
If you create the Service with YAML, add the annotation service.cloud.tencent.com/direct-access: "true" to the Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.cloud.tencent.com/direct-access: "true"
  labels:
    app: nginx
  name: nginx-service-eni
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 80-80-no
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
When exposing a service with an Ingress, you likewise need to declare direct access mode:
If you create the Ingress through the console, select the "LB direct access to Pod" mode:
If you create the Ingress with YAML, add the annotation ingress.cloud.tencent.com/direct-access: "true" to the Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.cloud.tencent.com/direct-access: "true"
    kubernetes.io/ingress.class: qcloud
  name: test-ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /

This answers the question of how to use LB direct access to Pod on TKE. I hope the above content helps; if you still have doubts to resolve, you can follow the industry information channel to learn more.
© 2024 shulou.com SLNews company. All rights reserved.