2025-01-17 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report
This article explains how to obtain the real client IP from inside a Kubernetes Pod. It should be a useful reference; I hope you learn a lot from reading it.
Kubernetes relies on the kube-proxy component to implement Service communication and load balancing. In this process, SNAT rewrites the source address, so the service running in the Pod cannot see the real client IP. This article answers the question of how to obtain the real client IP in a Kubernetes cluster.
Select a backend service
Here, containous/whoami is chosen as the backend service image. Its Docker Hub page shows that accessing port 80 returns information about the client. In code, this information can be read from the HTTP headers.
Hostname: 6e0030e67d6a
IP: 127.0.0.1
IP: ::1
IP: 172.17.0.27
IP: fe80::42:acff:fe11:1b
GET / HTTP/1.1
Host: 0.0.0.0:32769
User-Agent: curl/7.35.0
Accept: */*

Cluster environment
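The plain-text reply is easy to consume programmatically. As a minimal sketch (a hypothetical helper, not part of the whoami project), the response can be folded into a dict, collecting repeated keys such as IP into lists:

```python
def parse_whoami(body: str) -> dict:
    """Parse a whoami plain-text response into a dict.

    Repeated keys (e.g. IP) are collected into lists; request lines
    such as "GET / HTTP/1.1" carry no ": " separator and are skipped.
    """
    info = {}
    for line in body.splitlines():
        key, sep, value = line.partition(": ")
        if not sep:  # no ": " separator -> not a header-style line
            continue
        if key in info:  # second occurrence -> promote value to a list
            if not isinstance(info[key], list):
                info[key] = [info[key]]
            info[key].append(value)
        else:
            info[key] = value
    return info
```

For example, `parse_whoami("IP: 127.0.0.1\nIP: ::1")` yields `{"IP": ["127.0.0.1", "::1"]}`.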
A brief overview of the cluster: it has three nodes, one master and two workers. As shown below:
Create a service
Create enterprise space, project
As shown in the following figure, the enterprise space and project are named realip
Create a service
Here, create a stateless service, select containous/whoami image, and use the default port.
Change the service to NodePort mode
Edit the public network access mode of the service and change it to NodePort mode.
Check the NodePort of the service; here it is 31509.
Access the service
Opening the Master node's EIP:31509 in a browser returns the following:
Hostname: myservice-fc55d766-9ttxt
IP: 127.0.0.1
IP: 10.233.70.42
RemoteAddr: 192.168.13.4:21708
GET / HTTP/1.1
Host: dev.chenshaowen.com:31509
User-Agent: Chrome/86.0.4240.198 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cookie: lang=zh
Dnt: 1
Upgrade-Insecure-Requests: 1
You can see that RemoteAddr is the IP of the Master node, not the real client IP. Host here is the access entry address; for convenience I used a domain name, which does not affect the test results.
Get the real IP directly through NodePort access
In the access above, the real client IP cannot be obtained because SNAT rewrites the source IP of traffic entering the Service. Setting the service's externalTrafficPolicy to Local mode solves this problem.
Open the configuration editing page for the service
Set the externalTrafficPolicy of the service to Local mode.
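A minimal sketch of the resulting Service manifest (the name myservice and the ports are taken from this walkthrough; adjust to your own service):

```yaml
# Sketch only: a NodePort Service that preserves the client source IP.
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
  selector:
    app: myservice
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31509
  externalTrafficPolicy: Local   # deliver only to local Pods; skip SNAT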
Accessing the service again returns the following:
Hostname: myservice-fc55d766-9ttxt
IP: 127.0.0.1
IP: 10.233.70.42
RemoteAddr: 139.198.254.11:51326
GET / HTTP/1.1
Host: dev.chenshaowen.com:31509
User-Agent: Chrome/86.0.4240.198 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Connection: keep-alive
Cookie: lang=zh
Dnt: 1
Upgrade-Insecure-Requests: 1
In Cluster mode, the client source IP is hidden and a second hop to another node may occur, but overall load distribution is good. In Local mode, the client source IP is preserved and the second hop is avoided for LoadBalancer and NodePort type services, but there is a risk of uneven traffic distribution.
The following is a comparison diagram:
When a request lands on a node that has no serving Pod, it is not reachable. With curl, the request hangs after TCP_NODELAY and eventually times out:
* Trying 139.198.112.248...
* TCP_NODELAY set
* Connection failed
* connect to 139.198.112.248 port 31509 failed: Operation timed out
* Failed to connect to 139.198.112.248 port 31509: Operation timed out
* Closing connection 0

Obtain the real IP through LB -> Service access
In a production environment, multiple nodes usually receive client traffic at the same time, and using Local mode alone reduces service availability. The purpose of introducing an LB is to use its health probing so that traffic is forwarded only to nodes that have a service Pod.
Take QingCloud's LB as an example. In the QingCloud console, create an LB, add a listener, and listen on port 31509. You can refer to the LB documentation (https://docs.qingcloud.com/product/network/loadbalancer/) for details, which will not be covered here.
As can be seen in the figure below, only the master node is active on port 31509 of the service, so traffic is directed only to the master node, as expected.
Then increase the number of replicas to 3.
Unfortunately, the Pods are not evenly distributed across the three nodes; two of them are on the master. As a result, not all LB backends light up. As shown below:
This requires adding an anti-affinity rule to the Deployment. There are two options. The first is a soft (preferred) policy, which does not guarantee that all LB backends light up and receive evenly distributed traffic.
spec:
  template:
    metadata:
      labels:
        app: myservice
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - myservice
              topologyKey: kubernetes.io/hostname
The other is a hard (required) policy, which forces Pods onto different nodes but limits the replica count: the total number of Pods cannot exceed the total number of Nodes.
spec:
  template:
    metadata:
      labels:
        app: myservice
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - myservice
            topologyKey: kubernetes.io/hostname
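To confirm the spread, the Pods can be listed with the nodes they landed on (a CLI sketch; the label app=myservice matches the Deployment above):

```shell
# The NODE column should show three different nodes once the hard policy applies.
kubectl get pods -l app=myservice -o wide
```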
With the hard policy, all backends finally light up, as shown below:
Get the real IP by accessing LB-> Ingress-> Service
If each service occupies its own LB, the cost is high and the configuration is inflexible: every new service requires a new port mapping on the LB.
Another option is for the LB to direct traffic on ports 80 and 443 to the Ingress Controller, which forwards it to the Service and finally to the service in the Pod.
This requires the LB to either pass traffic through transparently at the TCP layer, or forward the real IP at the HTTP layer. Set externalTrafficPolicy of the Ingress Controller to Local mode; the Service itself does not have to be set to Local.
If you want to improve accessibility, you can also refer to the above configuration of anti-affinity to ensure that there is an Ingress Controller on each back-end node.
Forwarding path of traffic:
LB (80/443) -> Ingress Controller (30000) -> myservice (80) -> myservice-fc55d766-xxxx (80)
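On the application side, once the Ingress Controller forwards these headers, the service should prefer them over the socket peer address. A minimal sketch (a hypothetical helper; the header names X-Real-Ip and X-Forwarded-For follow ingress-nginx conventions):

```python
def client_ip(headers: dict, remote_addr: str) -> str:
    """Best-effort client IP: X-Real-Ip first, then the first X-Forwarded-For
    hop, falling back to the TCP peer (which may be a cluster-internal SNAT IP)."""
    real_ip = headers.get("X-Real-Ip")
    if real_ip:
        return real_ip
    forwarded = headers.get("X-Forwarded-For")
    if forwarded:
        # X-Forwarded-For is "client, proxy1, proxy2"; the first hop is the client.
        return forwarded.split(",")[0].strip()
    return remote_addr.rsplit(":", 1)[0]  # strip the port from "ip:port"
```

Note these headers are trustworthy only when every external hop is controlled, since clients can spoof them if traffic reaches the service directly.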
First, check the [Get Client IP] option in the LB configuration.
Then open the public network access gateway of the project.
Then add a route for the service.
Finally, go to **Platform Management** -> **Cluster Management**, enter the cluster, and find the gateway corresponding to the realip project in the system project kubesphere-controls-system.
Edit the configuration file of the service and change externalTrafficPolicy to Local mode.
Accessing the service again returns the following:
Hostname: myservice-7dcf6b965f-vv6md
IP: 127.0.0.1
IP: 10.233.96.152
RemoteAddr: 10.233.70.68:34334
GET / HTTP/1.1
Host: realip.dev.chenshaowen.com
User-Agent: Chrome/87.0.4280.67 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Cache-Control: max-age=0
Cookie: _ga=GA1.2.896113372.1605489938; _gid=GA1.2.863456118.1605830768; lang=zh
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 139.198.113.75
X-Forwarded-Host: realip.dev.chenshaowen.com
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Original-Uri: /
X-Real-Ip: 139.198.113.75
X-Request-Id: 999fa36437a1180eda3160a1b9f495a4
X-Scheme: https

Thank you for reading this article carefully. I hope "How to get the real client IP in a Kubernetes Pod" has been helpful to you.
© 2024 shulou.com SLNews company. All rights reserved.