
[K8S troubleshooting] clusterIP and Service cannot be accessed from inside a Pod in the cluster


Troubleshooting background: while deploying a production environment, the access address in the application's configuration file was set to a cluster Service. After deployment the service could not be reached, so a busybox pod was launched for testing. Inside busybox, the Service name resolved to an IP correctly through CoreDNS, but neither the Service name nor the clusterIP responded to ping.
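
For reference, that kind of busybox test can be reproduced with commands along the following lines (the Service name my-svc, the namespace default and the clusterIP 10.96.0.123 are placeholders, not values from the original environment):

kubectl run busybox --image=busybox:1.28 --restart=Never -it --rm -- sh
# inside the busybox shell:
nslookup my-svc.default.svc.cluster.local    # resolves to the clusterIP via CoreDNS
ping my-svc.default.svc.cluster.local        # no reply in the broken environment
ping 10.96.0.123                             # pinging the resolved clusterIP also fails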

Troubleshooting experience: I first checked whether kube-proxy was running normally. It had started fine, and restarting it made no difference; the ping still failed. I then checked the network plugin and restarted flannel, again with no effect. At that point I remembered another K8s environment in which the Service could be pinged normally. Comparing the configuration of the two environments, the only difference I found was in kube-proxy: the environment where ping worked used proxy-mode=ipvs, while the one that failed used the default mode (iptables).

In iptables mode the clusterIP is implemented purely as NAT rules; there is no network device behind the address to answer ICMP, so it does not respond to ping.
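
To make this concrete: in IPVS mode kube-proxy binds every Service clusterIP to a dummy network device on the node, which is why those addresses answer ICMP, whereas iptables mode only installs NAT rules. A quick way to see the difference on a node (assuming the ipvsadm tool is installed; these commands are illustrative, not taken from the original troubleshooting session):

ip addr show kube-ipvs0                  # ipvs mode: the dummy device carries every Service clusterIP
ipvsadm -Ln                              # ipvs mode: lists the virtual servers and their backend pods
iptables -t nat -L KUBE-SERVICES | head  # iptables mode: Services exist only as NAT rules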

After further testing, the fix was to add --proxy-mode=ipvs to kube-proxy, clear the firewall rules on the node, and restart kube-proxy; after that, both the Service and the clusterIP could be pinged normally.
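
A rough sketch of those steps (the k8s-app=kube-proxy label is the kubeadm default and may differ in other deployments):

kubectl -n kube-system edit cm kube-proxy                  # set mode: "ipvs" (equivalent to --proxy-mode=ipvs)
iptables -F && iptables -t nat -F && iptables -X           # clear the stale rules on the node
kubectl -n kube-system delete pod -l k8s-app=kube-proxy    # restart kube-proxy so it rebuilds rules in ipvs mode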

When learning K8S I had been ignoring the underlying traffic-forwarding mechanisms, i.e. IPVS and iptables, assuming that as long as traffic reached the pods it did not matter which mode was used. I will need to pay closer attention to these details in the future.

Addendum: switching kube-proxy to IPVS mode in a kubeadm deployment.

By default, the deployed kube-proxy runs in iptables mode; its log shows the line: Flag proxy-mode="" unknown, assuming iptables proxy

[root@k8s-master]# kubectl logs -n kube-system kube-proxy-ppdb6
W1013 06:55:35.773739 1 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.868822 1 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.869786 1 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.870800 1 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.876832 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I1013 06:55:35 1 server_others.go:249] Using iptables Proxier.
I1013 06:55:35.890892 1 server.go:534] Version: v1.15.0
I1013 06:55:35.909025 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1013 06:55:35 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1013 06:55:35.919298 1 conntrack.go:83] Setting conntrack hashsize to 32768
I1013 06:55:35.945969 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1013 06:55:35.946044 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1013 06:55:35.946623 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1013 06:55:35.946695 1 config.go:187] Starting service config controller
I1013 06:55:35.946713 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1013 06:55:36.047121 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I1013 06:55:36.047195 1 controller_utils.go:1036] Caches are synced for service config controller
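
A quick way to confirm which mode a running kube-proxy instance actually chose is to grep its log for the Proxier line (the pod name below comes from kubectl get pod -n kube-system and will differ per cluster):

kubectl -n kube-system get pod -l k8s-app=kube-proxy
kubectl -n kube-system logs kube-proxy-ppdb6 | grep -i proxier
# iptables mode prints "Using iptables Proxier.", ipvs mode prints "Using ipvs Proxier."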

Here we need to modify the kube-proxy ConfigMap and set mode to ipvs.

[root@k8s-master]# kubectl edit cm kube-proxy -n kube-system
...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
...
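
The ConfigMap change does not take effect until kube-proxy restarts. A minimal sketch of restarting and verifying, assuming the default kubeadm label k8s-app=kube-proxy:

kubectl -n kube-system delete pod -l k8s-app=kube-proxy                # the DaemonSet recreates the pods with the new mode
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier    # should now report the ipvs Proxier
ipvsadm -Ln                                                            # virtual servers should appear for each Service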

One thing to pay attention to in ipvs mode is that the ip_vs kernel modules must be loaded:

cat > /etc/sysconfig/modules/ipvs.modules
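
A minimal sketch of a complete version of that module-loading script, assuming the modules the kube-proxy log complained about plus nf_conntrack_ipv4 for connection tracking (on kernels 4.19 and later the module is called nf_conntrack instead):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
# load the IPVS schedulers used by kube-proxy in ipvs mode
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack    # confirm the modules are loaded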
