This article mainly shows how to use the flannel network plug-in with Docker and Kubernetes. The content is straightforward and clearly organized, and I hope it helps clear up your doubts as we walk through how the flannel network plug-in works.
With Docker's default networking, cross-node container communication has to go through NAT, that is, source address translation.
K8s network communication:
1) Container-to-container: multiple containers inside the same pod communicate with each other over the loopback interface (lo).
2) Pod-to-pod: pod IP to pod IP; pods can communicate with each other directly, without any address translation.
3) Pod-to-service: pod IP to cluster IP (that is, the service IP) and back to pod IP; this is handled by iptables or ipvs rules. Note that ipvs cannot fully replace iptables, because ipvs only does load balancing and cannot do NAT.
4) Service to clients outside the cluster.
[root@master pki]# kubectl get configmap -n kube-system
NAME                                 DATA   AGE
coredns                              1      22d
extension-apiserver-authentication   6      22d
kube-flannel-cfg                     2      22d
kube-proxy                           2      22d
kubeadm-config                       1      22d
kubelet-config-1.11                  1      22d
kubernetes-dashboard-settings        1      9h
[root@master pki]# kubectl get configmap kube-proxy -o yaml -n kube-system
...
    mode: ""
...
Since mode is empty here, we can change it to ipvs.
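For completeness, here is a minimal sketch of how the mode is usually switched on a kubeadm cluster. The k8s-app=kube-proxy label and the step of recreating the kube-proxy pods are assumptions about a standard kubeadm setup, and ipvs also needs the ip_vs kernel modules to be loaded:

# Edit the kube-proxy ConfigMap and set the proxy mode (inside config.conf on kubeadm clusters)
kubectl edit configmap kube-proxy -n kube-system
#   mode: "ipvs"

# Recreate the kube-proxy pods so they pick up the new mode (label assumed from kubeadm defaults)
kubectl delete pods -n kube-system -l k8s-app=kube-proxy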
K8s relies on the CNI interface to plug in third-party network plug-ins and provide network communication. The more popular plug-ins at the moment are flannel, calico, canal and kube-router.
The solutions these plug-ins use fall into the following categories:
1) Virtual bridge: a virtual bridge plus virtual network cards (veth pairs), where multiple containers share a virtual network card for communication.
2) Multiplexing: MACVLAN, where multiple containers share one physical network card for communication.
3) Hardware switching: SR-IOV, where one physical network card is virtualized into multiple interfaces; this gives the best performance.
Location of the CNI plug-in configuration:
[root@master ~]# cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
Flannel only supports network communication; it does not support network policies.
Calico supports both network communication and network policies.
Canal is the combination of flannel + calico.
Alternatively, instead of using canal, we can deploy flannel to provide network communication and then deploy calico purely for network policy.
MTU: the maximum packet size that can be carried at a given layer of a communication protocol.
[root@master ~]# ifconfig
cni0: flags=4163  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::4097:d5ff:fe28:6b64  prefixlen 64  scopeid 0x20
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 1609844  bytes 116093191 (110.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1632952  bytes 577989701 (551.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:83:f8:b8:ff  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens192: flags=4163  mtu 1500
        inet 172.16.1.100  netmask 255.255.255.0  broadcast 172.16.1.255
        inet6 fe80::9cf3:d9de:59f:c320  prefixlen 64  scopeid 0x20
        inet6 fe80::5707:6115:267b:bff5  prefixlen 64  scopeid 0x20
        inet6 fe80::e34:f952:2859:4c69  prefixlen 64  scopeid 0x20
        ether 00:50:56:a2:4e:cb  txqueuelen 1000  (Ethernet)
        RX packets 5250378  bytes 704067861 (671.4 MiB)
        RX errors 139  dropped 190  overruns 0  frame 0
        TX packets 4988169  bytes 4151179300 (3.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::a82c:bcff:fef8:895c  prefixlen 64  scopeid 0x20
        ether aa:2c:bc:f8:89:5c  txqueuelen 0  (Ethernet)
        RX packets 51  bytes 3491
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53  bytes 5378 (5.2 KiB)
        TX errors 0  dropped 10  overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1  (Local Loopback)
        RX packets 59118846  bytes 15473986573 (14.4 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 59118846  bytes 15473986573 (14.4 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth7ec94aab: flags=4163  mtu 1450
        inet6 fe80::487d:5bff:fef7:484d  prefixlen 64  scopeid 0x20
        ether 4a:7d:5b:f7:48:4d  txqueuelen 0  (Ethernet)
        RX packets 88112  bytes 19831802 (18.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 105718  bytes 13343894 (12.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethf703483a: flags=4163  mtu 1450
        inet6 fe80::b06a:eaff:fec3:33a8  prefixlen 64  scopeid 0x20
        ether b2:6a:ea:c3:33:a8  txqueuelen 0  (Ethernet)
        RX packets 760882  bytes 59400960 (56.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 763263  bytes 282299805 (269.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethff579703: flags=4163  mtu 1450
        inet6 fe80::d82f:37ff:fe9a:b6d0  prefixlen 64  scopeid 0x20
        ether da:2f:37:9a:b6:d0  txqueuelen 0  (Ethernet)
        RX packets 760850  bytes 59398245 (56.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 764016  bytes 282349248 (269.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
From the ifconfig output we can see that the address of flannel.1 is 10.244.0.0, its netmask is 255.255.255.255, and its MTU is 1450. The MTU is lowered from 1500 to 1450 because room has to be set aside for the overlay (VxLAN) encapsulation; those reserved bytes are the extra overhead.
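As a quick sanity check on that 1450 figure (assuming the standard VxLAN-over-IPv4 framing), the 50 bytes reserved out of the physical 1500-byte MTU break down roughly as follows:

  inner Ethernet header   14 bytes
  VXLAN header             8 bytes
  outer UDP header         8 bytes
  outer IPv4 header       20 bytes
  --------------------------------
  total encapsulation     50 bytes   =>  1500 - 50 = 1450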
The cni0 bridge only appears once a pod is running on the node.
Pods on different nodes can communicate with each other through the flannel tunnel. The VxLAN protocol is used by default; its performance is somewhat lower because of the extra encapsulation overhead.
The second flannel backend is called host-gw (host gateway): each node uses its own network interface as the gateway for its pods, which lets pods on different nodes communicate. Its performance is higher than VxLAN because there is no encapsulation overhead. The disadvantage is that all nodes must be on the same network segment.
In addition, if the nodes are on the same network segment, VxLAN can also borrow the host-gw behaviour: traffic is routed directly through the physical network card's gateway instead of going through the overlay tunnel, which improves VxLAN's performance. This flannel feature is called Directrouting.
[root@master ~]# kubectl get daemonset -n kube-system
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
kube-flannel-ds-amd64   3         3         3       3            3           beta.kubernetes.io/arch=amd64   22d
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE
kube-flannel-ds-amd64-6zqzr   1/1     Running   8          22d   172.16.1.100   master
kube-flannel-ds-amd64-7qtcl   1/1     Running   7          22d   172.16.1.101   node1
kube-flannel-ds-amd64-kpctn   1/1     Running   6          22d   172.16.1.102   node2
You can see that flannel runs as pods managed by a DaemonSet controller (flannel can actually also run directly as a daemon process on the host).
[root@master ~]# kubectl get configmap -n kube-system
NAME               DATA   AGE
kube-flannel-cfg   2      22d
[root@master ~]# kubectl get configmap kube-flannel-cfg -o json -n kube-system
...
"net-conf.json": "{\n  \"Network\": \"10.244.0.0/16\",\n  \"Backend\": {\n    \"Type\": \"vxlan\"\n  }\n}\n"
...
Configuration parameters of flannel (a sample net-conf.json putting them together is sketched after this list):
1. Network: the network address, in CIDR format, that flannel uses; it is carved up to provide pod networks to each node.
1) 10.244.0.0/16 ->
Master: 10.244.0.0/24
Node01: 10.244.1.0/24
...
Node255: 10.244.255.0/24
This can support roughly 256 nodes (one /24 per node).
2) 10.0.0.0/8 ->
10.0.0.0/24
...
10.255.255.0/24
This can support more than 60,000 nodes.
2. SubnetLen: when carving Network into per-node subnets, the mask length to use; the default is 24 bits.
3. SubnetMin: the lowest subnet that may be handed out to a node. For example, setting it to 10.244.10.0/24 means the subnets 10.244.0.0/24 through 10.244.9.0/24 will not be allocated.
4. SubnetMax: the highest subnet that may be handed out, for example 10.244.100.0/24.
5. Backend: vxlan, host-gw, or udp (udp is the slowest).
Flannel supports multiple backends:
Vxlan:
1. vxlan (the default tunnel mode)
2. Directrouting
Host-gw (Host Gateway): not generally recommended; it only works within a layer-2 network and does not support crossing network segments, and with thousands of pods it can easily produce a broadcast storm.
UDP: the poorest performance.
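Putting these parameters together, a sample net-conf.json might look like the sketch below. The subnet bounds are purely illustrative values, not taken from this cluster:

{
  "Network": "10.244.0.0/16",
  "SubnetLen": 24,
  "SubnetMin": "10.244.10.0",
  "SubnetMax": "10.244.100.0",
  "Backend": {
    "Type": "vxlan"
  }
}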
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-deploy-69b47bc96d-79fqh   1/1     Running   4          7d    10.244.1.97   node1
myapp-deploy-69b47bc96d-tc54k   1/1     Running   4          7d    10.244.2.88   node2
[root@master ~]# kubectl exec -it myapp-deploy-69b47bc96d-79fqh -- /bin/sh
/ # ping 10.244.2.88        # ping the pod IP on the other node
PING 10.244.2.88 (10.244.2.88): 56 data bytes
64 bytes from 10.244.2.88: seq=0 ttl=62 time=0.459 ms
64 bytes from 10.244.2.88: seq=0 ttl=62 time=0.377 ms
64 bytes from 10.244.2.88: seq=1 ttl=62 time=0.252 ms
64 bytes from 10.244.2.88: seq=2 ttl=62 time=0.261 ms
If you capture packets on the node, you will find that you cannot catch these ICMP packets on ens192 at all (there they are wrapped inside the VxLAN/UDP encapsulation):
[root@master ~]# tcpdump -i ens192 -nn icmp
[root@master ~]# yum install bridge-utils -y
[root@master ~]# brctl show docker0
bridge name   bridge id           STP enabled   interfaces
docker0       8000.024283f8b8ff   no
[root@master ~]# brctl show cni0
bridge name   bridge id           STP enabled   interfaces
cni0          8000.0a580af40001   no            veth7ec94aab
                                                vethf703483a
                                                vethff579703
You can see that these veth interfaces are bridged onto cni0.
brctl show lists the existing bridges and the interfaces attached to them.
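If bridge-utils is not available, the same view can usually be obtained with iproute2 (an alternative to brctl, not what the walkthrough above uses):

# List the interfaces enslaved to the cni0 bridge
ip link show master cni0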
[root@node1 ~]# tcpdump -i cni0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:40:11.370754 IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 4864, seq 96, length 64
23:40:11.370988 IP 10.244.2.88 > 10.244.1.97: ICMP echo reply, id 4864, seq 96, length 64
23:40:12.370888 IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 4864, seq 97, length 64
23:40:12.371090 IP 10.244.2.88 > 10.244.1.97: ICMP echo reply, id 4864, seq 97, length 64
23:40:13.371015 IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 4864, seq 98, length 64
23:40:13.371239 IP 10.244.2.88 > 10.244.1.97: ICMP echo reply, id 4864, seq 98, length 64
23:40:14.371128 IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 4864, seq 99, length 64
As you can see, on the node the ping traffic between the containers can be captured on the cni0 interface.
In fact, the data path of this ping is: in through cni0, out through flannel.1, and finally out of the host via the physical network card ens192. So we can also capture the packets on flannel.1:
[root@node1 ~]# tcpdump -i flannel.1 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
03:12:36.823315 IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 4864, seq 12840, length 64
03:12:36.823496 IP 10.244.2.88 > 10.244.1.97: ICMP echo reply, id 4864, seq 12840, length 64
... IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 4864, seq 12841, length 64
... IP 10.244.2.88 > 10.244.1.97: ICMP echo reply, id 4864, seq 12841, length 64
Similarly, the packets can also be captured on the physical NIC ens192, this time filtering by host rather than by icmp, because there the ICMP traffic is encapsulated in VxLAN/UDP:
[root@node1 ~]# tcpdump -i ens192 -nn host 172.16.1.102    # 172.16.1.102 is the physical IP of node2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens192, link-type EN10MB (Ethernet), capture size 262144 bytes
10:59:24.234174 IP 172.16.1.101.60617 > 172.16.1.102.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 7168, seq 0, length 64
10:59:24.234434 IP 172.16.1.102.54894 > 172.16.1.101.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.88 > 10.244.1.97: ICMP echo reply, id 7168, seq 0, length 64
10:59:25.234301 IP 172.16.1.101.60617 > 172.16.1.102.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 7168, seq 1, length 64
... IP 172.16.1.102.54894 > 172.16.1.101.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.88 > 10.244.1.97: ICMP echo reply, id 7168, seq 1, length 64
10:59:26.234415 IP 172.16.1.101.60617 > 172.16.1.102.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 7168, seq 2, length 64
10:59:26.234592 IP 172.16.1.102.54894 > 172.16.1.101.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.88 > 10.244.1.97: ICMP echo reply, id 7168, seq 2, length 64
... IP 172.16.1.101.60617 > 172.16.1.102.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.97 > 10.244.2.88: ICMP echo request, id 7168, seq 3, length 64
Next, let's change the communication mode of flannel to directrouting.
[root@master flannel]# cd /root/manifests/flannel
[root@master flannel]# kubectl edit configmap kube-flannel-cfg -n kube-system
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan",
    "Directrouting": true        # add this line
  }
}
[root@master flannel]# ip route show
default via 172.16.1.254 dev ens192 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink    # 10.244.1.0 is the address configured on node1's flannel.1; traffic to 10.244.1.0/24 still leaves through the local flannel.1
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink    # likewise, 10.244.2.0 is the address configured on node2's flannel.1
172.16.1.0/24 dev ens192 proto kernel scope link src 172.16.1.100 metric 100
[root@master flannel]# kubectl get configmap kube-flannel-cfg -o json -n kube-system
...
"net-conf.json": "{\n  \"Network\": \"10.244.0.0/16\",\n  \"Backend\": {\n    \"Type\": \"vxlan\",\n    \"Directrouting\": true\n  }\n}\n"
...
Seeing Directrouting in the output means the change is in place.
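Editing the ConfigMap alone does not reconfigure the already-running flannel pods. A full cluster restart works (as done below), but a lighter option is usually to recreate just the flannel pods so they reload net-conf.json. A sketch, assuming the default app=flannel label from the upstream kube-flannel manifest:

# Recreate the flannel pods; the DaemonSet brings them back with the new net-conf.json
kubectl delete pods -n kube-system -l app=flannel
# Check that they come back up
kubectl get pods -n kube-system -l app=flannel -o wide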
After restarting (the whole k8s cluster, or just the flannel pods as sketched above), take another look:
[root@master ~]# ip route show
default via 172.16.1.254 dev ens192 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1   # traffic to the local pod subnet is forwarded directly on cni0, with no other interface involved
10.244.1.0/24 via 172.16.1.101 dev ens192   # traffic to 10.244.1.0/24 now leaves via 172.16.1.101 through the local physical network card ens192, i.e. through the physical network rather than the flannel tunnel; this is directrouting
10.244.2.0/24 via 172.16.1.102 dev ens192
172.16.1.0/24 dev ens192 proto kernel scope link src 172.16.1.100 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
Continue to log in to a pod for ping testing:
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-deploy-69b47bc96d-75g2b   1/1     Running   0          12m   10.244.1.124   node1
myapp-deploy-69b47bc96d-jwgwm   1/1     Running   0          3s    10.244.2.100   node2
[root@master ~]# kubectl exec -it myapp-deploy-69b47bc96d-75g2b -- /bin/sh
/ # ping 10.244.2.100
PING 10.244.2.100 (10.244.2.100): 56 data bytes
64 bytes from 10.244.2.100: seq=0 ttl=62 time=0.536 ms
64 bytes from 10.244.2.100: seq=1 ttl=62 time=0.206 ms
64 bytes from 10.244.2.100: seq=2 ttl=62 time=0.206 ms
64 bytes from 10.244.2.100: seq=3 ttl=62 time=0.203 ms
64 bytes from 10.244.2.100: seq=4 ttl=62 time=0.210 ms

[root@node1 ~]# tcpdump -i ens192 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens192, link-type EN10MB (Ethernet), capture size 262144 bytes
12:31:10.899403 IP 10.244.1.124 > 10.244.2.100: ICMP echo request, id 8960, seq 24, length 64
12:31:10.899546 IP 10.244.2.100 > 10.244.1.124: ICMP echo reply, id 8960, seq 24, length 64
12:31:11.899505 IP 10.244.1.124 > 10.244.2.100: ICMP echo request, id 8960, seq 25, length 64
12:31:11.899639 IP 10.244.2.100 > 10.244.1.124: ICMP echo reply, id 8960, seq 25, length 64
From the packet capture you can see that pings between the pods now go in and out through the physical network card ens192. This is directrouting, and it performs better than the default vxlan tunnel.
That is all of "how to use the flannel network plug-in in docker". Thank you for reading, and I hope sharing this content has helped you.