2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
Many newcomers have no clear idea of what Routing Mesh (ingress load balancing) is in Docker. This article walks through how it works step by step; by the end you should be able to trace the path of a request yourself.
We know that container-to-container communication, such as between 10.0.9.3 and 10.0.9.5 in the figure above, goes over the overlay network and is carried through a VXLAN tunnel.
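To make the VXLAN tunneling concrete, here is a minimal Python sketch of the 8-byte VXLAN header (per RFC 7348) that wraps each overlay frame; the VNI value 4097 is an arbitrary example, not one taken from this cluster.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Flags byte 0x08 means "VNI present"; the 24-bit VNI identifies
    which overlay network the encapsulated frame belongs to.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # 1 flags byte, 3 reserved bytes, 3-byte VNI, 1 reserved byte
    return struct.pack("!B3s3sB", 0x08, b"\x00\x00\x00",
                       vni.to_bytes(3, "big"), 0)

hdr = vxlan_header(4097)
```

Each frame sent between 10.0.9.3 and 10.0.9.5 is prefixed with such a header and then carried inside a UDP datagram between the hosts.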
Communication between services, however, goes through a VIP. For example, the client service talks to the web service, and web is scaled out to several replicas, so client reaches web through a virtual IP (VIP). How, then, is the VIP mapped to a concrete task such as 10.0.9.5 or 10.0.9.6? That is done by LVS.
What is LVS?
LVS stands for Linux Virtual Server. It implements load balancing inside the Linux kernel.
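Conceptually, LVS fronts several real servers with one virtual address and rewrites each connection to one of them in turn. The toy Python sketch below models that idea only; the VIP 10.0.9.2 is a hypothetical value, and real IPVS does this in the kernel, per connection, with several scheduler choices.

```python
from itertools import cycle

class VirtualServer:
    """Toy model of an LVS round-robin virtual server:
    one VIP fronting several real-server addresses."""

    def __init__(self, vip: str, backends: list[str]):
        self.vip = vip
        self._rr = cycle(backends)  # endless round-robin iterator

    def route(self, dst: str) -> str:
        # Traffic addressed to the VIP is rewritten to the next
        # backend in turn; other destinations pass through untouched.
        return next(self._rr) if dst == self.vip else dst

# Hypothetical VIP fronting the two web tasks from the figure
web = VirtualServer("10.0.9.2", ["10.0.9.5", "10.0.9.6"])
```

Calling `web.route("10.0.9.2")` repeatedly alternates between 10.0.9.5 and 10.0.9.6, which is the behavior the client service observes when it talks to the web service's VIP.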
We can reach wordpress on port 80 of any of the three nodes; this is what the ingress network provides. When a published port is accessed on any swarm node, that node hands the traffic to IPVS (IP Virtual Server), and LVS load-balances it to the real service tasks. In the figure above, for example, a request arriving at Docker Host3 is forwarded to the other two nodes.
Our experimental environment is the same as in the previous section, except that we scale whoami down to 2 replicas.
iie4bu@swarm-manager:~$ docker service ls
ID            NAME    MODE        REPLICAS  IMAGE                  PORTS
h5wlczp85sw5  client  replicated  1/1       busybox:1.28.3
9i6wz6cg4koc  whoami  replicated  3/3       jwilder/whoami:latest  *:8000->8000/tcp
iie4bu@swarm-manager:~$ docker service scale whoami=2
whoami scaled to 2
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
View the operation of whoami:
iie4bu@swarm-manager:~$ docker service ps whoami
ID            NAME      IMAGE                  NODE           DESIRED STATE  CURRENT STATE         ERROR  PORTS
6hhuf528spdw  whoami.1  jwilder/whoami:latest  swarm-manager  Running        Running 17 hours ago
9idgk9jbrlcm  whoami.3  jwilder/whoami:latest  swarm-worker2  Running        Running 16 hours ago
The two tasks run on the swarm-manager and swarm-worker2 nodes, respectively.
To access whoami in swarm-manager:
iie4bu@swarm-manager:~$ curl 127.0.0.1:8000
I'm cc9f97cc5056
iie4bu@swarm-manager:~$ curl 127.0.0.1:8000
I'm f47e05019fd9
To access whoami in swarm-worker1:
iie4bu@swarm-worker1:~$ curl 127.0.0.1:8000
I'm f47e05019fd9
iie4bu@swarm-worker1:~$ curl 127.0.0.1:8000
I'm cc9f97cc5056
Why can swarm-worker1 serve port 8000 when no whoami task runs on it locally?
You can see the local forwarding rules through iptables:
iie4bu@swarm-worker1:~$ sudo iptables -nL -t nat
[sudo] password for iie4bu:
Chain PREROUTING (policy ACCEPT)
target          prot opt source        destination
DOCKER-INGRESS  all  --  0.0.0.0/0     0.0.0.0/0     ADDRTYPE match dst-type LOCAL
DOCKER          all  --  0.0.0.0/0     0.0.0.0/0     ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target          prot opt source        destination

Chain OUTPUT (policy ACCEPT)
target          prot opt source        destination
DOCKER-INGRESS  all  --  0.0.0.0/0     0.0.0.0/0     ADDRTYPE match dst-type LOCAL
DOCKER          all  --  0.0.0.0/0     127.0.0.0/8   ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target          prot opt source        destination
MASQUERADE      all  --  0.0.0.0/0     0.0.0.0/0     ADDRTYPE match src-type LOCAL
MASQUERADE      all  --  172.17.0.0/16 0.0.0.0/0
MASQUERADE      all  --  172.19.0.0/16 0.0.0.0/0
MASQUERADE      all  --  172.18.0.0/16 0.0.0.0/0
MASQUERADE      tcp  --  172.17.0.2    172.17.0.2    tcp dpt:443
MASQUERADE      tcp  --  172.17.0.2    172.17.0.2    tcp dpt:80
MASQUERADE      tcp  --  172.17.0.2    172.17.0.2    tcp dpt:22

Chain DOCKER (2 references)
target          prot opt source        destination
RETURN          all  --  0.0.0.0/0     0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0     0.0.0.0/0     tcp dpt:443 to:172.17.0.2:443
DNAT            tcp  --  0.0.0.0/0     0.0.0.0/0     tcp dpt:81 to:172.17.0.2:80

Chain DOCKER-INGRESS (2 references)
target          prot opt source        destination
DNAT            tcp  --  0.0.0.0/0     0.0.0.0/0     tcp dpt:8000 to:172.19.0.2:8000
RETURN          all  --  0.0.0.0/0     0.0.0.0/0
We see the DOCKER-INGRESS chain: anything arriving on TCP port 8000 is DNATed to 172.19.0.2:8000. So what is 172.19.0.2?
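The logic of that DOCKER-INGRESS chain can be sketched as a tiny Python function; this is a toy model of the rule above, not how netfilter is implemented, and the source address 192.168.205.11 used below is a hypothetical client.

```python
def docker_ingress_dnat(dst_ip: str, dst_port: int, proto: str = "tcp"):
    """Toy model of the DOCKER-INGRESS chain seen above:
    DNAT tcp dpt:8000 to 172.19.0.2:8000, otherwise RETURN."""
    rules = [("tcp", 8000, ("172.19.0.2", 8000))]  # (proto, dport, new dst)
    for r_proto, r_port, new_dst in rules:
        if proto == r_proto and dst_port == r_port:
            return new_dst          # destination rewritten by DNAT
    return (dst_ip, dst_port)       # RETURN: packet passes untouched
```

A connection to any local address on port 8000 is thus redirected into 172.19.0.2, while all other traffic falls through to the normal routing path.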
Let's take a look at the local ip:
iie4bu@swarm-worker1:~$ ifconfig
br-3f2fc691f5da Link encap:Ethernet  HWaddr 02:42:c8:f4:03:ad
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

docker0   Link encap:Ethernet  HWaddr 02:42:43:69:b7:60
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:43ff:fe69:b760/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:83 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:9208 (9.2 KB)

docker_gwbridge Link encap:Ethernet  HWaddr 02:42:41:bf:4d:15
          inet addr:172.19.0.1  Bcast:172.19.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:41ff:febf:4d15/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:42 errors:0 dropped:0 overruns:0 frame:0
          TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3556 (3.5 KB)  TX bytes:13857 (13.8 KB)
The local bridge docker_gwbridge has IP 172.19.0.1, which is on the same subnet as 172.19.0.2, so we can guess that 172.19.0.2 belongs to some interface attached to docker_gwbridge. Check with brctl show:
iie4bu@swarm-worker1:~$ brctl show
bridge name       bridge id           STP enabled   interfaces
br-3f2fc691f5da   8000.0242c8f403ad   no
docker0           8000.02424369b760   no            veth75a496d
docker_gwbridge   8000.024241bf4d15   no            veth500f4b4
                                                    veth54af5f8
You can see that docker_gwbridge has two interfaces attached. Which of the two carries 172.19.0.2? First list the networks with docker network ls:
iie4bu@swarm-worker1:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
bdf23298113d   bridge            bridge    local
969e60257ba5   docker_gwbridge   bridge    local
cdcffe1b31cb   host              host      local
uz1kgf9j6m48   ingress           overlay   swarm
cojus8blvkdo   my-demo           overlay   swarm
3f2fc691f5da   network_default   bridge    local
dba4587ee914   none              null      local
View the details of docker_gwbridge:
iie4bu@swarm-worker1:~$ docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "969e60257ba50b070374d31ea43a0550d6cd3ae3e68623746642fe8736dee5a4",
        "Created": "2019-04-08T09:47:59.343371327+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "5559895ccaea972dad4f2fe52c0ec754d2d7c485dceb35083719768f611552e7": {
                "Name": "gateway_bf5031da0049",
                "EndpointID": "6ad54f228134798de719549c0f93c804425beb85dd97f024408c4f7fc393fdf9",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "gateway_ingress-sbox",
                "EndpointID": "177757eca7a18630ae91c01b8ac67bada25ce1ea050dad4ac5cc318093062003",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]
Two containers are connected to docker_gwbridge: gateway_bf5031da0049 and gateway_ingress-sbox, and gateway_ingress-sbox holds 172.19.0.2. In other words, the DNATed traffic is forwarded into the gateway_ingress-sbox network namespace.
Enter gateway_ingress-sbox:
iie4bu@swarm-worker1:~$ sudo ls /var/run/docker/netns
1-cojus8blvk  1-uz1kgf9j6m  44e6e70b2177  b1ba5b4dd9f2  bf5031da0049  ingress_sbox
iie4bu@swarm-worker1:~$ sudo nsenter --net=/var/run/docker/netns/ingress_sbox
(unrelated conda shell-configuration warnings omitted)
root@swarm-worker1:~#
This drops us into the ingress_sbox namespace. View its IP addresses:
root@swarm-worker1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.3/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.255.0.168/32 brd 10.255.0.168 scope global eth0
       valid_lft forever preferred_lft forever
15: eth2@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth2
       valid_lft forever preferred_lft forever
Indeed, eth2 in this namespace holds 172.19.0.2.
To inspect the LVS configuration, we exit the ingress_sbox namespace and install ipvsadm, the administration tool for LVS, on swarm-worker1.
iie4bu@swarm-worker1:~$ sudo apt-get install ipvsadm
After the installation succeeds, enter ingress_sbox again and run iptables -nL -t mangle:
root@swarm-worker1:~# iptables -nL -t mangle
Chain PREROUTING (policy ACCEPT)
target  prot opt source       destination
MARK    tcp  --  0.0.0.0/0    0.0.0.0/0      tcp dpt:8000 MARK set 0x100

Chain INPUT (policy ACCEPT)
target  prot opt source       destination
MARK    all  --  0.0.0.0/0    10.255.0.168   MARK set 0x100

Chain FORWARD (policy ACCEPT)
target  prot opt source       destination

Chain OUTPUT (policy ACCEPT)
target  prot opt source       destination

Chain POSTROUTING (policy ACCEPT)
target  prot opt source       destination
The MARK lines set up the load balancing: traffic to TCP port 8000, and traffic addressed to the VIP 10.255.0.168, is tagged with firewall mark 0x100, which IPVS uses to select its virtual service.
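The marking step can be modelled in a few lines of Python; this is a toy sketch of the mangle-table rule above, not netfilter itself. Note that mark 0x100 is decimal 256, which is why it reappears as "FWM 256" in the ipvsadm output below.

```python
def mark_packet(dst_port: int, published_port: int = 8000) -> int:
    """Mimic the mangle PREROUTING rule seen above: packets to the
    published port get firewall mark 0x100; others stay unmarked (0)."""
    return 0x100 if dst_port == published_port else 0

# IPVS selects its virtual service by this mark rather than by
# destination address: 0x100 hex == 256 decimal == "FWM 256".
```

The mark travels with the packet inside the kernel only; it is never written into the packet bytes on the wire.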
Then run ipvsadm -l:
root@swarm-worker1:~# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  256 rr
  -> 10.255.0.5:0                 Masq    100    0          0
  -> 10.255.0.7:0                 Masq    100    0          0
10.255.0.5 and 10.255.0.7 are the addresses of the two whoami tasks.
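Putting the mark and the virtual service together, the dispatch seen in the ipvsadm output can be sketched as follows; again a toy Python model of the "FWM 256 rr" service, not the kernel implementation.

```python
from itertools import cycle

# Toy model of the firewall-mark virtual service above: packets
# carrying mark 0x100 are spread round-robin ("rr") over the two
# whoami task addresses from the ipvsadm output.
fwm_services = {0x100: cycle(["10.255.0.5", "10.255.0.7"])}

def ipvs_dispatch(mark: int):
    """Return the real-server IP chosen for a marked packet,
    or None when the packet carries no known mark."""
    backends = fwm_services.get(mark)
    return next(backends) if backends is not None else None
```

Successive marked packets alternate between the two task IPs, which matches the alternating container IDs we saw when curling port 8000 repeatedly.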
View whoami on swarm-manager:
iie4bu@swarm-manager:~$ docker ps
CONTAINER ID  IMAGE                  COMMAND      CREATED       STATUS       PORTS     NAMES
cc9f97cc5056  jwilder/whoami:latest  "/app/http"  19 hours ago  Up 19 hours  8000/tcp  whoami.1.6hhuf528spdw9j9pla7l3tv3t
iie4bu@swarm-manager:~$ docker exec cc9 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.255.0.168/32 brd 10.255.0.168 scope global lo
       valid_lft forever preferred_lft forever
    inet 10.0.2.5/32 brd 10.0.2.5 scope global lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether 02:42:0a:ff:00:05 brd ff:ff:ff:ff:ff:ff
    inet 10.255.0.5/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
20: eth2@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth2
       valid_lft forever preferred_lft forever
23: eth3@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether 02:42:0a:00:02:07 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.7/24 brd 10.0.2.255 scope global eth3
       valid_lft forever preferred_lft forever
This whoami task's IP on the ingress network is indeed 10.255.0.5.
Do the same on swarm-worker2:
iie4bu@swarm-worker2:~$ docker container ls
CONTAINER ID  IMAGE                  COMMAND                  CREATED       STATUS       PORTS     NAMES
f47e05019fd9  jwilder/whoami:latest  "/app/http"              20 hours ago  Up 20 hours  8000/tcp  whoami.3.9idgk9jbrlcm3ufvkmbmvv2t8
633ddfc082b9  busybox:1.28.3         "sh -c 'while true;..."  20 hours ago  Up 20 hours            client.1.3iv3gworpyr5vdo0h9eortlw0
iie4bu@swarm-worker2:~$ docker exec -it f47 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.255.0.168/32 brd 10.255.0.168 scope global lo
       valid_lft forever preferred_lft forever
    inet 10.0.2.5/32 brd 10.0.2.5 scope global lo
       valid_lft forever preferred_lft forever
24: eth3@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether 02:42:0a:00:02:0d brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.13/24 brd 10.0.2.255 scope global eth3
       valid_lft forever preferred_lft forever
26: eth2@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.4/16 brd 172.18.255.255 scope global eth2
       valid_lft forever preferred_lft forever
28: eth0@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether 02:42:0a:ff:00:07 brd ff:ff:ff:ff:ff:ff
    inet 10.255.0.7/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
This task's IP on the ingress network is indeed 10.255.0.7.
So when a packet addressed to port 8000 enters ingress_sbox, LVS load-balances it across 10.255.0.5 and 10.255.0.7, and the packet then travels over the ingress overlay network to the swarm node running the chosen task.