System environment
Manager node: CentOS Linux release 7.4.1708 (Core)
Worker node: CentOS Linux release 7.5.1804 (Core)
Docker version Information
Manager node: Docker version 18.09.4, build d14af54266
Worker node: Docker version 19.03.1, build 74b1e89
Docker Swarm system environment
Manager node: 192.168.246.194
Worker node: 192.168.246.195
Network status before creating the docker swarm cluster

Manager node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
15426f623c37        host                host                local
dd5d570ac60e        none                null                local

Worker node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
e2da5d928935        host                host                local
a7dbda3b96e8        none                null                local

Create the docker swarm cluster

Initialize the docker swarm cluster
Run on the manager node: docker swarm init
Run on the worker node: docker swarm join --token SWMTKN-1-0p3g6ijmphmw5xrikh9e3asg5n3yzan0eomnsx1xuvkovvgfsp-enrmg2lj1dejg5igmnpoaywr1 192.168.246.194:2377
Description ⚠️:
If you forget the docker swarm join command, you can find it using the following command:
(1) for worker node: docker swarm join-token worker
(2) for manager node: docker swarm join-token manager
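For example, on the manager node (the token in the sample output below is only a placeholder, not a real one):
# docker swarm join-token worker
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-<worker-token> 192.168.246.194:2377
Copy that line and run it on 192.168.246.195 to (re)join it as a worker.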
View cluster node information

Manager node:
# docker node ls
ID                            HOSTNAME      STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
hplz9lawfpjx6fpz0j1bevocp     MyTest03      Ready     Active                          19.03.1
q5af6b67bmho8z0d7**m2yy5j *   mysql-nginx   Ready     Active         Leader           18.09.4

View cluster network information

Manager node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local

Worker node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
985367037d3b        docker_gwbridge     bridge              local
e2da5d928935        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
a7dbda3b96e8        none                null                local
Description ⚠️:
When the docker swarm cluster is created, besides docker0 Docker creates two additional networks on each host: a bridge network (docker_gwbridge) and an overlay network (ingress), together with a hidden transitional namespace, ingress_sbox. We can now build our own overlay network on the manager node with the following command:
docker network create -d overlay uber-svc
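If you prefer to pick the subnet yourself, or want to attach standalone containers to the network for debugging, the overlay can be created with extra options. This is only a sketch; the subnet value is an arbitrary example and is not used in the rest of this article:
# docker network create -d overlay --subnet 10.0.9.0/24 --attachable uber-svc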
Now look again at the docker swarm cluster networks on the two hosts, manager and worker:
Manager node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local
kzxwwwtunpqe        uber-svc            overlay             swarm     ==> this is the uber-svc network we just built

Worker node:
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
70ed15a24acd        bridge              bridge              local
985367037d3b        docker_gwbridge     bridge              local
e2da5d928935        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
a7dbda3b96e8        none                null                local
Description ⚠️:
Notice that the uber-svc network does not appear on the worker node. This is because an overlay network is only extended to a node once a running container on that node is attached to it. This lazy, on-demand strategy improves the scalability of the network by not spreading network state to nodes that do not need it.
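A quick way to observe this lazy behavior yourself (the service is created later in this article; until one of its tasks lands on the worker, the grep prints nothing):
# docker network ls | grep uber-svc        ==> on the worker, before any task is scheduled here: no output
# docker network ls | grep uber-svc        ==> after a task using uber-svc runs here, the network appears:
kzxwwwtunpqe        uber-svc            overlay             swarm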
View network namespace information

Manager node:
# ip netns
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)

Worker node:
# ip netns
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)
Description ⚠️:
(1) Because the network namespace files that Docker creates for containers and overlay networks are no longer under the operating system default /var/run/netns, they can only be listed with ip netns after manually creating a symbolic link: ln -s /var/run/docker/netns /var/run/netns.
(2) Sometimes a network namespace name is prefixed with a sequence number such as 1- or 2-, and sometimes it is not; this has no effect on how the network communicates or operates.
View network IPAM (IP Address Management) information
(1) the IPAM (IP Address Management) allocation of ingress network is as follows:
Manager node and worker node are the same:
# docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "8lyfiluksqu09jfdjndhj68hl",
        "Created": "2019-09-09T17:59:06.326723762+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.255.0.0/16",    ==> ingress subnet
                    "Gateway": "10.255.0.1"       ==> ingress gateway
                }
(2) The uber-svc overlay network we built uses the IPAM automatically assigned by docker:
# docker network inspect uber-svc
[
    {
        "Name": "uber-svc",
        "Id": "kzxwwwtunpqeucnrhmirg6rhm",
        "Created": "2019-09-09T10:14:06.606521342Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",      ==> uber-svc subnet
                    "Gateway": "10.0.0.1"         ==> uber-svc gateway
                }

Docker Swarm load balancing falls into two cases
(1) Ingress Load Balancing
(2) Internal Load Balancing
Description ⚠️: this article focuses on the second case, Internal Load Balancing.
Define helper shell scripts
Before starting the practice below, let's prepare the following two scripts. Concrete usage examples are given after each one.
The first script, docker_netns.sh:

#!/bin/bash
# Enter the network namespace of a container (by container id, name, or namespace name)
# and run a command there; with no arguments, list all Docker network namespaces.

NAMESPACE=$1

if [[ -z $NAMESPACE ]]; then
    ls -1 /var/run/docker/netns/
    exit 0
fi

NAMESPACE_FILE=/var/run/docker/netns/${NAMESPACE}

if [[ ! -f $NAMESPACE_FILE ]]; then
    NAMESPACE_FILE=$(docker inspect -f "{{.NetworkSettings.SandboxKey}}" $NAMESPACE 2>/dev/null)
fi

if [[ ! -f $NAMESPACE_FILE ]]; then
    echo "Cannot open network namespace '$NAMESPACE': No such file or directory"
    exit 1
fi

shift

if [[ $# -lt 1 ]]; then
    echo "No command specified"
    exit 1
fi

nsenter --net=${NAMESPACE_FILE} $@
Description ⚠️:
(1) The script enters the network namespace of a container, specified by container id, name, or namespace name, and executes the given shell command there.
(2) If no parameter is specified, it lists all network namespaces related to Docker containers.
The result of executing the script is as follows:
# sh docker_netns.sh                                   ==> list all network namespaces
1-ycqv46f5tl
8402c558c13c
ingress_sbox

# sh docker_netns.sh deploy_nginx_nginx_1 ip r         ==> view the ip routes of the container named deploy_nginx_nginx_1
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2

# sh docker_netns.sh 8402c558c13c ip r                 ==> view the ip routes of the network namespace 8402c558c13c
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2

The second script, find_links.sh:

#!/bin/bash
# Given an ifindex, find the Docker network namespace in which the corresponding
# virtual network device lives; with no argument, list all link devices of every namespace.

DOCKER_NETNS_SCRIPT=./docker_netns.sh
IFINDEX=$1

if [[ -z $IFINDEX ]]; then
    for namespace in $($DOCKER_NETNS_SCRIPT); do
        printf "\e[1;31m%s: \e[0m\n" $namespace
        $DOCKER_NETNS_SCRIPT $namespace ip -c -o link
        printf "\n"
    done
else
    for namespace in $($DOCKER_NETNS_SCRIPT); do
        if $DOCKER_NETNS_SCRIPT $namespace ip -c -o link | grep -Pq "^$IFINDEX: "; then
            printf "\e[1;31m%s: \e[0m\n" $namespace
            $DOCKER_NETNS_SCRIPT $namespace ip -c -o link | grep -P "^$IFINDEX: "
            printf "\n"
        fi
    done
fi
The script looks up the namespace where the virtual network device resides according to ifindex. The execution result of the script in different cases is as follows:
# sh find_links.sh                       ==> with no ifindex specified, list all link devices of every namespace
1-3gt8phomoc:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ip_vti0@NONE: mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1\    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: mtu 1450 qdisc noqueue state UP mode DEFAULT group default\    link/ether e6:c5:04:ad:7b:31 brd ff:ff:ff:ff:ff:ff
74: vxlan0: mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default\    link/ether e6:c5:04:ad:7b:31 brd ff:ff:ff:ff:ff:ff link-netnsid 0
76: veth0@if75: mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default\    link/ether e6:fa:db:53:40:fd brd ff:ff:ff:ff:ff:ff link-netnsid 1

ingress_sbox:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ip_vti0@NONE: mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1\    link/ipip 0.0.0.0 brd 0.0.0.0
75: eth0@if76: mtu 1450 qdisc noqueue state UP mode DEFAULT group default\    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
78: eth2@if79: mtu 1500 qdisc noqueue state UP mode DEFAULT group default\    link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid

# sh find_links.sh 76                    ==> specify ifindex=76
1-3gt8phomoc:
76: veth0@if75: mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default\    link/ether e6:fa:db:53:40:fd brd ff:ff:ff:ff:ff:ff link-netnsid 1

Practice - Internal Load Balancing

Deploy a service using the uber-svc network we created
docker service create --name uber-svc --network uber-svc -p 80:80 --replicas 2 nigelpoulton/tu-demo:v1
The two containers deployed are on the manager and worker nodes, respectively:
# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                      PORTS
pfnme5ytk59w        uber-svc            replicated          2/2                 nigelpoulton/tu-demo:v1    *:80->80/tcp

# docker service ps uber-svc
ID                  NAME                IMAGE                      NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
kh8zs9a2umwf        uber-svc.1          nigelpoulton/tu-demo:v1    mysql-nginx         Running             Running 57 seconds ago
31p0rgg1f59w        uber-svc.2          nigelpoulton/tu-demo:v1    MyTest03            Running             Running 49 seconds ago
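Each service is also given a virtual IP (VIP) on every network it is attached to; internal load balancing works by resolving the service name to this VIP. A quick way to see the VIPs (the IDs and addresses below are illustrative, not taken from this environment):
# docker service inspect --format '{{json .Endpoint.VirtualIPs}}' uber-svc
[{"NetworkID":"8lyfiluksqu0...","Addr":"10.255.0.4/16"},{"NetworkID":"kzxwwwtunpqe...","Addr":"10.0.0.2/24"}]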
Description ⚠️:
-p: you can also use --publish instead of -p; its purpose is to expose the service port inside the containers on the swarm nodes so that we can access the service from outside, as sketched below.
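The same publication can be written in the long form of --publish, which makes the mode explicit (ingress is the default publish mode); a sketch of the equivalent command:
# docker service create --name uber-svc --network uber-svc \
    --publish published=80,target=80,mode=ingress \
    --replicas 2 nigelpoulton/tu-demo:v1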
Generally speaking, an ordinary standalone container has only one NIC, attached to the docker0 network. When we publish a service in swarm, however, swarm does the following:
(1) Three NICs, eth0, eth2 and eth3, are added to the container (the names may differ on your system; here they match the ip addr output shown later). eth0 connects to the overlay network named ingress and carries cross-host traffic for the routing mesh; eth3 connects to the bridge network named docker_gwbridge, which lets the container reach the public network; eth2 connects to the uber-svc network we created, and is likewise used for access between containers, the difference being that this network also provides DNS-based service discovery.
(2) Every swarm node publishes the service outside the cluster through the load balancer of the ingress overlay network (the routing mesh), so the published port can be reached on any node.
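Because of this routing mesh, the published port answers on both nodes, no matter where the two tasks actually run. A quick check from any machine that can reach the nodes (assuming curl is installed):
# curl http://192.168.246.194:80
# curl http://192.168.246.195:80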
Check the network cards of the uber-svc.1 container and uber-svc network namespace
(1) check the uber-svc.1 container first.
# docker ps
CONTAINER ID        IMAGE                      COMMAND             CREATED              STATUS              PORTS               NAMES
a2a763734e42        nigelpoulton/tu-demo:v1    "python app.py"     About a minute ago   Up About a minute   80/tcp              uber-svc.1.kh8zs9a2umwf9cix381zr9x38
(2) Check the NICs in the uber-svc.1 container
# sh docker_netns.sh uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
54: eth0@if55: mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.5/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
56: eth3@if57: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet 172.19.0.3/16 brd 172.19.255.255 scope global eth3
       valid_lft forever preferred_lft forever
58: eth2@if59: mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.0.0.3/24 brd 10.0.0.255 scope global eth2
       valid_lft forever preferred_lft forever
Of course, you can also check it directly with the following command:
docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr
(3) View the network card of the uber-svc network namespace
# ip netns                                  ==> view the manager's network namespaces
d2feb68e3183 (id: 3)
1-kzxwwwtunp (id: 2)
lb_kzxwwwtun
1-8lyfiluksq (id: 0)
ingress_sbox (id: 1)

# docker network ls                         ==> view the manager's cluster networks
NETWORK ID          NAME                DRIVER              SCOPE
e01d59fe00e5        bridge              bridge              local
7c90d1bf0f62        docker_gwbridge     bridge              local
15426f623c37        host                host                local
8lyfiluksqu0        ingress             overlay             swarm
dd5d570ac60e        none                null                local
kzxwwwtunpqe        uber-svc            overlay             swarm

# sh docker_netns.sh 1-kzxwwwtunp ip addr   ==> view the NICs of the uber-svc network namespace
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: mtu 1450 qdisc noqueue state UP group default
    link/ether 3e:cb:12:d3:a3:cb brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global br0
       valid_lft forever preferred_lft forever
51: vxlan0: mtu 1450 qdisc noqueue master br0 state UNKNOWN group default
    link/ether e2:8e:35:4c:a3:7b brd ff:ff:ff:ff:ff:ff link-netnsid 0
53: veth0@if52: mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 3e:cb:12:d3:a3:cb brd ff:ff:ff:ff:ff:ff link-netnsid 1
59: veth2@if58: mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 9e:b4:8c:72:4e:74 brd ff:ff:ff:ff:ff:ff link-netnsid 2
Of course, you can also use the following command:
ip netns exec 1-kzxwwwtunp ip addr
# ip netns exec 1-kzxwwwtunp brctl show     ==> view the bridge interfaces of the uber-svc network namespace
bridge name     bridge id               STP enabled     interfaces
br0             8000.3ecb12d3a3cb       no              veth0
                                                        veth2
                                                        vxlan0
Description ⚠️:
With the command docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip addr, you can see that the container network on the manager node has four network cards, namely, lo, eth0, eth2 and eth3.
The veth peer of eth2 is veth2 in the uber-svc network namespace, and the veth peer of eth3 is vethef74971 on the host.
ip netns exec 1-kzxwwwtunp brctl show lists the bridge ports inside the uber-svc network namespace and shows that veth2 is attached to the br0 bridge.
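If brctl (from bridge-utils) is not installed, the same port list can be read with the iproute2 bridge command; a possible alternative:
# ip netns exec 1-kzxwwwtunp bridge link show
==> veth0, veth2 and vxlan0 should all be listed with "master br0"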
(4) View the vxlan-id of uber-svc network
# ip netns exec 1-kzxwwwtunp ip -o -c -d link show vxlan0
... vxlan id 4097 ...

(Figure: network connection diagram between the uber-svc network namespace and the service container)
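The same VNI can also be cross-checked from the Docker side: for swarm overlay networks the driver records it in the network options. The option key below is what recent Docker versions expose, so treat this as an assumption and fall back to plain docker network inspect if the template fails:
# docker network inspect -f '{{index .Options "com.docker.network.driver.overlay.vxlanid_list"}}' uber-svc
4097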
Get ingress Namespace Information
The main steps are as follows:
(1) obtain the network information of ingress
# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8lyfiluksqu0        ingress             overlay             swarm
(2) get the namespace information of ingress
# ip netns
1-8lyfiluksq (id: 0)
(3) get the ip information in the namespace of ingress
# sh docker_netns.sh 1-8lyfiluksq ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: br0: mtu 1450 qdisc noqueue state UP group default
    link/ether 6e:5c:bd:c0:95:ea brd ff:ff:ff:ff:ff:ff
    inet 10.255.0.1/16 brd 10.255.255.255 scope global br0
       valid_lft forever preferred_lft forever
45: vxlan0: mtu 1450 qdisc noqueue master br0 state UNKNOWN group default
    link/ether e6:f3:7a:00:85:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
47: veth0@if46: mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether fa:98:37:aa:83:2a brd ff:ff:ff:ff:ff:ff link-netnsid 1
55: veth2@if54: mtu 1450 qdisc noqueue master br0 state UP group default
    link/ether 6e:5c:bd:c0:95:ea brd ff:ff:ff:ff:ff:ff link-netnsid 2
(4) get the ID information of vxlan0 in the namespace of ingress
# sh docker_netns.sh 1-8lyfiluksq ip -d link show vxlan0
... vxlan id 4096 ...
(5) obtain the corresponding veth pair information in the namespace of ingress
# sh find_links.sh 46
ingress_sbox:
46: eth0@if47: mtu 1450 qdisc noqueue state UP mode DEFAULT group default\    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

(Figure: network connection diagram between the ingress network namespace and the service container)
Get ingress_sbox network namespace information
The main steps are as follows:
(1) obtain the ip information of ingress_sbox
# sh docker_netns.sh ingress_sbox ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ip_vti0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
46: eth0@if47: mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.255.0.2/16 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.255.0.4/32 brd 10.255.0.4 scope global eth0
       valid_lft forever preferred_lft forever
49: eth2@if50: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 172.19.0.2/16 brd 172.19.255.255 scope global eth2
       valid_lft forever preferred_lft forever
(2) obtain the veth pair interface information of ingress_sbox
# sh find_links.sh 47
1-8lyfiluksq:
47: veth0@if46: mtu 1450 qdisc noqueue master br0 state UP mode DEFAULT group default\    link/ether fa:98:37:aa:83:2a brd ff:ff:ff:ff:ff:ff link-netnsid 1
(3) obtain veth pair interface information on manager host
# ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens37: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:25:8b:ac brd ff:ff:ff:ff:ff:ff
3: docker0: mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:cf:31:ee:03 brd ff:ff:ff:ff:ff:ff
14: ip_vti0@NONE: mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
48: docker_gwbridge: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:9c:aa:15:e6 brd ff:ff:ff:ff:ff:ff
50: vetheaa661b@if49: mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
    link/ether 8a:3e:01:ab:db:75 brd ff:ff:ff:ff:ff:ff link-netnsid 1
57: vethef74971@if56: mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
    link/ether 82:5c:65:e1:9c:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 3

(Figure: network connection diagram between the ingress network namespace and the ingress_sbox network namespace)
Description: the situation on the swarm worker node follows the same basic idea as on the manager.
Overall network connection diagram of Swarm
Description ⚠️:
(1) You can see that ingress_sbox and the namespace created for the container both connect to the same ingress network.
(2) the network flow can be seen more intuitively by using docker exec [container ID/name] ip r, as follows:
# docker exec uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ip r
default via 172.19.0.1 dev eth3
10.0.0.0/24 dev eth2 proto kernel scope link src 10.0.0.3
10.255.0.0/16 dev eth0 proto kernel scope link src 10.255.0.5
172.19.0.0/16 dev eth3 proto kernel scope link src 172.19.0.3
This shows that the container's default gateway is 172.19.0.1 (docker_gwbridge), meaning that outbound traffic to the outside world leaves through eth3.
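As for the internal load balancing itself: the service VIP is implemented with an iptables mark plus IPVS rules programmed into the relevant network namespace. A possible way to peek at them, reusing the helper script from earlier; this assumes ipvsadm is installed on the host, and the exact namespace holding the rules can differ between Docker versions, so treat it as a sketch rather than a guaranteed recipe:
# sh docker_netns.sh uber-svc.1.kh8zs9a2umwf9cix381zr9x38 iptables -nvL -t mangle     ==> MARK rules matching the service VIP
# sh docker_netns.sh uber-svc.1.kh8zs9a2umwf9cix381zr9x38 ipvsadm -ln                 ==> IPVS entry for that mark, balancing to the task IPs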
Finally
There are still many aspects of Docker Swarm's underlying networking left to explore. This article is a basic summary of what I have recently learned about Docker networking; if there are any mistakes or omissions, please point them out, thank you!
Also: if any of the referenced documents are quoted improperly, please contact me and I will remove the content promptly.
Finally, thanks to open source, embrace open source!
Reference documentation
(1) Detailed explanation of LB and service discovery in Docker Swarm
(2) A long read: the implementation principles of several mainstream Docker networks
(3) Docker cross-host networking: overlay
(4) Docker cross-host networking: overlay (16)
(5) Docker overlay networks and VXLAN explained