Implementation method of Docker Cross-host Network


This article explains in detail how Docker cross-host networking is implemented. The editor finds it very practical and shares it here for your reference; I hope you get something out of it after reading.

1. Docker cross-host communication

Docker cross-host network solutions include:

Docker native: overlay and macvlan.

Third-party solutions: commonly used ones are flannel, weave, and calico.

Docker integrates all of the above solutions through libnetwork and the CNM.

Libnetwork is docker's container network library, and its core is the Container Network Model (CNM) it defines. This model abstracts container networking and is composed of the following three types of components:

1.1 Sandbox

A Sandbox is the container's network stack, containing the container's interfaces, routing table, and DNS settings. The Linux network namespace is the standard implementation of a Sandbox. A Sandbox can contain Endpoints from different Networks. In other words, a Sandbox isolates one container from another through namespaces; each container contains one Sandbox, and each Sandbox can have multiple Endpoints belonging to different networks.

1.2 Endpoint

The purpose of an Endpoint is to connect a Sandbox to a Network. A typical implementation of an Endpoint is a veth pair. An Endpoint can belong to only one Network and only one Sandbox.

1.3 Network

A Network contains a set of Endpoints, and Endpoints on the same Network can communicate directly. A Network can be implemented as a Linux bridge, a VLAN, and so on.
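To make the three components concrete, here is a small hedged sketch on a single host (the container name c1 is only an illustration, not from the article): inspect the default bridge network to see its Endpoints, and read the container's SandboxKey to locate its network namespace.

docker run -itd --name c1 busybox // attach a container to the default bridge network
docker network inspect bridge // the "Containers" section lists each attached Endpoint with its MAC and IP
docker inspect -f '{{.NetworkSettings.SandboxKey}}' c1 // the path of the container's network namespace, i.e. its Sandbox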

Docker network architecture

Image from the CLOUDMAN blog.

Libnetwork contains the native drivers shown above as well as third-party drivers.

The none and bridge networks were introduced earlier. bridge is a Linux bridge, a virtual switch, to which the container's sandbox is connected through a veth pair.

2. Docker overlay network

2.1 Start the key-value database Consul

A Docker overlay network needs a key-value database to store network state information, including Networks, Endpoints, IPs, and so on. Consul, Etcd, and ZooKeeper are all key-value stores supported by Docker.

Consul is a key-value database that can be used to store the system's state information. We do not need to write any code here; we only need to install consul, and docker will store the state in it automatically. The easiest way to install consul is to run the consul container directly with docker.

docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap

After it starts, you can view the consul service through port 8500 on the host's IP.
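As a quick hedged check from the command line (assuming consul was started on this host with the port mapping above), consul's HTTP API can be queried directly:

curl http://127.0.0.1:8500/v1/catalog/nodes // returns the consul cluster members as JSON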

To allow consul to discover each docker host node, each node needs to be configured. Modify the docker daemon configuration file /etc/systemd/system/docker.service on each node and append the following to the end of the ExecStart line:

--cluster-store=consul://<consul_ip>:8500 --cluster-advertise=ens3:2376

Here <consul_ip> is the IP of the node running the consul container, and ens3 is the network interface corresponding to the current node's IP address; you can also enter the IP address directly.
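Put together, a minimal sketch of what the full ExecStart line and the reload steps look like (this uses the 192.168.1.11 consul host and the ens33 interface from the experiment later in this article; adjust both to your own environment):

ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.11:8500 --cluster-advertise=ens33:2376
systemctl daemon-reload // re-read the changed unit file
systemctl restart docker // restart the daemon so it registers with consul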

The above is the stand-alone installation of consul. Cluster mode is recommended; for cluster-mode installation, see https://www.consul.io/intro/getting-started/join.html.

2.2 Create an overlay network

Creating an overlay network is basically the same as creating a bridge network, except that the -d parameter is set to overlay. As follows:

docker network create -d overlay ov_net2

docker network create -d overlay ov_net3 --subnet 172.19.0.0/24 --gateway 172.19.0.1

The creation only needs to be done on one node; the other nodes automatically recognize the network, precisely because of consul's service discovery capability.
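A quick hedged check of this: on any other host that has already registered with the same consul instance, the network appears automatically.

docker network ls // ov_net2 shows up with driver overlay and scope global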

Later, when you create a container, you only need to specify the --network parameter as ov_net2:

docker run --network ov_net2 busybox

Containers created on the same overlay network can then access each other directly, even when they run on different hosts.

2.3 Principles of the overlay network

After creating the overlay network, docker network ls shows not only the ov_net2 we created (type overlay, scope global) but also a network named docker_gwbridge (type bridge, scope local). This is in fact how the overlay network works.

As brctl show reveals, every time a container on an overlay network is created, a vethxxx interface is attached under docker_gwbridge, which shows that overlay containers reach the outside world through this bridge.

To put it simply, traffic leaving the overlay network still goes out through the bridge network docker_gwbridge, but because of consul (which records the endpoint, sandbox, network and other information of the overlay network), docker knows that the network is of the overlay type, so containers on that overlay network on different hosts can reach each other; the actual exit, however, is still the docker_gwbridge bridge.
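To see this for yourself, a hedged sketch (the veth interface names differ on every host):

brctl show docker_gwbridge // one vethxxx interface per overlay container running on this host
docker network inspect docker_gwbridge // the overlay containers also appear here with a second, local address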


3. Port mapping: making containers accessible from the public network

[root@localhost ~] # ss -lnt // check the listening sockets (IP addresses and ports)

1) manually specify the port mapping relationship

[root@localhost ~] # docker pull nginx

[root@localhost ~] # docker pull busybox

[root@localhost ~] # docker run -itd nginx:latest // start an nginx container without any extra parameters
[root@localhost ~] # docker ps // view container information

[root@localhost ~] # docker inspect vigorous_shannon // view the container's details (including its IP)

[root@localhost ~] # curl 172.17.0.2

[root@localhost ~] # docker run -itd --name web1 -p 90:80 nginx:latest // start a container with a specified port mapping (host port 90 to container port 80)

Access it through the mapped port:

[root@localhost ~] # curl 192.168.1.11:90

2) randomly map the port to the container from the host.

[root@localhost ~] # docker run -itd --name web2 -p 80 nginx:latest // start a container with a random host port mapped to container port 80
[root@localhost ~] # docker ps

Access it through the mapped port:

[root@localhost ~] # curl 192.168.1.11:32768

3) randomly map the port from the host to the container, and all exposed ports in the container will be mapped one by one.

[root@localhost ~] # docker run -itd --name web3 -P nginx:latest

// randomly map ports from the host to the container; every port exposed by the container is mapped one by one

[root@localhost ~] # docker ps

Access it through the mapped port:

[root@localhost ~] # curl 192.168.1.11:32769
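To confirm which host ports were actually assigned, a hedged example using the standard docker port command (web1, web2, and web3 are the containers created above; the random ports will vary):

[root@localhost ~] # docker port web1 // 80/tcp -> 0.0.0.0:90
[root@localhost ~] # docker port web2 // 80/tcp -> 0.0.0.0:32768
[root@localhost ~] # docker port web3 // lists every exposed port and the host port randomly assigned to it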

4. Join container: container mode (shared network protocol stack)

This mode shares one network protocol stack between containers.

[root@localhost ~] # docker run -itd --name web5 busybox:latest // start a container based on busybox
[root@localhost ~] # docker inspect web5

[root@localhost ~] # docker run -itd --name web6 --network container:web5 busybox:latest // start another container that shares web5's network stack
[root@localhost ~] # docker exec -it web6 /bin/sh // enter web6
/ # ip a

/ # echo 123456 > /tmp/index.html
/ # httpd -h /tmp/ // simulate enabling an httpd service
[root@localhost ~] # docker exec -it web5 /bin/sh // enter web5
/ # ip a

/ # wget -O - -q 127.0.0.1 // at this point, you will find that the two containers have the same IP address

Scenarios in which this method is used:

Because of the particular nature of this mode, it is generally used when containers run the same service and that service needs monitoring, log collection, or network monitoring; in those cases you can choose this kind of network.
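As an illustrative sketch of such a sidecar setup (the container names app and monitor are hypothetical, not from the article): a monitoring container joins the application container's network namespace and can observe its sockets directly.

[root@localhost ~] # docker run -itd --name app nginx:latest // the service to observe
[root@localhost ~] # docker run -itd --name monitor --network container:app busybox:latest // joins app's network stack
[root@localhost ~] # docker exec monitor netstat -tln // shows the ports nginx is listening on, seen from inside the shared stack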

5. Docker's cross-host network solution

The overlay solution

Experimental environment:

docker01: 192.168.1.11
docker02: 192.168.1.12
docker03: 192.168.1.20

Firewall and selinux security issues are not considered for the time being.

Turn off the firewall and selinux on all three docker hosts, and change their host names.

[root@localhost ~] # systemctl stop firewalld // turn off the firewall
[root@localhost ~] # setenforce 0 // turn off selinux
[root@localhost ~] # hostnamectl set-hostname docker01 // change the host name (docker02 and docker03 on the other hosts)
[root@localhost ~] # su - // log in again as root so the new host name takes effect

Operations on docker01

[root@docker01 ~] # docker pull progrium/consul
[root@docker01 ~] # docker images

Run the consul service

[root@docker01 ~] # docker run -d -p 8500:8500 -h consul --name consul --restart always progrium/consul -server -bootstrap
// -h: host name; -server -bootstrap: indicates that this node is the server
// this runs a container based on progrium/consul (restart docker if an error is reported)

After the container is up, we can access the consul service through a browser to verify that it is working properly: visit the docker host's IP plus the mapped port.

[root@docker01 ~] # docker inspect consul // view the container's details (including its IP)
[root@docker01 ~] # curl 172.17.0.7

Browser view

Modify docker configuration files for docker02 and docker03

[root@docker02 ~] # vim /usr/lib/systemd/system/docker.service
// modify line 13 (the ExecStart line) to:
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.11:8500 --cluster-advertise=ens33:2376
// this registers the local /var/run/docker.sock with the consul service at 192.168.1.11:8500, advertised via ens33:2376
[root@docker02 ~] # systemctl daemon-reload
[root@docker02 ~] # systemctl restart docker

Go back to the consul service interface in the browser and open KEY/VALUE > DOCKER > NODES.

You can see the nodes docker02 and docker03
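If you prefer the command line to the browser, a hedged equivalent using consul's KV HTTP API (the key path mirrors what the browser shows; the exact entries depend on your cluster-advertise settings):

[root@docker01 ~] # curl 'http://192.168.1.11:8500/v1/kv/docker/nodes?keys' // lists the docker nodes registered in the KV store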

Customize a network on docker02

[root@docker02 ~] # docker network create -d overlay ov_net1 // create an overlay network
[root@docker02 ~] # docker network ls // view the networks

If you look at the network on docker03, you can see that the ov_net1 network has also been generated.

[root@docker03 ~] # docker network ls

Check the browser.

Modify the docker configuration file of docker01 and look at the network on docker01. You can see that the ov_net1 network has also been generated.

[root@docker01 ~] # vim /usr/lib/systemd/system/docker.service
// modify line 13 (the ExecStart line) in the same way:
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.1.11:8500 --cluster-advertise=ens33:2376
[root@docker01 ~] # systemctl daemon-reload
[root@docker01 ~] # systemctl restart docker // restart docker
[root@docker01 ~] # docker network ls // view the networks

Run a container based on the ov_net1 network on each of the three docker hosts and test whether the three can ping each other.

[root@docker01 ~] # docker run -itd --name T1 --network ov_net1 busybox
[root@docker02 ~] # docker run -itd --name T2 --network ov_net1 busybox
[root@docker03 ~] # docker run -itd --name T3 --network ov_net1 busybox
[root@docker01 ~] # docker exec -it T1 /bin/sh
[root@docker02 ~] # docker exec -it T2 /bin/sh
[root@docker03 ~] # docker exec -it T3 /bin/sh

/ # ping 10.0.0.2

/ # ping 10.0.0.3

/ # ping 10.0.0.4

For the network created on docker02, you can see that its SCOPE is global, which means that every docker daemon registered with the consul service can see our custom network.

Similarly, a container created on this network will have two network cards.

By default, the network segment of this network card is 10.0.0.0. If you want docker01 to see this network, you only need to add the corresponding configuration to docker01's docker configuration file.

By the same token, because it is a custom network and has the characteristics of custom networks, containers can communicate with each other directly by container name. Of course, you can also specify the subnet when creating the network, and containers using that network can then be given a specific IP address.
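A hedged sketch of both points (T1 and T2 are the containers started above; the network ov_net4, its subnet, and the fixed IP are illustrative values, since assigning a specific IP requires a network created with an explicit --subnet):

/ # ping T2 // from inside T1: containers on the same overlay network resolve each other by name via docker's embedded DNS
[root@docker02 ~] # docker network create -d overlay --subnet 172.20.0.0/24 ov_net4 // a hypothetical network with an explicit subnet
[root@docker02 ~] # docker run -itd --name T4 --network ov_net4 --ip 172.20.0.100 busybox // give the container a fixed IP on that subnet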

This is the end of this article on "the implementation of Docker cross-host networking". I hope the above content has been of some help and that you have learned something new. If you think the article is good, please share it for more people to see.
