This article explains how to implement Docker cross-server communication with an overlay network. The approach described here is simple, fast, and practical, so let's walk through it.
Scenario
The company's microservices are about to go live, and they are all deployed in Docker containers on the same host. The IP and port that each service registers with Nacos are the container's private-network IP and the port defined in its Dockerfile. At first glance there is no problem: every service can be called through the gateway. But there is a major premise:
All service containers must be deployed on the same host!
When the instances are not on the same host, for example the gateway runs on server A and service a runs on server B, service a still registers its container-internal IP with Nacos (or any other registry). When an external request arrives, the gateway looks up service a in the Nacos service list, finds that private-network IP, tries to call it, and the call fails.
PS: how could that internal network possibly be reachable from another host?
Task
Enable microservice containers on different servers to call each other.
Ideas
Since what gets reported is the private-network IP, make the service report the host's IP and port instead
Use Docker's host network mode
Modify the deployment script so that, when the container is deployed via shell, it obtains the host IP and the mapped port and passes them in
Make the Docker networks of the servers interconnect
Analysis
The ideas above are analyzed one by one.
1. Searching the official documentation and GitHub turned up two related fixes:
Hardcode the host IP and port in the configuration file: this seems to work, but the service can no longer scale horizontally. Barely usable.
Pin the NIC so that the wrong IP/port is not reported in a multi-NIC environment: no help here. ifconfig inside the container shows only two interfaces, eth0 and lo, and the IP on eth0 is still the container's internal IP. Useless.
2. Using Docker's host network mode, the reported IP is indeed the host IP this time, but the port number is wrong. If the port to be exposed is passed in from the shell as a Java parameter, the approach is theoretically feasible; the only drawback is that docker ps no longer shows the port, so you have to use docker inspect. Usable; a minimal sketch of this approach follows this list.
3. The mapped port can be obtained, but the host NIC name differs from machine to machine, so hardcoding it is not flexible. What if some hosts use eth0 and others use ens33? There are too many unpredictable cases. Maybe usable.
4. Share the network through a mature Docker container networking solution; there is some performance loss, but it works. Fully usable.
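For reference, here is a minimal sketch of option 2: host networking plus the registration port passed in at deploy time. The image name my-service:latest, the port value, and the property spring.cloud.nacos.discovery.port (a Spring Cloud Alibaba Nacos client setting) are placeholders for illustration, not part of this setup:
# hypothetical deployment in host network mode; the JVM picks up JAVA_TOOL_OPTIONS automatically
$ PORT=8081
$ docker run -d --network host --name service-a \
    -e JAVA_TOOL_OPTIONS="-Dserver.port=$PORT -Dspring.cloud.nacos.discovery.port=$PORT" \
    my-service:latest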
Concept and selection
The most reliable approach is Docker network sharing. With the help of a search engine, I decided to use an overlay network to achieve this.
Here is a brief description of Overlay:
When containers on two hosts communicate in overlay network mode, the hosts could of course also use host networking and talk directly over their physical IP addresses. With an overlay network, Docker instead gives each container a virtual IP address such as 10.0.2.3. In this model there is an address that acts like a service gateway: a packet is first forwarded to the physical server's address and then routed and switched to the IP address of the container on the other server.
To implement an overlay network, a K-V (key-value) store is needed to hold the network state, including Network, Endpoint, IP, and so on. Consul, Etcd, and ZooKeeper are all K-V stores supported by Docker.
We use Consul here. Compared with the other K-V stores, Consul provides an interface that is easy to manage, so we use it to implement the overlay network.
Each server's Docker daemon registers its own IP with Consul, sharing the Docker internal network. The shared network uses overlay mode, and only containers that join the same overlay network within these registered Docker environments can communicate with each other.
PS: after the setup is complete, containers on different servers that do not join the overlay network still cannot ping each other.
A quick trial run
A single-node Consul implements the overlay network, using the official Docker image.
Environment description
Server OS | Host IP | Docker version | NIC name
Ubuntu Server 18.04 LTS | 192.168.87.133 | 18.09.6 | ens33
Ubuntu Server 18.04 LTS | 192.168.87.139 | 18.09.7 | ens33
The Consul version used is 1.5.2, which at the time was the tag on Docker Hub with the fewest reported vulnerabilities.
This test environment applies to systemd-managed Linux distributions.
I did not use the unofficial progrium/consul image, mainly because it is too old: it was last updated about four years ago, so any vulnerability would not be fixed in time. So I went with the official image and its quirks instead.
Points to note
Hosts running Docker must not share the same hostname. You can change a hostname with:
$ sudo hostnamectl set-hostname your-new-hostname
If two hosts have the same hostname, their Docker daemons will not be able to communicate with each other.
Hands-on
Consul will be started from an image on one of the servers (the 133 machine), so first configure each Docker daemon's startup parameters to point to it.
Modify the Docker daemon configuration on the 133 and 139 servers respectively.
$ ifconfig
# irrelevant NICs removed; the NIC we care about is ens33
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.87.133  netmask 255.255.255.0  broadcast 192.168.87.255
        inet6 fe80::20c:29ff:fe02:e00a  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:02:e0:0a  txqueuelen 1000  (Ethernet)
        RX packets 156739  bytes 233182466 (233.1 MB)
        RX errors 0  dropped 0  frame 0
        TX packets 45173  bytes 2809606 (2.8 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
$ vim /etc/docker/daemon.json
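The content of the file is roughly as follows; it is built from the two keys explained below and the equivalent command-line flags shown later, so adjust the Consul address and NIC name to your own environment:
{
  "cluster-store": "consul://192.168.87.133:8500",
  "cluster-advertise": "ens33:2375"
}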
Save and exit.
cluster-store: the address of the Consul leader; it can be written directly as shown. If you use other software, pay attention to the protocol prefix.
cluster-advertise: the NIC and port to advertise, that is, the interface:port (or IP:PORT) on which this daemon receives cluster messages.
An alternative is to modify docker.service directly, as follows:
$ cd /etc/systemd/system/multi-user.target.wants
$ sudo vim docker.service
Find the line containing ExecStart= and append the following to the end of that line:
--cluster-store=consul://192.168.87.133:8500 --cluster-advertise=ens33:2375
The effect is as follows:
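A sketch of what the modified line might look like; the options before the two appended flags depend on your installation, so treat this as illustrative rather than exact:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --cluster-store=consul://192.168.87.133:8500 --cluster-advertise=ens33:2375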
This has the same effect as the daemon.json method above.
Then restart the Docker service with the command below. Repeat the same steps on the other server, paying attention to its NIC name.
$ sudo systemctl daemon-reload && sudo systemctl restart docker
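Optionally, verify that the daemon picked up the settings; on Docker versions of this era, docker info should list the configured cluster store and advertise address, so something like the following can serve as a quick check:
$ docker info | grep -i cluster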
Start the Consul service on the 133 server
$ docker run -d --network host -h consul --name=consul --restart=always -e CONSUL_BIND_INTERFACE=ens33 consul:1.5.2
Host mode is used to avoid missing any port mappings, and it lets Consul bind directly to the external NIC via CONSUL_BIND_INTERFACE. A non-host alternative is:
$ docker run -di -h consul -p 8500:8500 --name=consul consul:1.5.2
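As an optional sanity check, Consul's standard HTTP status API can confirm the service is up and has elected a leader:
$ curl http://192.168.87.133:8500/v1/status/leader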
Create a Docker overlay shared network
$ docker network create -d overlay my_overlay
This differs from ordinary network creation in that the overlay driver is specified; -d can also be written as --driver.
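Optionally, confirm the network exists and look at the subnet it was assigned:
$ docker network ls --filter driver=overlay
$ docker network inspect my_overlay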
Visit the Consul web UI; mine is at 192.168.87.133:8500.
Our configuration lives under Key/Value.
Click docker -> nodes.
The two nodes that appear are the entries registered by the two Docker daemons.
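The same data can also be read through Consul's KV HTTP API; the docker/nodes prefix is the one shown in the UI above:
$ curl 'http://192.168.87.133:8500/v1/kv/docker/nodes?keys'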
Test
Create a new CentOS container on each of the two servers, using the network we just created.
133 server
$ docker run -di --network my_overlay --name mycentos1 centos:7
139 server
$ docker run -di --network my_overlay --name mycentos2 centos:7
--net can also be written as --network, and the flag is followed by a space rather than an = sign.
View the IP of the mycentos1 container on the 133 server
$ docker inspect -f "{{.NetworkSettings.Networks.my_overlay.IPAddress}}" mycentos1
10.0.1.2
View the IP of the mycentos2 container on the 139 server
$ docker inspect -f "{{.NetworkSettings.Networks.my_overlay.IPAddress}}" mycentos2
10.0.1.3
From the 133 server, ping the overlay IP of mycentos2 on the 139 server.
The reverse ping works as well, but these addresses are not meant to be reached from outside; they are only reachable between containers on the same overlay network. Try it yourself below if you don't believe it.
133 server
$ docker exec -it mycentos1 bash
# ping 10.0.1.3
The ping succeeds with no packet loss, and the reverse direction works the same way; for brevity that experiment is not shown here.
At this point you should have a deeper understanding of how to implement Docker cross-server communication with an overlay network. Go ahead and try it in practice.