

How to build overlay Network in Docker

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article explains how to build an overlay network in Docker. It walks through a practical example and then looks at how the technology works under the hood.

Build and test a Docker overlay network in swarm mode

In the following example, two Docker hosts are required, connected through a router across two separate Layer 2 networks (172.31.1.0/24 and 192.168.1.0/24):

1) Building a Swarm

The first thing to do is configure the two hosts as a two-node swarm: run docker swarm init on node1 to make it a manager, then run docker swarm join on node2 to join it as a worker.

Run the command on node1:

$ docker swarm init \
    --advertise-addr=172.31.1.5 \
    --listen-addr=172.31.1.5:2377
Swarm initialized: current node (1ex3...o3px) is now a manager.

Run the command on node2:

$ docker swarm join \
    --token SWMTKN-1-0hz2ec...2vye \
    172.31.1.5:2377
This node joined a swarm as a worker.

At this point, you have created a swarm that contains two nodes, node1 and node2.
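To confirm the swarm formed correctly, you can list its members from the manager. A minimal check (a sketch; it assumes the two-node swarm above is up and running):

```shell
# Run on node1; "docker node ls" only works on a manager node.
docker node ls
# node1 should appear with "Leader" in the MANAGER STATUS column,
# and node2 with an empty MANAGER STATUS (i.e. a worker).
```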

2) Create an overlay network

Create an overlay network called uber-net.

Run the following command on node1 (manager):

$ docker network create -d overlay uber-net

This creates a new overlay network that is available to all hosts in the swarm and whose control plane is encrypted with TLS. To also encrypt the data plane, add the -o encrypt option to the command.
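For reference, a variant of the same command with data-plane encryption enabled might look like this (a sketch; encryption adds per-packet overhead, which is why it is off by default):

```shell
# Create the overlay with the default TLS-encrypted control plane plus
# an encrypted data plane (-o encrypt); run on a manager node.
docker network create -d overlay -o encrypt uber-net

# Confirm the option was applied:
docker network inspect uber-net --format '{{.Options}}'
```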

You can use the docker network ls command to view the network on each node.

The uber-net network we created appears at the bottom of the list. The other networks were created when Docker was installed and when the swarm was initialized.

If you run docker network ls on node2, you will notice that the uber-net network is not listed. This is because a new network only becomes visible on a worker node once a running container attaches to it. This lazy approach improves scalability by reducing the amount of network state each node has to track.

3) Connect services to overlay networks

Now that we have an overlay network, let's create a new Docker service and connect it to that network. The service will run two replicas (containers), so that one can run on node1 and the other on node2. This will automatically extend the uber-net network to node2.

Run the following command on node1:

$ docker service create --name test \
    --network uber-net \
    --replicas 2 \
    ubuntu sleep infinity

This creates a service named test, connects it to the uber-net network, and starts two replicas (containers) from the specified image (ubuntu). Passing sleep infinity keeps the containers running so they do not exit immediately.

Since we are running two replicas (containers) and the swarm has two nodes, one replica will be scheduled on each node. This can be verified with docker service ps test.
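A quick way to check the placement (run on the manager; this sketch assumes the test service above was created):

```shell
# List the tasks (replicas) of the "test" service and where they run:
docker service ps test
# The NODE column should show one task on node1 and one on node2.

docker service ls   # REPLICAS should read 2/2 once both tasks are up
```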

When Swarm starts a container on an overlay network, it automatically extends the network to the nodes running the container. This means that the uber-net network is visible on node2.

4) Test overlay network

Now use ping to test the overlay network.

We now have two Docker hosts on separate underlay networks, both connected to a single overlay network, with one container attached to the overlay on each node. We can ping between the two containers to confirm connectivity.

To perform the test, we need the IP address of each container. Running docker network inspect uber-net shows the subnet assigned to the overlay.

The output shows that the subnet for uber-net is 10.0.0.0/24. Note that this does not match either of the underlying physical networks (172.31.1.0/24 and 192.168.1.0/24).
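The non-overlap between the overlay subnet and the underlay subnets can be sanity-checked with plain shell arithmetic. A small sketch using the /24 networks from this example:

```shell
#!/bin/sh
# Print the /24 network number of a dotted-quad IPv4 address.
net_of() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( ((a << 24) + (b << 16) + (c << 8) + d) >> 8 ))
}

# Compare the overlay network against each underlay network.
for underlay in 172.31.1.0 192.168.1.0; do
  if [ "$(net_of 10.0.0.0)" = "$(net_of "$underlay")" ]; then
    echo "10.0.0.0/24 overlaps $underlay/24"
  else
    echo "10.0.0.0/24 does not overlap $underlay/24"
  fi
done
```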

On both node1 and node2, look up the ID and the uber-net IP address of the local container.
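One way to do this (a sketch; the Go-template path below assumes the container is attached to uber-net as above):

```shell
# Find the local replica's container ID, then read its uber-net address.
CID=$(docker ps --filter name=test --quiet | head -n 1)
docker inspect --format \
  '{{with index .NetworkSettings.Networks "uber-net"}}{{.IPAddress}}{{end}}' \
  "$CID"
```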

In this example, the container on node2 was assigned the address 10.0.0.4 on uber-net.

The Layer 2 overlay network spans both hosts, and every container on it has an IP address on that network. This means the container on node1 can ping the container on node2 at 10.0.0.4, even though the two nodes sit on different Layer 2 underlay networks.

Log into the container on node1 and ping the remote container.

The ubuntu image does not include ping, so the iputils-ping package must be installed inside the container first.
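A sketch of the whole test, run from node1 (it assumes the remote replica's address is 10.0.0.4, as above):

```shell
# Exec into the local replica, install ping, then ping the remote replica.
CID=$(docker ps --filter name=test --quiet | head -n 1)
docker exec -it "$CID" bash -c \
  'apt-get update && apt-get install -y iputils-ping && ping -c 3 10.0.0.4'
```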

Using the overlay network, the container on node1 can ping the container on node2. You can also trace the route taken by the ping traffic from within the container.

So far, we have created an overlay network with a single command and attached containers to it. The containers are distributed across two hosts belonging to two different Layer 2 networks. Having found the containers' IP addresses, we verified that they can communicate directly over the overlay network.

How it works

Now we know how to build and use a container overlay network. Let's find out how it works.

1) Getting started with VXLAN

First, Docker overlay networks use VXLAN tunnels to create virtual Layer 2 overlay networks. Before we move on, here is a quick VXLAN primer.

At the highest level, VXLAN allows the creation of a virtual Layer 2 network on top of an existing Layer 3 infrastructure. The example we used earlier created a new Layer 2 network, 10.0.0.0/24, on top of a Layer 3 IP network connecting two Layer 2 networks: 172.31.1.0/24 and 192.168.1.0/24.

The beauty of VXLAN is that it is an encapsulation technology: to the existing routing and network infrastructure, VXLAN traffic looks like ordinary IP/UDP packets, which it can handle without any special support.

To create the virtual Layer 2 network, a VXLAN tunnel is created over the underlying Layer 3 IP infrastructure. You may have heard the term underlay network, which refers to this underlying Layer 3 infrastructure.

Each end of the VXLAN tunnel terminates in a VXLAN Tunnel Endpoint (VTEP). The VTEP performs encapsulation and decapsulation, along with the other functions the tunnel requires.

2) Walking through a two-container example

In this example, we have two hosts connected via an IP network. Each host runs a single container, and we create a single VXLAN network and have containers connected to it.

To accomplish this, a new network sandbox is created on each host. As described in the previous section, a sandbox is like a container, but instead of running an application it runs an isolated network stack.

A virtual switch (also called a virtual bridge) named Br0 is created inside the sandbox, along with a VTEP. One end of the VTEP plugs into the Br0 virtual switch and the other end plugs into the host network stack.

The end in the host network stack gets an IP address from the underlay network the host is connected to and binds to UDP port 4789. The two VTEPs, one on each host, create the overlay through a VXLAN tunnel between them.

This is the basis of the VXLAN overlay network.

Each container gets its own virtual Ethernet (veth) adapter, which plugs into the local Br0 virtual switch.

3) Examples of communication

Now that we've seen the main infrastructure elements, let's look at how two containers communicate.

Let's call the container on node1 C1 and the container on node2 C2, assuming that C1 wants to ping C2.

C1 creates a ping request with the destination IP address set to 10.0.0.4, the address of C2. The traffic is sent over the veth interface connected to the Br0 virtual switch. The virtual switch does not know where to send the packet, because it has no entry in its MAC address table corresponding to the destination. It therefore floods the frame out of all ports. The VTEP interface connected to Br0 knows how to reach C2, so it responds with its own MAC address; this is a proxy ARP reply. The Br0 switch thus learns how to forward the traffic, updating its tables to map 10.0.0.4 to the MAC address of the local VTEP.

Now that the Br0 virtual switch has learned how to forward traffic to C2, all future packets sent to C2 will be forwarded directly to the VTEP interface. The VTEP interface knows about C2 because all newly launched containers send their network details to other nodes in the same Swarm cluster using the network's built-in Gossip protocol.

The switch forwards the packet to the VTEP interface, and the VTEP encapsulates the frame so that it can be transmitted on the underlay network. Specifically, encapsulation adds a VXLAN header to the Ethernet frame.

The VXLAN header contains the VXLAN Network ID (VNID), which identifies the overlay network the frame belongs to, much as a VLAN ID identifies a VLAN. Each overlay network has its own VNID, so received packets can be decapsulated and forwarded onto the correct network.

When encapsulating, the frame is placed inside a UDP packet whose destination IP field is set to the IP address of node2's VTEP and whose destination UDP port is 4789. This encapsulation ensures that the underlay network can carry the data even though it knows nothing about VXLAN.
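The fixed headers added by this encapsulation are why overlay interfaces typically use a reduced MTU: on a standard 1500-byte underlay, Docker defaults the overlay MTU to 1450. A quick check of the arithmetic (header sizes per RFC 7348, assuming an IPv4 underlay):

```shell
# Bytes wrapped around the inner (container-to-container) IP packet:
INNER_ETH=14   # the encapsulated frame's own Ethernet header
VXLAN_HDR=8    # VXLAN header carrying the 24-bit VNID
OUTER_UDP=8    # outer UDP header (destination port 4789)
OUTER_IP=20    # outer IPv4 header
OVERHEAD=$((INNER_ETH + VXLAN_HDR + OUTER_UDP + OUTER_IP))
echo "encapsulation overhead: ${OVERHEAD} bytes"    # 50 bytes
echo "usable overlay MTU: $((1500 - OVERHEAD))"     # 1450 bytes
```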

When the packet arrives at node2, the kernel sees that it is addressed to UDP port 4789 and knows that a VTEP interface is bound to that socket. It hands the packet to the VTEP, which reads the VNID, decapsulates the inner frame, and delivers it to the local Br0 switch associated with that VNID. From there, the frame is forwarded to container C2.

Docker also supports Layer 3 routing within the same overlay network. For example, you can create an overlay network with two subnets and Docker will take care of routing between them: docker network create -d overlay --subnet=10.1.1.0/24 --subnet=11.1.1.0/24 prod-net. This command creates two virtual switches inside the sandbox, with routing between them enabled by default.

That covers how to build an overlay network in Docker. Hopefully some of these points will prove useful in your day-to-day work.
