This article will cover some of the concepts of Docker networking so that you can take full advantage of these capabilities when designing and deploying applications.
First of all, Docker's network subsystem is pluggable and driver-based. Several network drivers are available by default, including bridge, host, overlay, macvlan, and none.
Let's start with the bridge network model.
Bridge is Docker's default network mode; if you do not specify a driver, this is the type of network you are creating. Bridge mode assigns each container its own network namespace, IP address, and so on, and connects the container's network interface to the docker0 bridge.
Features: by default, all containers on the same host fall into the same IP address range (default: 172.17.0.0/16), can communicate with each other, and can reach the external network (provided the host itself can).
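You can confirm the default bridge's address range on your own host with docker network inspect (a quick check; the subnet shown below is the usual default and may differ on your system):
# docker network inspect bridge | grep Subnet
"Subnet": "172.17.0.0/16"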
A bridge is a link-layer device that forwards traffic between network segments. It can be a hardware device or, as in Docker's case, software running in the host kernel.
Docker uses a software bridge that allows communication between containers connected to the same bridge while providing isolation from containers that are not connected to the bridge network. Docker bridges automatically set relevant rules (using iptables), and containers on different bridges cannot communicate with each other directly.
Bridges are suitable for containers under the same Docker daemon. For communication between containers running under different Docker daemons, routes can be added on the host or overlay networks can be used.
When the Docker daemon starts, it automatically creates a default bridge network (docker0) and sets up the corresponding access rules with iptables. You can also create user-defined bridge networks, which are superior to the default bridge (docker0) in the following ways:
1. User-defined bridges provide better isolation and interoperability between containerized applications.
Containers attached to the same bridge belong to the same network, so all of their ports are reachable from one another while none are exposed to the outside world unless explicitly published. This lets containerized applications communicate with each other easily and improves security.
Imagine an application architecture with a web front end, a back-end application, and a database (user → web → back-end application → database). The external network only needs to reach the web front end (say, port 80), and only the back-end application needs to reach the database. With a user-defined bridge you only publish the web port to the outside world; the database does not open any port at all, and the front end and back end reach the database through the user-defined bridge.
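A minimal sketch of that layout follows; the image names web-app, api-app, and db-app are placeholders, not real images. Only the web container publishes a port to the host, while everything else talks over the user-defined bridge:
# docker network create my-app-net
# docker run -d --network=my-app-net --name web -p 80:80 web-app
# docker run -d --network=my-app-net --name api api-app
# docker run -d --network=my-app-net --name db db-app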
2. The user-defined bridge provides automatic DNS resolution between containers.
Containers created on the default bridge can only reach each other by IP address, unless you use the --link option, and --link has to be set up in both directions, which becomes cumbersome once more than two containers need to communicate. On a user-defined bridge network, containers can resolve each other by container name or alias.
Recall that with physical hosts or VM hosts we usually put a host name (from hosts) or an IP into the application's configuration file; in containers on a custom network, we can simply write the container name instead and stop worrying about host names and IPs.
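For example, an application could be pointed at the database purely by container name; the myapp image and the DB_HOST variable below are hypothetical, just to illustrate the idea:
# docker run -d --network=my-app-net --name app -e DB_HOST=db myapp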
3. Containers on user-defined bridges can be disconnected from or connected to different (user-defined) networks at any time.
During a container's lifecycle, you can dynamically switch its network connections. For example, if you have created the custom bridge networks my-net01 and my-net02, containers can be moved between them on the fly. Even a container created on the default bridge can be switched to a custom bridge without deleting and recreating it.
Note: in actual verification, the behavior appears to differ somewhat from the official documentation.
4. Each user-defined network creates a configurable bridge.
If all containers use the default bridge network, they all share the same settings, such as MTU and iptables rules, even though those settings can be modified. In addition, reconfiguring the default bridge requires restarting the Docker daemon.
User-defined bridge networks, by contrast, are created and configured with docker network create. If different groups of applications have different network requirements, you can configure each user-defined bridge individually at creation time, as sketched below.
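For instance, a per-network MTU can be set at creation time through the bridge driver's option (the value 1400 here is just an example):
# docker network create -o com.docker.network.driver.mtu=1400 my-low-mtu-net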
5. Environment variables can be shared between containers using the default bridge.
Originally, the only way to share environment variables between two containers was the --link option. That kind of variable sharing is not possible on a user-defined network, but there are better ways to share environment variables:
(1) Multiple containers can mount a Docker volume containing files or directories with the shared information (see the sketch after this list).
(2) Multiple containers can be started together with docker compose, with shared variables defined in the compose file.
(3) Swarm services can be used instead of standalone containers to share secrets and configs.
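A minimal sketch of option (1), with illustrative names: both containers mount the same named volume and can read whatever configuration is written into it:
# docker volume create shared-config
# docker run -idt --name app01 -v shared-config:/config busybox /bin/sh
# docker run -idt --name app02 -v shared-config:/config busybox /bin/sh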
Manage custom bridge networks
1. Use docker network ls to view the networks available by default
# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
e22a6ab223fe   bridge    bridge    local
15b417347346   host      host      local
5926c0bd11d0   none      null      local
2. Create a custom bridge network using docker network create
# docker network create my-web-net01
# docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
e22a6ab223fe   bridge         bridge    local
15b417347346   host           host      local
899362727b48   my-web-net01   bridge    local
5926c0bd11d0   none           null      local
As you can see, we did not need to specify --driver=bridge, because bridge is the default mode. You can also specify the subnet, IP range, gateway, and other options of a custom bridge, for example:
# docker network create --driver=bridge --subnet=172.23.10.0/24 my-web-net02
Or, with more fine-grained options:
# docker network create \
--driver=bridge \
--subnet=172.24.0.0/16 \
--ip-range=172.24.10.0/24 \
--gateway=172.24.10.254 \
my-web-net03
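You can then verify that the subnet, IP range, and gateway were applied:
# docker network inspect my-web-net03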
Delete a custom bridge network
# docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
e22a6ab223fe   bridge         bridge    local
15b417347346   host           host      local
899362727b48   my-web-net01   bridge    local
49352768c5dd   my-web-net02   bridge    local
7e29b5afd1be   my-web-net03   bridge    local
5926c0bd11d0   none           null      local
# docker network rm my-web-net01
Or delete in bulk:
# docker network rm $(docker network ls -f name=my-web -q) # lists and deletes the networks whose names contain my-web
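Alternatively, docker network prune removes every network not currently used by at least one container (it asks for confirmation first):
# docker network prune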
Containers in the same custom bridge access each other through container names
1. Create a custom network my-web-net01
# docker network create my-web-net01
2. Create containers t01 and t02 on the my-web-net01 network
# docker run -idt --network=my-web-net01 --name t01 busybox /bin/sh
# docker run -idt --network=my-web-net01 --name t02 busybox /bin/sh
3. From t01 and t02, ping each other by container name to verify connectivity.
# docker exec -it t01 ping t02
# docker exec -it t02 ping t01
Conclusion: containers on the same custom network can reach each other by container name.
Containers in the default bridge cannot access each other through container names
1. Create containers t03 and t04 on the default bridge
# docker run -idt --name=t03 busybox /bin/sh
# docker run -idt --name=t04 busybox /bin/sh
2. From t03 and t04, ping each other by container name to verify connectivity.
# docker exec -it t03 ping t04
ping: bad address 't04'
# docker exec -it t04 ping t03
ping: bad address 't03'
3. From t03 and t04, ping each other by IP address to verify connectivity.
# docker exec -it t03 ifconfig
# docker exec -it t04 ifconfig
# docker exec -it t03 ping 172.17.0.6
# docker exec -it t04 ping 172.17.0.4
Conclusion: containers on the default bridge cannot reach each other by container name, but can reach each other by IP.
Containers in a custom bridge can dynamically switch network connections
1. Create a custom bridge my-web-net02
# docker network create my-web-net02
2. Connect container t02, which is on the my-web-net01 bridge, to the my-web-net02 network as well, then look at t02's network. As shown in figure 1.1, t02 has gained an extra network interface (eth2 in this run).
# docker network connect my-web-net02 t02
Figure 1.1
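In place of the figure, the interfaces can also be checked from the command line (interface names and addresses will vary):
# docker exec -it t02 ifconfig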
3. Disconnect t02 from the my-web-net01 network, then look at t02's network connections again. As shown in figure 1.2, only the eth2 interface remains.
# docker network disconnect my-web-net01 t02
4. Verify again whether t02 can still communicate with t01, as shown in figure 1.3.
Figure 1.3
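In place of the figure, the failed check would look roughly like this (busybox's ping reports a bad address when the name cannot be resolved):
# docker exec -it t02 ping t01
ping: bad address 't01'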
Conclusion: they cannot communicate, not even by IP, because t01 belongs to my-web-net01 while t02 now belongs to my-web-net02.
5. Add t02 back to the my-web-net01 network and verify connectivity again, as shown in figure 1.4.
# docker network connect my-web-net01 t02
Figure 1.4
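Again as a sketch in place of the figure, the same ping should now succeed:
# docker exec -it t02 ping t01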
Conclusion: t01 and t02 can communicate with each other again.
Add a container from the default bridge to a custom bridge and verify connectivity
1. Connect t03 to the custom bridge my-web-net01 and verify connectivity between t03 and both t01 and t04, as shown in figure 1.5.
Figure 1.5
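The commands behind this step would look roughly like the following (a sketch; t04's IP of 172.17.0.6 is taken from the earlier ifconfig step and will vary). t03 joins my-web-net01 while remaining on the default bridge, so it can reach t01 by name and t04 by IP:
# docker network connect my-web-net01 t03
# docker exec -it t03 ping t01
# docker exec -it t03 ping 172.17.0.6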
2. If you remove a container from a network and then rejoin it, the NIC name will change, as shown in figure 1.6.
Figure 1.6
Summary
Whether a container is on a custom bridge or the default bridge, you can dynamically switch its network connections. Note that the NIC name inside the container changes after such a switch (it is restored after the container restarts), which can cause problems for applications that bind to a specific NIC name, such as Alibaba's Tair cache.