What is the principle of Docker Network

This article introduces the principles behind Docker networking: how the Container Network Model (CNM), libnetwork, and drivers fit together, and how the main network types and network services that Docker provides work in practice.
Docker runs applications inside containers, and those applications need to communicate over many different networks. This means Docker needs strong networking capabilities. Fortunately, Docker has solutions for container-to-container networking as well as for connecting to existing networks and VLANs. The latter is important for containerized apps that need to communicate with external systems such as VMs and services running on physical machines.
Docker networking is based on an open-source pluggable architecture called the Container Network Model (CNM). libnetwork is Docker's real-world implementation of the CNM, and it provides all of Docker's core networking capabilities. Drivers plug into libnetwork to provide specific network topologies.
To create a smooth out-of-the-box experience, Docker ships with a set of native drivers that handle the most common networking needs, including single-host bridge networks, multi-host overlays, and the option to plug into existing VLANs. Ecosystem partners can extend this further with their own drivers.
Finally, libnetwork provides native service discovery and basic container load balancing solutions.
(1) Theory
At the highest level, Docker Network consists of three main components: Container Network Model (CNM), libnetwork, and Drivers.
CNM is a design specification. It outlines the basic building blocks of the Docker network.
libnetwork is the actual implementation of the CNM and is used by Docker. It is written in Go and implements the core components of the CNM.
Drivers extend the model by implementing specific network topologies, such as overlay networks based on VXLAN.
1、Container Network Model (CNM)
Everything starts with design.
The design guide for Docker networking is the CNM. It outlines the basic building blocks of a Docker network and can be read in its entirety at https://github.com/docker/libnetwork/blob/master/docs/design.md. At a high level, it defines three building blocks: Sandboxes, Endpoints, and Networks.
A Sandbox is an isolated network stack that includes Ethernet interfaces, ports, routing tables, and DNS configuration.
Endpoints are virtual network interfaces (e.g. veth pairs) that, like normal network interfaces, are responsible for establishing connections. In the CNM, the endpoint's job is to connect a sandbox to a network.
Networks are a software implementation of an 802.1d bridge (more commonly known as a switch). They group together, and isolate, a collection of endpoints that need to communicate.
2、Libnetwork
CNM is the design doc and libnetwork is the reference implementation. It is open source, written in Go, cross-platform, and used by Docker.
In the early days of Docker, all of the networking code lived in the daemon; now all of the core Docker networking code lives in libnetwork. libnetwork implements all three components defined in the CNM. It also implements native service discovery, ingress-based container load balancing, and the network control plane and management plane functionality.
3、Drivers
If libnetwork implements the control plane and management plane, then drivers implement the data plane. For example, connectivity and isolation are handled entirely by drivers, and drivers are what actually create network objects. The relationship is as follows:
Docker has several built-in drivers, also known as native or local drivers, including bridge, overlay, and macvlan on Linux. On Windows they include nat, overlay, transparent, and l2bridge.
Third parties can also write Docker network drivers, known as remote drivers. Each driver is responsible for the actual creation and management of all resources on the networks it owns.
To meet the demands of complex, highly fluid environments, libnetwork allows multiple network drivers to be active at the same time. This means a single Docker environment can run a heterogeneous mix of networks.
(2) Single-host bridged network
The simplest type of Docker network is the single-host bridged network (a Layer 2 bridged network). Docker on Linux creates single-host bridged networks with the built-in bridge driver. The figure below shows two Docker hosts, each with an identically named local bridged network called "mynet." Although the networks have the same name, they are separate and isolated, meaning containers on them cannot communicate directly because they are on different networks.
Every Docker host has a default single-host bridged network, called "bridge" on Linux and "nat" on Windows, to which all new containers attach unless overridden with the --network flag. You can list the networks with the docker network ls command, as follows:
With docker network inspect, you can view network details.
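As a quick illustration (the names and IDs shown will differ from host to host), listing and inspecting networks looks roughly like this:

# list all networks known to this Docker host
docker network ls

# show detailed configuration of the default bridge network,
# including its subnet, gateway, and attached containers
docker network inspect bridge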
On Linux hosts, Docker networks built with the bridge driver are based on battle-hardened Linux bridge technology, which has been in the Linux kernel for more than 15 years. This means high performance and extreme stability. It also means they can be inspected with standard Linux tools, for example: ip link show docker0.
The relationship between Docker's default "bridge" network and the "docker0" bridge in the Linux kernel is as follows:
Since all newly created containers are registered in the embedded Docker DNS service, other containers in the same network can be resolved by name.
Note: the default bridge network on Linux does not support name resolution via the Docker DNS service; all user-defined bridge networks do.
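A minimal sketch of the difference, using made-up network and container names (containers left on the default "bridge" network would not be able to ping each other by name):

# create a user-defined bridge network
docker network create -d bridge localnet

# start two containers attached to it
docker container run -d --name app1 --network localnet alpine sleep 1d
docker container run -d --name app2 --network localnet alpine sleep 1d

# name resolution works here because localnet is user-defined
docker container exec app1 ping -c 2 app2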
Single-host bridged networks are only suitable for local development and very small applications, because containers reach the outside world via port mappings onto the host: once one container is bound to, say, port 5000 on the host, no other container can publish that same host port.
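For completeness, a hedged example of publishing a container port onto the host (nginx and the port numbers are only used for illustration):

# map host port 5000 to port 80 inside the container
docker container run -d --name web --network bridge -p 5000:80 nginx

# a second container trying to publish host port 5000 would fail,
# because that host port is already taken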
(3) Multi-host overlay network
Multi-host overlay networks are covered in detail in a later section, so I'll only cover them briefly here.
Overlay networks are multi-host by nature. They allow a single network to span multiple hosts, so that containers on different hosts can communicate at Layer 2. They are ideal for container-to-container communication, including applications that run entirely in containers.
Docker provides a native driver for overlay networks. This makes creating them very simple: just add the -d overlay flag to the docker network create command.
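A minimal sketch, assuming the command is run on a Swarm manager node and that "overnet" is just an example name:

# create a multi-host overlay network called overnet
docker network create -d overlay overnet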
(4) Connecting to an existing network
The ability to connect containerized apps to external systems and physical networks is important. A common example is a partially containerized app, where the containerized parts need a way to communicate with the non-containerized parts running on external physical networks and VLANs. The built-in macvlan driver was created with this in mind: by giving each container its own MAC address and IP address, it makes containers first-class citizens on the existing network.
On the positive side, macvlan performance is good because it does not require port mappings or an additional bridge; container interfaces are connected directly through a host interface (or subinterface). On the downside, it requires the host NIC to be in promiscuous mode, which is not allowed on most public cloud platforms. macvlan is therefore great for your own corporate data center network, but does not work in the public cloud.
Let's dig deeper with the help of the following pictures and a hypothetical example.
Now, we add a Docker host, as follows:
We now have a requirement to attach a container (an app service) to VLAN 100. To do so, we create a new Docker network using the macvlan driver. Macvlan, however, needs us to tell it a few things about the target network, such as the subnet, the gateway, the range of IPs that can be assigned to containers, and which host interface or subinterface to use.
Create a macvlan network called "macvlan100" that will connect containers to VLAN 100. The command is as follows:
docker network create -d macvlan --subnet=10.0.0.0/24 --ip-range=10.0.0.0/25 --gateway=10.0.0.1 -o parent=eth0.100 macvlan100
After the above command is executed, it is as follows:
Macvlan uses standard Linux subinterfaces, and you tag them with the ID of the VLAN you are connecting to. In the example above, we are connecting to VLAN 100, so we tag the subinterface with .100 (eth0.100).
We also use the --ip-range flag to tell the macvlan network which sub-range of IP addresses it can assign to containers. It is important that this range of addresses is reserved for Docker and not used by other nodes or DHCP servers.
Execute the command: docker container run -d --name mactainer1 --network macvlan100 alpine sleep 1d, and the network structure is as follows:
Note: the underlying network (VLAN 100) does not see any of the macvlan plumbing; it only sees a container with a MAC address and an IP address.
We now have a macvlan network and a container connected through it to an existing VLAN. The Docker macvlan driver is built on top of the Linux kernel driver of the same name and supports VLAN trunking. This means we can create multiple macvlan networks on a single Docker host and connect containers on that host to different VLANs, as shown below.
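As a hedged sketch of what trunking enables (the VLAN ID, subnet, and names below are assumptions for the example, not values from the article):

# macvlan100 (VLAN 100) already exists; add a second macvlan network
# on VLAN 200 via the eth0.200 subinterface
docker network create -d macvlan --subnet=192.168.200.0/24 --gateway=192.168.200.1 -o parent=eth0.200 macvlan200

# containers on the same host can now be attached to either VLAN
docker container run -d --name mactainer2 --network macvlan200 alpine sleep 1d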
(5) Service discovery
In addition to the core network, libnetwork also provides important network services.
Service discovery allows all containers and Swarm services to locate each other by name. The only requirement is that they are on the same network. This leverages Docker's embedded DNS service, which also provides a DNS resolver for each container. As shown below, the c1 container pings the c2 container by name.
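The same idea applies to Swarm services. A rough sketch, with illustrative names (this is a common pattern, not a command sequence from the article):

# create an attachable overlay network
docker network create -d overlay --attachable prod-net

# create two services on that network
docker service create --name svc1 --network prod-net --replicas 1 alpine sleep 1d
docker service create --name svc2 --network prod-net --replicas 1 alpine sleep 1d

# tasks of svc1 can now resolve "svc2" by name via the embedded DNS server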
(6) Ingress network
Swarm supports two publishing modes that make services accessible from outside the cluster: ingress mode (the default) and host mode.
Services published via ingress mode can be reached from any node in the swarm, even nodes that are not running a replica of the service. Services published via host mode can only be reached via nodes that are running a replica. The figure below shows the difference between the two modes.
Ingress mode is the common choice. It uses a Layer 4 routing mesh, sometimes called the Service Mesh or the Swarm-Mode Service Mesh. The figure below shows the basic flow of an external request to a service exposed in ingress mode.
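A hedged sketch of the two publishing modes (the service names, image, and ports are illustrative):

# ingress mode (the default): port 5000 is opened on every swarm node,
# and the routing mesh forwards requests to a healthy replica
docker service create --name web --replicas 2 --publish published=5000,target=80 nginx

# host mode: the port is only opened on nodes actually running a replica,
# so this is commonly combined with --mode global (one replica per node)
docker service create --name web2 --mode global --publish published=5001,target=80,mode=host nginx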
This concludes the look at the principles of Docker networking. Theory sticks best when paired with practice, so try these commands out for yourself.