
Understanding Docker Swarm (27)

2025-03-28 Update · From: SLTechnology News&Howtos > Servers


Original article; reprints are welcome. When reprinting, please credit IT Story Association, thank you!

Original link address: "Advanced articles": Understanding Docker Swarm (27)

This time, let's learn about Docker Swarm: what it is and how it works.

What is docker Swarm?

Product background

In the early days of Docker, you would ssh to a server and run docker commands against the local Docker daemon. As Docker grew, more and more services moved into containers, and managing containers host by host over ssh became impractical. Applications also need to be highly available and avoid single points of failure. Docker's built-in capabilities could no longer meet these demands, and against this background the Docker community created the Docker Swarm project.

Concept

What does the term "swarm" mean? The word describes the clustering behavior of animals: swarms of bees, schools of fish, geese flying south in formation. The Swarm project follows the same idea: multiple Docker instances are gathered together to form one large logical Docker instance that provides cluster services. This cluster exposes the full Docker API, so users can work with the cluster just as they would with a single Docker instance.

Yesterday and today

Before Docker 1.12, Swarm was a separate project that had to be downloaded independently. Since 1.12, the project has been merged into Docker itself and became a subproject of Docker. It is currently the only native tool for managing Docker clusters.

Architecture diagram of docker swarm

The figure below shows the architecture of Docker Swarm managing Docker hosts. Previously, the docker command line targeted a single Docker host: you had to log in to each machine to control it individually. With Swarm, the client's commands target the whole cluster, and they are almost identical to native docker commands. The client sends a command to Swarm, and Swarm chooses a node to actually execute it. Swarm controls Docker through Docker's own remote API.

The next figure describes the same thing but exposes more detail. Each box, the large one above and the small ones below, represents a server, either a physical machine or a virtual machine. The box above is the Swarm manager node, which manages the worker nodes: it can see how many CPUs and how much memory each node has, which services are running on it, and the status of each service, for example its labels and health. The manager handles the life cycle of each node (joining a node, taking a node offline) as well as the life cycle of each service (deployment, update, stop, delete). A worker node is comparatively simple: it just runs the Docker daemon. Since Swarm was integrated into Docker in 1.12, the daemon exposes a remote API through which the manager schedules it to run specific services and containers. Next, let's look at how service deployment works in this architecture.
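To make the manager's view concrete, here is a sketch using standard swarm-mode commands (run on the manager; the hostname `worker1` is an example, and a running cluster is assumed):

```shell
# List every node in the cluster with its status, availability, and role
docker node ls

# Inspect a single node: prints its CPUs, memory, labels, and engine details
docker node inspect worker1 --pretty

# List the tasks (service containers) currently scheduled on that node
docker node ps worker1
```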

The construction of environment

Think back to the Mesos setup covered earlier, where you had to install Docker, Marathon, and ZooKeeper first. Suppose we now have five Linux servers, each with Docker installed. Choose one as the manager and run the first command in the figure below; it prints a token that serves as the Docker Swarm credential. Then run the second command on each worker node to join the cluster; all that is needed is the token plus the IP and port of the manager node. With that, the cluster environment is built.
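The two commands described above can be sketched as follows (the IP address is an example; the actual token is printed by the first command and is left as a placeholder here):

```shell
# On the server chosen as manager: initialize the cluster.
# This prints a join token and a ready-made join command.
docker swarm init --advertise-addr 192.168.1.10

# On each of the remaining worker servers: join the cluster using
# the printed token plus the manager's IP and port (2377 by default).
docker swarm join --token <token> 192.168.1.10:2377
```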

How to deploy

There are two ways for the client to issue docker commands:

ssh directly into the manager node and run docker commands there; or, via remote access, call the docker commands on the manager through the Remote API. The second way is the one drawn in the figure.

Suppose the Docker client is outside the manager node and runs docker service create. The command is first received by the Docker daemon and passed to the Scheduler module. The Scheduler implements the scheduling function and is responsible for selecting the optimal node; it contains two sub-modules, Filter and Strategy. Filter, as the name suggests, filters nodes to find those that meet the conditions (enough resources, node healthy). Strategy then picks the best node from the filtered set (for example, comparing nodes to select the one with the most remaining resources, or the one with the least). Both Filter and Strategy can be customized independently. The Cluster module in the middle is an abstraction of the worker node cluster and holds the information about each node known to Swarm. Discovery on the right is an information-maintenance module, handling things such as labels and health. Finally, Cluster calls the container API to complete the container startup flow.
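As a minimal sketch of kicking off this flow (service and image names are examples; a running swarm manager is assumed):

```shell
# Ask the manager to create a service; the Scheduler's Filter and
# Strategy modules pick the node(s) that will run the containers.
docker service create --name web --replicas 2 -p 8080:80 nginx

# Show which nodes the scheduler actually placed the tasks on
docker service ps web
```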

Scheduling module

When a service is created, Swarm selects the optimal node for the user. The selection is divided into two stages: filtering (Filter) and strategy (Strategy).

Filter

Constraints

The Constraint filter filters on indicators such as the operating system type, kernel version, and storage type; custom constraints are also supported. When the Docker daemon on a machine starts, you can describe the machine's characteristics with labels, and Constraints can then filter nodes on those labels.
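In swarm mode the same idea is expressed with node labels and `--constraint` (the label key/value and names below are examples):

```shell
# Attach a label describing a machine characteristic to a node
docker node update --label-add storage=ssd worker1

# Only nodes carrying that label pass the constraint filter
docker service create --name db \
  --constraint 'node.labels.storage == ssd' \
  mysql
```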

Affinity

The Affinity filter supports container affinity and image affinity. For example, if an application's DB container and its web container should run together, affinity makes that possible.
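In classic standalone Swarm (pre-1.12), affinity was written as scheduler filter expressions passed to `docker run`; a sketch (container and image names are examples):

```shell
# Container affinity: schedule this web container on the same node
# as the container named "db"
docker run -d --name web -e 'affinity:container==db' nginx

# Image affinity: schedule only on nodes that already hold the nginx image
docker run -d -e 'affinity:image==nginx' nginx
```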

Dependency

The Dependency filter handles dependencies such as --link: a newly created container and the containers it depends on are placed on the same node.

Health filter

The health filter filters according to the health status of the nodes to remove the problematic nodes.

Ports filter

The Ports filter filters according to port usage. For example, if port 8080 is occupied on some hosts and free on others, the hosts where it is free are chosen.

Strategy

Binpack

Other things being equal, the node with the least remaining resources is chosen; this strategy packs containers together on as few nodes as possible.

Spread

Other things being equal, the node with the most remaining resources is chosen; this strategy spreads containers evenly across the nodes.

Random

Randomly select a node.
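In classic standalone Swarm, the strategy was chosen when starting the manager; a sketch (`<discovery>` stands for your discovery backend and is deliberately left unfilled):

```shell
# Even distribution across nodes (the default)
swarm manage --strategy spread <discovery>

# Pack containers onto as few nodes as possible
swarm manage --strategy binpack <discovery>

# Pick a node at random
swarm manage --strategy random <discovery>
```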

Service discovery

Service discovery is a little complicated and depends on the scenario.

Ingress

Ingress builds a virtual (overlay) network on top of the physical network, so the applications running on Swarm no longer depend on the physical network, and the physical network underneath can stay unchanged. Networking is a big topic in its own right; you have probably heard of network engineers, and the fact that this is a whole profession suggests it is not so easy to master, so it will not be explained in detail here.

PS: suppose you run an nginx service with two instances, nginx1 and nginx2, with container port 80 published on host port 8080, running on node2 and node3 respectively. Although node1 runs no instance, it still listens on port 8080: the service can be reached on every worker node in the cluster by entering any node's IP and port 8080. You can also set up an external load balancer (External LB) that round-robins across each node's port 8080. Why can the service be reached on every node? After a service starts, all nodes update their VIP LB, establishing a mapping between the new service's port number and the service information. The VIP LB is a load balancer based on a virtual IP: it resolves the virtual IP to a real IP and then accesses the service.
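The routing-mesh behavior described above can be sketched like this (the IPs are examples for node1/node2/node3; a running cluster is assumed):

```shell
# Publish port 8080 on the ingress routing mesh with two replicas
docker service create --name nginx --replicas 2 -p 8080:80 nginx

# Every node answers on 8080, even a node running no replica
curl http://192.168.1.10:8080
curl http://192.168.1.11:8080
curl http://192.168.1.12:8080
```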

Ingress+ link

For the docker-compose style: you can create a group of containers from a docker-compose.yml file, and those containers used to reach each other via link. This scenario is essentially the docker-compose link style of networking.

PS: this is the link scenario on top of Ingress: containers access each other via link and do not need the host network. How is link implemented? Letting one container link to another on the same host is easy, since they share a machine. Linking one service to another is not so simple: a service may consist of one container or many, running on a single machine or spread across several. How can they reach each other by name? This uses the container's DNS. Say the nginx service depends on the tomcat service, with two nginx instances and one tomcat instance. All nginx containers resolve the name "tomcat" to tomcat's VIP, and the VIP is responsible for load balancing. That is the principle. Link cannot be accessed externally; it is only suitable for scenarios inside the Swarm cluster.
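A sketch of the nginx-depends-on-tomcat example via service DNS on a shared overlay network (the network and service names are examples):

```shell
# Services attached to the same overlay network resolve each other by name
docker network create --driver overlay appnet
docker service create --name tomcat --network appnet tomcat
docker service create --name nginx --replicas 2 --network appnet nginx

# Inside any nginx container, the name "tomcat" resolves to the
# tomcat service's VIP, which load-balances to the real instances:
#   getent hosts tomcat
```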

Custom network

To use a custom network, the first step is to create the network. All containers connected to it can reach each other by name, with no link operation needed: as long as they are on this network, everything works by name. Underneath, it works the same way as link: application names are resolved through DNS, and load balancing is then done via the VIP LB.

# create custom network
docker network create --driver=overlay --attachable mynet
# create service
docker service create -p 80:80 --network=mynet --name nginx nginx

Ingress supports external access; Ingress+link and custom networks are reachable only between containers.

Service orchestration

Service deployment & service discovery (described above)

Service update: docker service update

Service scaling: docker service scale

Advantages of Swarm
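The update and scaling commands mentioned above, sketched with example names (a running service called nginx is assumed):

```shell
# Rolling update: switch the service to a different image version
docker service update --image nginx:1.25 nginx

# Scale the service out to 5 replicas
docker service scale nginx=5
```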

Exposed externally through the Docker API interface

The benefit is that you can switch smoothly to Docker Swarm; the existing system basically needs no changes.

Easy to use and low cost of learning

Previous Docker experience can be carried over directly.

Lightweight and resource-efficient

It focuses purely on managing Docker clusters.

Plug-in mechanism

Swarm's modules are abstracted behind corresponding APIs and can be customized to your own needs.

Full support for docker command parameters

Because Swarm is released in step with Docker, new Docker features appear in Docker Swarm at the same time.

PS: that covers just about everything on Docker Swarm. Next time, we will start building a Docker Swarm environment.


© 2024 shulou.com SLNews company. All rights reserved.
