Swarm introduction
Swarm is a relatively simple toolset that Docker released in early December 2014 to manage Docker clusters. It turns a group of Docker hosts into a single virtual host. Swarm uses the standard Docker API as its front-end access point; in other words, Docker clients of various forms (docker-client in Go, docker-py, docker, etc.) can communicate directly with Swarm. Swarm is developed almost entirely in Go. Swarm 0.2 added a new scheduling strategy that spreads containers across the available nodes, as well as support for more Docker commands and cluster drivers. The Swarm daemon is only a scheduler and router: Swarm does not run containers itself; it merely accepts requests from Docker clients and schedules suitable nodes to run the containers. This means that even if Swarm dies for some reason, the nodes in the cluster keep running as usual, and when Swarm resumes operation it collects the information and rebuilds its view of the cluster.
Docker's Swarm mode integrates many tools and features, such as rapid cross-host service deployment, rapid service scaling, and cluster management built into the Docker Engine, which means no third-party management tools are required.
Other features include a decentralized design, a declarative service model, scalability, desired-state reconciliation, multi-host networking, distributed service discovery, load balancing, rolling updates, security (encrypted communication), and so on.
Cluster characteristics:
1) Cluster management integrated into Docker Engine: using the built-in cluster management functionality, you can create a Swarm cluster directly with Docker CLI commands and then deploy application services, without any external software to create and manage the cluster.
2) Decentralized design: a Swarm cluster contains two types of Node, Manager and Worker, and either type can be deployed directly on top of Docker Engine. While the Swarm cluster is running, you can make any changes needed to grow or shrink the cluster, such as adding a Manager Node or removing a Worker Node, without pausing or restarting the current Swarm cluster services.
3) Declarative service model: Docker Engine uses a declarative approach to define the desired state of the various services in an application stack. For example, you might describe an application stack consisting of a Web front-end service, a back-end database service, and a message queue service that the front end depends on.
4) Reconciliation of desired and actual state: the Swarm Manager Node constantly monitors the cluster and reconciles its state so that the actual state matches the desired state. For example, if an application service is started with 10 replicas specified, 10 Docker containers are launched to run it. If 2 of the containers running on some Worker Node fail, the Swarm Manager selects other available Worker Nodes in the cluster and creates 2 new replicas, keeping the actual number of running containers at the expected 10.
5) Multi-host networking: the Swarm Manager assigns each service in the cluster a unique DNS name and load-balances the running Docker containers. You can query the state of the containers running in the Swarm cluster through the DNS server built into Swarm.
6) Load balancing: within the Swarm, you can specify how to distribute service containers (Service Containers) across Nodes to achieve load balancing. To use a load balancer outside the Swarm cluster, you can expose the service containers' ports externally.
7) Security policy: Nodes within the Swarm cluster enforce mutual TLS authentication, and communication is securely encrypted both on a single Node and between Nodes in the cluster. You can choose to use a self-signed root certificate or a certificate from a custom root CA.
8) Rolling updates: when services need to be updated, we can deploy the update incrementally across Nodes. The Swarm Manager lets you set, through the Docker CLI, a delay interval between deployments to different sets of Nodes, giving very flexible control. If a service update fails, Swarm pauses the subsequent updates and can roll back to the previous version.
Swarm architecture
As a tool for managing Docker clusters, Swarm must first be deployed; it can be deployed on a single node. A Docker cluster is required, and every node in the cluster must have Docker installed.
For a specific Swarm architecture diagram, you can refer to the following figure:
The most important processing component of the Swarm architecture is naturally the Swarm node; the objects Swarm manages are the Docker Nodes that make up the cluster, while the Docker Client is responsible for sending requests to Swarm.
Key concepts of swarm
1) Swarm
Cluster management and orchestration use SwarmKit, which is embedded in the Docker Engine. You can start swarm mode when initializing Docker (docker swarm init) or join an existing swarm (docker swarm join).
2) Node
A node is an instance of the Docker Engine that has joined the swarm. When deploying to the cluster, you submit a service definition to a manager node; the Manager node then dispatches tasks to worker nodes. Manager nodes also perform the orchestration and cluster-management functions needed to maintain the state of the cluster, while worker nodes receive and execute the tasks sent by manager nodes. Usually a manager node is also a worker node, and worker nodes report their current status to the manager nodes.
3) Service (Service)
A service is the definition of tasks to be executed on the worker nodes. When you create a service, you need to specify the container image to use.
4) Task (Task)
A task is a command executed in a Docker container. The Manager node assigns tasks to worker nodes according to the specified number of replicas.
Main management command sets:
docker swarm: cluster management. The subcommands are init, join, leave, update (see docker swarm --help).
docker service: service creation and management. The subcommands are create, inspect, update, remove, tasks (see docker service --help).
docker node: node management. The subcommands are accept, promote, demote, inspect, update, tasks, ls, rm (see docker node --help).
1) manager node (management node): performs cluster management functions, maintains cluster state, and elects a leader node to carry out the scheduling tasks.
2) worker node (work node): receives and executes tasks, and participates in container cluster load scheduling; it is used only to host tasks.
3) service: a service is the definition of the tasks to execute on worker nodes; it describes which Docker image to use and which commands to run in the containers created from that image.
4) task: a task carries a container and the commands to run inside it. The task is the executing entity of a service: it starts the Docker container and runs the commands inside the container.
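As a quick illustration of these four concepts, a minimal sketch (the service name web and the nginx image are arbitrary examples, not part of the case later in this article):
[root@manager ~]# docker service create --replicas 3 --name web nginx   # define a service of 3 replica tasks
[root@manager ~]# docker service ps web   # list the tasks; each task runs exactly one container on some node
[root@manager ~]# docker service rm web   # remove the example service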
How swarm works
1. Node
Starting with Docker Engine version 1.12, swarm mode was introduced to create a cluster of one or more Docker Engines, called a swarm. A swarm contains one or more nodes: physical or virtual machines running Docker Engine 1.12 or later in swarm mode. There are two types of node: managers and workers.
Manager node
The manager node handles cluster management tasks:
Maintaining cluster state, scheduling services, and serving the swarm mode HTTP API endpoints.
The swarm manager uses Raft to maintain a consistent internal state across the entire swarm. For testing purposes you can run a swarm with a single manager node. If that single manager goes offline, the services will keep running, but you will need to create a new cluster to recover.
To take advantage of swarm mode's fault tolerance, Docker recommends creating an odd number of manager nodes according to your high-availability requirements. With multiple manager nodes you can recover from the failure of a manager without downtime. A swarm with three managers tolerates the loss of at most one manager; a swarm with five managers tolerates the loss of at most two; a swarm with N managers tolerates the loss of at most (N-1)/2 managers. Docker recommends a maximum of seven manager nodes.
Worker node
A Worker node is an instance of Docker Engine whose sole purpose is to run containers. Worker nodes do not participate in the Raft distributed state, make scheduling decisions, or serve the swarm mode HTTP API.
You can create a swarm with a single manager node, but you cannot have a worker node without a manager node.
By default, all manager nodes are also worker nodes. In a cluster with a single manager node, you can run docker service create and the scheduler will place all tasks on the local Engine. To prevent the scheduler from placing tasks on the manager nodes of a multi-node cluster, set the manager nodes' availability to Drain; the scheduler will then stop distributing new tasks to those nodes.
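For example, a minimal sketch of draining and restoring a node, using the manager hostname from the case later in this article:
[root@manager ~]# docker node update --availability drain manager   # stop scheduling new tasks onto this node
[root@manager ~]# docker node update --availability active manager  # restore normal scheduling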
Change the role
You can promote a worker node to a manager node by executing docker node promote. For example, you may want to promote a worker node when a manager node goes offline.
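A minimal sketch, using the node names from the case later in this article:
[root@manager ~]# docker node promote worker01   # worker01 becomes a manager node
[root@manager ~]# docker node demote worker01    # demote it back to a worker node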
2. Service (service, task, container)
When you deploy a service to the swarm, the swarm manager receives your definition of the service's desired state. It then schedules the service on swarm nodes as one or more replica tasks. These tasks run independently of each other on the nodes of the swarm.
For example, suppose you want to load balance three HTTP server instances. The chart below shows three replicas of the HTTP server.
Each of the three HTTP instances is a task in swarm.
A container is an isolated process. In the swarm mode model, each task invokes exactly one container: the task carries the container. Once the container is running, the scheduler recognizes the task associated with it as running. If the container stops or fails, the task is marked as terminated.
3. Tasks and scheduling
A task is the atomic unit of scheduling within a swarm. When you declare a desired service state by creating or updating a service, the scheduler realizes that state by scheduling tasks. For example, you specify that a service should keep three HTTP instances running at all times; the scheduler creates three tasks, each of which runs one container. The container is the instantiation of the task. If an HTTP container fails or stops and its task is marked as failed, the scheduler creates a new task that spawns a new container.
A task is a one-way mechanism: it progresses monotonically through a series of states such as assigned, prepared, and running. If a task fails, the scheduler removes the task and its container and then creates a new task to replace it.
The following diagram shows how the swarm mode receives service creation requests and schedules tasks to the worker node.
4. Replicated and global services
There are two types of service deployment: replicated and global.
For replicated services, you specify the number of tasks to run. For example, you might decide to deploy three replicas of an HTTP instance, each serving the same content.
A global service runs one identical task on every node. You do not need to specify a number of tasks in advance: each time you add a node to the swarm, the orchestrator creates a task and the scheduler assigns it to the new node.
For example, you want to run monitoring agents, virus scanners, etc., on each node.
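For instance, a minimal sketch of a global service (busybox here is only a stand-in for a real monitoring-agent image):
[root@manager ~]# docker service create --mode global --name node-agent busybox ping docker.com
One task is created automatically on every current and future node of the swarm.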
The following diagram shows a service with three replicas and a global service (shown in gray).
5. Swarm scheduling policies
When the scheduler (on the leader node) runs a container, Swarm calculates the most suitable node to run it according to the specified policy. The supported policies are spread, binpack, and random.
1) Random
As the name implies, the random policy picks a Node at random to run the container; it is generally used for debugging. The spread and binpack policies instead calculate which node should run the container based on each node's available CPU, RAM, and number of running containers.
2) Spread
All else being equal, the spread policy selects the node running the fewest containers to run a new container, while the binpack policy selects the node where containers are most densely packed. The spread strategy distributes containers evenly across the nodes of the cluster, so if a node dies, only a small number of containers are lost.
3) Binpack
The binpack strategy minimizes container fragmentation: it leaves unused nodes available for containers that need more resources and packs running containers onto as few nodes as possible.
Deployment case
1. The lab environment of this case
The lab environment for this case is shown in the following table (Docker Swarm system environment).
Host      Operating system      Hostname/IP address        Main software and version
Server    CentOS 7.5 x86_64     manager/192.168.5.210      Docker-ce 18.06.0-ce
Server    CentOS 7.5 x86_64     worker01/192.168.5.211     Docker-ce 18.06.0-ce
Server    CentOS 7.5 x86_64     worker02/192.168.5.212     Docker-ce 18.06.0-ce
2. Case requirements
Deploy Docker Swarm clusters
3. Case implementation approach
1) Prepare the Docker Swarm deployment environment.
2) Deploy the Docker Swarm cluster.
4. Case implementation
Docker Swarm system environment preparation
1. Modify the hostname on each host.
[root@localhost ~]# hostnamectl set-hostname manager     # on 192.168.5.210
[root@localhost ~]# hostnamectl set-hostname worker01    # on 192.168.5.211
[root@localhost ~]# hostnamectl set-hostname worker02    # on 192.168.5.212
2. Modify the /etc/hosts file for name resolution; every host needs this change.
[root@manager ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.210 manager
192.168.5.211 worker01
192.168.5.212 worker02
3. Modify the DNS address of the host to ensure access to the Internet
[root@manager ~]# ping -c 1 www.baidu.com
PING www.a.shifen.com (119.75.216.20) 56(84) bytes of data.
64 bytes from 119.75.216.20: icmp_seq=1 ttl=128 time=5.04 ms
--- www.a.shifen.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.045/5.045/5.045/0.000 ms
4. Install docker and necessary software packages for each device.
[root@manager ~]# yum -y install wget telnet lsof vim
[root@manager ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Add a domestic Docker repository mirror:
[root@manager ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install the latest version of Docker and start it:
[root@manager ~]# yum -y install docker-ce
[root@manager ~]# systemctl enable docker
[root@manager ~]# systemctl start docker
Configure Image Accelerator
Write the accelerator address into /etc/docker/daemon.json (the mirror URL below is a placeholder; substitute your own accelerator address) and restart Docker:
[root@manager ~]# sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-mirror-id>.mirror.aliyuncs.com"]
}
EOF
[root@manager ~]# systemctl daemon-reload
[root@manager ~]# systemctl restart docker
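The swarm initialization and service creation commands themselves do not appear above; a minimal sketch of the likely steps, assuming the node addresses from the table and the busybox-based hello service described below (the join token is printed by docker swarm init and abbreviated here):
[root@manager ~]# docker swarm init --advertise-addr 192.168.5.210
[root@worker01 ~]# docker swarm join --token <worker-token> 192.168.5.210:2377
[root@worker02 ~]# docker swarm join --token <worker-token> 192.168.5.210:2377
[root@manager ~]# docker service create --replicas 2 --name hello busybox ping www.baidu.com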
verify: Service converged
[root@manager ~]# docker service logs -f hello
hello.1.k25qwbkbb0c5@worker01 | 64 bytes from 123.125.115.110: seq=212 ttl=53 time=1.277 ms
hello.2.6r381vq3jeyv@worker02 | 64 bytes from 123.125.115.110: seq=212 ttl=53 time=1.295 ms
...
A service named hello has been created from the Docker image busybox with 2 service replicas; that is, two Docker containers are started to run the service, and you can view the ping results through the logs. To check all application services currently deployed and started, execute the following command:
[root@manager ~]# docker service ls
The execution result is as follows:
You can also query the details of a specified service by executing the following command:
[root@manager ~]# docker service ps hello
View the result information as follows:
In the above information, the hello application service is deployed on the worker01 and worker02 Nodes, together with their corresponding current state. At this point you can also execute the docker ps command on a Node to view the Docker containers currently started on it:
[root@worker01 ~]# docker ps -a
Show service details. For an easy-to-read display, execute the following command:
[root@manager ~]# docker service inspect --pretty hello
To display the details in JSON format, execute:
[root@manager ~]# docker service inspect hello
2.2. Expanding and shrinking a service
Docker Swarm supports expanding and shrinking services. Swarm sets the service type through the --mode option, which provides two different modes: one is replicated, which lets you specify the number of service tasks (that is, how many replicas are needed); this is also the service type Swarm uses by default. The other is global, which creates one service task on every Node of the Swarm cluster.
Syntax:
docker service scale <SERVICE>=<REPLICAS>
For example, to expand the previously deployed hello service from 2 replicas to 3, execute the following command:
[root@manager ~]# docker service scale hello=3
hello scaled to 3
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
You can further check the status of each replica of hello through docker service ps hello, as follows:
To shrink a service, simply set the number of replicas lower than the application service's current replica count; the replicas beyond the specified number will be deleted.
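For example, a sketch of shrinking hello back down to a single replica, following the same syntax:
[root@manager ~]# docker service scale hello=1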
2.3. Delete a service
To delete a service, simply execute the following command on Manager Node:
Syntax:
docker service rm <SERVICE>
If you delete the hello application service by executing docker service rm hello, all replicas of the hello service will be deleted.
[root@manager ~] # docker service rm hello
2.4. Rolling updates
Create a Redis 3.0.6 service by executing the following command on the Manager Node:
[root@manager ~]# docker service create \
>   --replicas 3 \
>   --name redis \
>   --update-delay 10s \
>   redis:3.0.6
lbnu84tngtncjodaq9x6d3gdn
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
The rolling update policy is configured when the service is deployed.
The --update-delay parameter configures the delay between updating one service task (or one set of tasks) and the next. You can express the delay in seconds (s), minutes (m), and hours (h); for example, 10m30s means a delay of 10 minutes 30 seconds. By default the scheduler updates one task at a time; you can pass --update-parallelism to configure the maximum number of service tasks the scheduler updates simultaneously. By default, when an updated task returns to the RUNNING state, the scheduler moves on to the next task until all tasks are updated. If any task returns FAILED at any point during the update, the scheduler pauses the update. You can control this behavior with --update-failure-action on docker service create or docker service update; a sketch combining these flags follows below.
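For example, a minimal sketch combining these flags (the service name redis-demo and the parameter values are illustrative only):
[root@manager ~]# docker service create \
>   --replicas 3 \
>   --name redis-demo \
>   --update-delay 10s \
>   --update-parallelism 2 \
>   --update-failure-action pause \
>   redis:3.0.6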
View the details of the service redis:
[root@manager ~]# docker service inspect --pretty redis
Now start updating the redis container. The swarm manager applies updates to nodes according to the UpdateConfig policy:
[root@manager ~]# docker service update --image redis:3.0.7 redis
redis
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
By default, the steps for the scheduler to apply rolling updates are as follows:
Stop the first task; apply the update to the stopped task; start the container for the updated task; if the updated task returns RUNNING, wait the specified delay and then stop the next task; if at any time a task returns FAILED, pause the update.
View updated Redis service details
Checking the service, you will find that the tasks from the previous version are still listed, but stopped.
[root@manager ~]# docker service ps redis
If the update is paused, execute the docker service update command to restart the paused update, for example:
docker service update redis
2.5. Rolling back a version
[root@manager ~]# docker service update --rollback redis
3. Add a custom Overlay network
Docker Engine's swarm mode natively supports overlay networks, so you can enable container-to-container networking. Overlay networks in swarm mode include the following features:
You can attach multiple services to the same network. By default, service discovery assigns each swarm service a virtual IP address (VIP) and a DNS name, so that services on the same network can connect to each other by service name.
You can configure a service to use DNS round-robin instead of a VIP. In order to use swarm overlay networks, you need to open the following ports between the swarm nodes before enabling swarm mode (a firewalld sketch follows the list):
TCP/UDP port 7946 - for container network discovery
UDP port 4789 - for the container overlay network
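If the nodes run firewalld, a sketch of opening these ports on every node (an assumption; the original does not show firewall configuration, and TCP port 2377 for cluster management traffic is included here as well):
[root@manager ~]# firewall-cmd --permanent --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
[root@manager ~]# firewall-cmd --reload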
Create an Overlay network on Manager Node and execute the following command:
[root@manager ~]# docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  --opt encrypted \
  my-network
dlu21qvmv1zoi2bnl60wptkro
Node-to-node communication in a swarm is encrypted by default. The optional --opt encrypted parameter enables an additional layer of encryption for the VXLAN traffic between containers on different nodes.
The --subnet parameter specifies the subnet used by the overlay network. If you do not specify a subnet, the swarm manager automatically chooses one and assigns it to the network. On some older kernels, including kernel 3.10, automatically assigned addresses may overlap with other subnets, and this overlap can cause connectivity problems.
Execute docker network ls to view the network:
The swarm scope indicates that services deployed to the swarm can use this network. When you create a service and attach it to the network, the swarm extends the network only to the nodes where that service is running. On a worker node that is not running any service attached to the network, docker network ls does not show that network.
To attach a service to an overlay network, pass the --network parameter when creating the service. For example, create an nginx service attached to the network called my-network:
[root@manager ~]# docker service create \
  --replicas 3 \
  --name my-web \
  --network my-network \
  nginx
xnjwd3sg67tddbee1r29mnezv
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
View the information of the node where the service is located
[root@manager ~]# docker service ps my-web
View network details
[root@manager ~]# docker network inspect my-network
By default, when you create a service and attach it to a network, swarm assigns a VIP to the service.
The service name maps to the VIP as a DNS alias. DNS mapping information is shared among the containers on the network via the gossip protocol, so containers on the same network can reach each other by service name.
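A quick check, sketched on the assumption that the my-web containers are glibc-based (as the official nginx image is), so getent can query swarm's embedded DNS; the container ID is a placeholder:
[root@worker01 ~]# docker ps --filter name=my-web -q   # find a my-web container ID on this node
[root@worker01 ~]# docker exec -it <container-id> getent hosts my-web   # resolves to the service VIP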
View VIP:
[root@manager ~]# docker service inspect \
  --format='{{json .Endpoint.VirtualIPs}}' \
  my-web
[{"NetworkID":"dlu21qvmv1zoi2bnl60wptkro","Addr":"10.0.9.5/24"}]
If the outside wants to access this service, you can publish the port:
By default, the ports published this way are TCP ports. You can also publish a UDP port; for example: docker service create --name dns-cache -p 53:53/udp dns-cache. Now publish TCP port 8080 of my-web:
[root@manager ~]# docker service update --publish-add 8080 my-web
my-web
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
View published ports:
[root@manager ~]# docker service inspect --format='{{json .Endpoint.Ports}}' my-web
[{"Protocol":"tcp","TargetPort":80,"PublishedPort":8090,"PublishMode":"ingress"}]
When you access the published port on any node, swarm's load balancer routes your request to an available container, which may be on any node. The routing mesh listens on the published port on every IP address of every swarm node.
You can configure an external load balancer to route requests to the swarm service. For example, you can configure HAProxy to route requests to an nginx service published on port 8080.
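A minimal sketch of such an HAProxy configuration (illustrative only; the backend addresses assume the three nodes of this case and a published port of 8080, and the file path is the conventional one):
[root@lb ~]# cat /etc/haproxy/haproxy.cfg
global
    daemon
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
frontend http_front
    bind *:80
    default_backend swarm_nodes
backend swarm_nodes
    balance roundrobin
    # the routing mesh listens on the published port on every node
    server node1 192.168.5.210:8080 check
    server node2 192.168.5.211:8080 check
    server node3 192.168.5.212:8080 check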
4. Create a data volume
[root@manager ~]# docker volume create web-test
web-test
[root@manager ~]# docker volume ls
DRIVER    VOLUME NAME
local     3c79787a32ee0c28d71022c854489ff95584c36c0ff745d861617e2a67f2314d
local     8cec61172886947826aef4168807dba054a3fdeacf606c0e76e8cae4dd434250
local     d7b8a0e8c250ea61be4051b2f25fef2d5ffebbee89f0424b7f168fa27d8f2189
local     web-test
Apply the data volume created above, as follows:
[root@manager ~]# docker service create --mount type=volume,src=web-test,dst=/usr/local/apache2/htdocs --replicas 2 --name web08 httpd
vnzbh778lma8qcbpc5kb3zu0o
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
View the details of the data volume:
[root@manager ~]# docker volume inspect web-test
[
    {
        "CreatedAt": "2018-08-19T16:01:32+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/web-test/_data",
        "Name": "web-test",
        "Options": {},
        "Scope": "local"
    }
]
View the node where the service is located:
[root@manager ~]# docker service ps web08
ID             NAME      IMAGE          NODE       DESIRED STATE   CURRENT STATE                ERROR   PORTS
mvevlguiv506   web08.1   httpd:latest   worker02   Running         Running about a minute ago
c1o5qbvndynv   web08.2   httpd:latest   worker01   Running         Running about a minute ago
Log in to the worker01 host
[root@worker01 ~]# cd /var/lib/docker/volumes/web-test/_data/
[root@worker01 _data]# mkdir test01 test02
[root@worker01 _data]# docker exec -it eeda631495e3 bash
root@eeda631495e3:/usr/share/nginx/html# ls /usr/share/nginx/html/
50x.html  index.html  test01  test02
As the verification above shows, after creating a few files in the local data volume directory and then entering the container, the corresponding directory inside the container still contains the data.
Next, use the bind mount type of data volume: create a local mount directory on the worker02 host.
[root@worker02 ~] # mkdir / opt/webroot/
Create the web09 service with two replicas:
[root@manager _data]# docker service create \
  --mount type=bind,source=/opt/webroot,target=/usr/local/apache2/htdocs \
  --replicas 2 \
  --name web09 httpd
ec20py2hqee1k2bldxav58be7
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
The target parameter indicates the path inside the container, and source indicates the path on the local hard disk.
[root@worker02 ~]# cat /opt/webroot/index.html
webroot
[root@worker02 ~]#
Enter the container to verify (note that the container must be on the worker02 host). The verification result is as follows.
root@c76b532f4127:/usr/local/apache2# cat htdocs/index.html
webroot
root@c76b532f4127:/usr/local/apache2#