Basic concepts:
Swarm introduction:
Swarm is a relatively simple set of tools released by Docker in early December 2014 to manage Docker clusters. It turns a group of Docker hosts into a single virtual host.
Swarm uses the standard Docker API as its front-end access point; in other words, all kinds of Docker clients (the Go docker client, docker-py, docker, etc.) can communicate with Swarm directly. Swarm is developed almost entirely in the Go language. The Swarm 0.2 release added a new strategy for scheduling containers in the cluster, spreading them across the available nodes, as well as support for more Docker commands and cluster drivers. The Swarm daemon is only a scheduler and router: Swarm does not run containers itself, it merely accepts requests from Docker clients and schedules suitable nodes to run them. This means that even if Swarm dies for some reason, the nodes in the cluster keep running as usual, and when Swarm resumes operation it collects the information again and rebuilds the cluster state.
Features of Swarm clusters:
A cluster can consist entirely of managers, but not entirely of workers (there must be at least one manager).
Node: a node (host) in the cluster.
Manager: the management node.
Worker: the work node, which carries out the tasks assigned by the manager.
Service: a task, which defines the commands that the worker side receives from the manager.
Prepare the environment:
Three hosts (centos7):
Docker version: 1.12 or above.
Node01:172.16.1.30
Node02:172.16.1.31
Node03:172.16.1.32
(1) Modify the host names first (run each command on the corresponding host):
[root@sqm-docker01 ~]# hostnamectl set-hostname node01
[root@sqm-docker01 ~]# hostnamectl set-hostname node02
[root@sqm-docker01 ~]# hostnamectl set-hostname node03
(2) Configure host name resolution on all three hosts:
[root@node01 ~]# vim /etc/hosts
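The contents of the hosts file are not shown here; based on the addresses listed above, the entries would look something like:
172.16.1.30 node01
172.16.1.31 node02
172.16.1.32 node03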
(3) Set up password-free (key-based) SSH login:
// Press Enter at each prompt to generate a key pair with the default settings:
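The key-generation command is not shown here; a typical invocation, assuming RSA keys and default paths, would be:
[root@node01 ~]# ssh-keygen -t rsa      # press Enter at every prompt to accept the defaults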
// Copy the key to node02 and node03:
[root@node01 ~]# ssh-copy-id node02
[root@node01 ~]# ssh-copy-id node03
// Now that password-free login is set up, copy the host name resolution file to the other two nodes:
[root@node01 ~]# scp /etc/hosts root@172.16.1.31:/etc/hosts
[root@node01 ~]# scp /etc/hosts root@172.16.1.32:/etc/hosts
Project operation:
1) Initialize the cluster:
Specify the current host as the creator of the cluster (leader)
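The init command is not reproduced here; going by the command summary at the end of this article and node01's address, it would look like:
[root@node01 ~]# docker swarm init --advertise-addr 172.16.1.30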
2) The command to join the cluster was printed when the cluster was initialized (copy it); use it to add the node02 and node03 hosts to the cluster:
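The exact join command includes a token generated at init time, which is not reproduced here; its general form (with a placeholder token) is:
[root@node02 ~]# docker swarm join --token <token> 172.16.1.30:2377
[root@node03 ~]# docker swarm join --token <token> 172.16.1.30:2377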
// Check whether the nodes have joined the cluster:
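A minimal check, using the node-listing command from the summary below:
[root@node01 ~]# docker node ls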
Tip: only a manager has permission to run this command.
# If other nodes need to join the cluster with a specific role, or you have forgotten the join command generated when the cluster was initialized, you can run the following commands to view it:
Note: only the manager side has permission to view it.
# View the command to join the cluster as a worker:
[root@sqm-docker01 ~]# docker swarm join-token worker
# View the command to join the cluster as a manager:
[root@sqm-docker01 ~]# docker swarm join-token manager
(3) Configure the web UI interface:
# pull image:
A local image archive is used here, so just import it directly:
[root@node01 ~]# docker load --input myvisualizer.tar
# Run the service:
[root@node01 ~]# docker run -d -p 8000:8080 -e HOST=172.16.1.30 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock --name visualizer dockersamples/visualizer
HOST specifies the address as the local address.
## Visit the web page:
URL: http://172.16.1.30:8000/
You can see the three nodes in the cluster.
(4) Build the swarm cluster network (overlay network)
You may remember from the earlier blog post on overlay networks that you had to deploy Consul (as the data store) before building an overlay network. In a swarm cluster environment, however, the equivalent of the Consul service is built in by default, so you can create the overlay network directly.
## Create the overlay network:
[root@node01 ~]# docker network create -d overlay --attachable docker
Note: when creating an overlay network, if you do not add --attachable, the network cannot be used by standalone containers.
## Run a container on node01 and node02 to test whether they can communicate properly:
[root@node01 ~]# docker run -itd --name test1 --network docker busybox
[root@node02 ~]# docker run -itd --name test2 --network docker busybox
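A simple connectivity check, assuming the two containers above are running, is to ping one from the other by name:
[root@node01 ~]# docker exec test1 ping -c 3 test2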
(5) Build a private registry (to share images)
The purpose of building a private registry is to let everyone in the cluster share the images stored in it and deploy services conveniently; for security reasons, companies usually deploy their own private registry.
// Deploy it from the official registry image:
[root@node01 ~]# docker run -d --name registry --restart=always -p 5000:5000 registry:latest
// Modify the docker configuration file:
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service
The modifications are as follows:
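The edit itself is not shown here; the usual change, assuming the registry address above and an HTTP (insecure) registry, is to extend the ExecStart line roughly like this:
ExecStart=/usr/bin/dockerd --insecure-registry 172.16.1.30:5000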
// Restart the docker service:
[root@sqm-docker01 ~]# systemctl daemon-reload
[root@sqm-docker01 ~]# systemctl restart docker.service
// Copy the configuration file directly to node02 and node03:
[root@sqm-docker01 ~]# scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
[root@sqm-docker01 ~]# scp /usr/lib/systemd/system/docker.service node03:/usr/lib/systemd/system/docker.service
After the file has been copied over, you need to reload the systemd daemon and restart the docker service on node02 and node03 as well.
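For example, assuming password-free SSH from node01 is still available, the reload and restart can be done remotely (a sketch, not the only way):
[root@node01 ~]# ssh node02 'systemctl daemon-reload && systemctl restart docker'
[root@node01 ~]# ssh node03 'systemctl daemon-reload && systemctl restart docker'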
# After deploying the private registry, it is best to test it:
Upload the apache image to the private registry on node01:
[root@node01 ~]# docker tag httpd:latest 172.16.1.30:5000/myhttpd
[root@node01 ~]# docker push 172.16.1.30:5000/myhttpd
Pull on other nodes:
[root@node02 ~]# docker pull 172.16.1.30:5000/myhttpd
[root@node03 ~]# docker pull 172.16.1.30:5000/myhttpd
(6) Configure the service in the docker swarm cluster
## Publish a task and create 2 replicas:
[root@node01 ~]# docker service create --replicas 2 --name web01 -p 80:80 172.16.1.30:5000/myhttpd:latest
--replicas: the number of replicas (copies) of the container to run; --replicas 1 means only one container is needed.
// View the service:
[root@node01 ~]# docker service ls
// Check which nodes in the cluster the service's tasks are running on:
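The command, as listed in the summary at the end, is:
[root@node01 ~]# docker service ps web01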
The scheduler takes each node's hardware configuration and current workload into account when deciding which node to place a task on, so as to balance the load.
Besides viewing the service information on the command line, you can also view it on the web page:
Publish a second service:
[root@node01 ~]# docker service create --replicas 4 --name web02 -p 80 172.16.1.30:5000/myhttpd    # the published port is randomly assigned
You can see that the tasks are still distributed evenly across the nodes.
(7) Scaling the service up and down:
The reasons for scaling are straightforward: when a node is under too much pressure, or its hardware cannot support the services it is running, you reduce the number of containers to keep things running stably; when a node's server is idle, it does no harm to assign it a few more tasks to run.
1) Scale up:
[root@node01 ~]# docker service scale web01=6
View it on the web page:
2) Scale down:
[root@node01 ~]# docker service scale web02=1
View it on the web page:
(8) Set the manager not to participate in the work:
In a cluster, the ideal state is for the manager node not to take on workloads, leaving the work to the worker nodes, just like in a company it is not the boss who does the hands-on work, right?
## Specify that the manager node does not participate in the work:
[root@node01 ~]# docker node update --availability drain node01
As you can see from the figure above, the manager is no longer taking on work, so the running containers are now placed on node02 and node03 by default.
(9) Specify on which node the replicas run:
If there is a requirement that all replicas of a published service run on the same server, how can that be implemented?
Method 1:
1) Define a label:
[root@node01 ~]# docker node update --label-add disk=max node03    ## attach the label to node03
2) Publish the service:
[root@node01 ~]# docker service create --name test --replicas 5 -p 80 --constraint 'node.labels.disk==max' 172.16.1.30:5000/myhttpd
Check whether the placement succeeded:
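A quick way to check, using the same listing command as before:
[root@node01 ~]# docker service ps test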
Method 2:
Specify the node hostname directly.
[root@node01 ~]# docker service create --replicas 5 --name testname --constraint 'node.hostname==node02' -p 80 172.16.1.30:5000/myhttpd
Update & rollback of the service:
1. Update of the service:
Update the above service (test) to version 2.0.
[root@node01 ~]# docker tag 172.16.1.30:5000/myhttpd:latest 172.16.1.30:5000/myhttpd:v2.0
[root@node01 ~]# docker push 172.16.1.30:5000/myhttpd:v2.0
[root@node01 ~]# docker service update --image 172.16.1.30:5000/myhttpd:v2.0 test
Note: when the service is upgraded to a new version, the original version is still retained.
During the update, the default behaviour is to update the tasks one at a time: once one task has finished updating, the next one starts.
2. Custom update of the service:
Update the above service to version 3.0.
[root@node01 ~]# docker tag 172.16.1.30:5000/myhttpd:v2.0 172.16.1.30:5000/myhttpd:v3.0
[root@node01 ~]# docker push 172.16.1.30:5000/myhttpd:v3.0
[root@node01 ~]# docker service update --image 172.16.1.30:5000/myhttpd:v3.0 --update-parallelism 2 --update-delay 1m test
Parameter explanation:
--update-parallelism 2: sets the number of replicas updated in parallel (at the same time).
--update-delay 1m: specifies the interval between batches of a rolling update (units: s seconds, m minutes, h hours, d days, w weeks).
3. Rollback operation of the service:
When we perform a rollback, by default the service rolls back to the previous version; rollback only switches between these two versions and cannot step back continuously through older ones.
[root@node01 ~]# docker service update --rollback test
Go to the web page to view it more intuitively:
Rollback succeeded.
## If test is rolled back again, which version does it roll back to?
[root@node01 ~]# docker service update --rollback test
You can see that it rolls back to the version it was running before the first rollback, which proves that rollback cannot step back continuously.
Summary of docker swarm cluster commands:
// Initialize the cluster: docker swarm init --advertise-addr <local IP address>
// View node information: docker node ls
// View the command to join the cluster as a worker: docker swarm join-token worker
// View the command to join the cluster as a manager: docker swarm join-token manager
// Promote a node to manager: docker node promote node2
// Demote a node to worker: docker node demote node2
// Leave the cluster: docker swarm leave
// Delete a node (it can only be deleted after it has left the cluster): docker node rm node2
// Leave the cluster forcibly (a manager must leave with force): docker swarm leave -f
// View services: docker service ls
// Check which node each task of a service is running on (placement is chosen by the scheduler): docker service ps <service name>
// Publish a task:
docker service create --replicas 2 --name test -p 80 httpd:latest
// Delete a task (service): docker service rm <service name>
Or delete them all: docker service ls -q | xargs docker service rm
// Scale a task up: docker service scale <service name>=2
// Scale down: docker service scale <service name>=1
// Set a node to work: docker node update --availability active <node name>
// Set a node not to work temporarily (pause): docker node update --availability pause <node name>
// Set a node not to participate in the work: docker node update --availability drain <node name>
// Update a service: docker service update --image 172.16.1.30:5000/my_nginx:3.0 (image name) test2 (service name)
// Custom update:
docker service update --image 172.16.1.30:5000/my_nginx:4.0 --update-parallelism 2 --update-delay 1m test2
// Roll a service back: docker service update --rollback bdqn2