
Installation and configuration of Docker swarm cluster


I. A brief introduction to Docker swarm

Swarm is a platform launched by Docker to manage Docker clusters. It is written almost entirely in Go, and the code is open source at https://github.com/docker/swarm. Swarm turns a group of Docker hosts into a single virtual host. It exposes the standard Docker API as its front end, so all kinds of Docker clients (Compose, docker-py, and so on) can talk to Swarm directly, and even Docker itself integrates with Swarm easily, which makes it straightforward to migrate a system built on a single node onto Swarm. Swarm also has built-in support for Docker network plug-ins, so users can easily deploy container cluster services across hosts.

Since Docker version 1.12.0, Docker Swarm has been included in the Docker engine (docker swarm) and ships with a built-in service discovery tool, so we no longer need to set up Etcd or Consul for service discovery as we did before.
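A quick way to confirm that the local engine is new enough and whether swarm mode is already active (a small check added here for convenience; the exact output wording may vary by Docker version):

[root@node01 ~]# docker version --format '{{.Server.Version}}'
// should print 1.12.0 or later
[root@node01 ~]# docker info | grep -i swarm
// prints "Swarm: inactive" before initialization and "Swarm: active" afterwards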

There are three roles in Docker swarm:

Manager node: responsible for container orchestration and cluster management, keeping the swarm in its desired state. A swarm can have multiple manager nodes, and they automatically negotiate and elect a leader to perform orchestration tasks; conversely, a swarm cannot be without a manager node.

Worker node: accepts and executes the tasks assigned by manager nodes. By default a manager node is also a worker node, but a node can be configured as manager-only so that it is responsible only for orchestration and management.

Service: used to define the tasks executed on workers.

Note: in a Docker Swarm cluster, every docker server can take the manager role, but they cannot all be workers; in other words, the cluster cannot be without a leader. In addition, the hostnames of the servers participating in the cluster must not conflict.

II. Environment preparation

Note:

Ensure time synchronization; turn off the firewall and SELinux (experimental environment); change the hostnames; write the hosts file so that the hostnames resolve.

III. Initialize the Swarm cluster

[root@node01 ~]# tail -3 /etc/hosts
192.168.1.1 node01
192.168.1.2 node02
192.168.1.3 node03
// all three nodes need this hosts configuration so that the hostnames resolve
[root@node01 ~]# docker swarm init --advertise-addr 192.168.1.1
// --advertise-addr: specify the address used to communicate with the other nodes

The output of the command is shown in the figure:

Command ① in the figure: the command for joining the swarm cluster as a worker

Command ②: the command for joining the swarm cluster as a manager

The figure above shows that the initialization succeeded! Note: the join token shown by --token is valid for 24 hours.
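If the token has been lost or needs to be replaced, it can be printed again or rotated at any time on a manager (an extra note for reference; the rotate step is optional):

[root@node01 ~]# docker swarm join-token worker
// print the current worker join command again
[root@node01 ~]# docker swarm join-token --rotate worker
// invalidate the old worker token and generate a new one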

IV. Configure node02 and node03 to join and leave the swarm cluster

# the operation on node02 is as follows #
[root@node02 ~]# docker swarm join --token SWMTKN-1-4pc1gjwjrp9h5dny52j58m0lclq88ngovis0w3rinjd05lklu5-ay18vjhwu7w8gsqvct84fv8ic 192.168.1.1:2377
# the operation on node03 is as follows #
[root@node03 ~]# docker swarm join --token SWMTKN-1-4pc1gjwjrp9h5dny52j58m0lclq88ngovis0w3rinjd05lklu5-ay18vjhwu7w8gsqvct84fv8ic 192.168.1.1:2377
// node02 and node03 join as workers by default
# the operation on node01 is as follows #
[root@node01 ~]# docker node ls    // view node details (only a manager can do this)
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
mc3xn4az2r6set3al79nqss7x * node01    Ready   Active        Leader          18.09.0
olxd9qi9vs5dzes9iicl170ob   node02    Ready   Active                        18.09.0
i1uee68sxt2puzd5dx3qnm9ck   node03    Ready   Active                        18.09.0
// node01, node02 and node03 are all in the Active state
# the operation on node02 is as follows #
[root@node02 ~]# docker swarm leave
# the operation on node03 is as follows #
[root@node03 ~]# docker swarm leave
// node02 and node03 request to leave the cluster
# the operation on node01 is as follows #
[root@node01 ~]# docker node ls
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
mc3xn4az2r6set3al79nqss7x * node01    Ready   Active        Leader          18.09.0
olxd9qi9vs5dzes9iicl170ob   node02    Down    Active                        18.09.0
i1uee68sxt2puzd5dx3qnm9ck   node03    Down    Active                        18.09.0
// the status of node02 and node03 is now Down
[root@node01 ~]# docker node rm node02
[root@node01 ~]# docker node rm node03
// node01 removes node02 and node03 from the cluster
// if a worker node has not left on its own, a manager node can add "-f" to force-remove it

The commands above add nodes to and remove nodes from the cluster.

[root@node01 ~]# docker swarm leave -f
// when the manager node exits the swarm cluster, the swarm cluster is disbanded

The nodes above joined with the worker identity. If you want a node to join the cluster as a manager, use the following commands to query the join command:

[root@node01 ~]# docker swarm join-token manager    // query the command for joining the cluster as a manager
[root@node01 ~]# docker swarm join-token worker     // query the command for joining the cluster as a worker

As shown in the figure:

# the operation on node02 is as follows #
[root@node02 ~]# docker swarm join --token SWMTKN-1-2c0gcpxihwklx466296l5jp6od31pshm04q990n3ssncby3h0c-78rnxee2e990axj0q7td74zod 192.168.1.1:2377
# the operation on node03 is as follows #
[root@node03 ~]# docker swarm join --token SWMTKN-1-2c0gcpxihwklx466296l5jp6od31pshm04q990n3ssncby3h0c-78rnxee2e990axj0q7td74zod 192.168.1.1:2377
// node02 and node03 join the cluster as managers
# the operation on node01 is as follows #
[root@node01 ~]# docker node ls    // view node details
ID                          HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
exr8uoww0eih53iujqz5cbv6q * node01    Ready   Active        Leader          18.09.0
r35f48huyw5hvnkuzatrftj1r   node02    Ready   Active        Reachable       18.09.0
gsg1irl1bywgdsmfawi9rna7p   node03    Ready   Active        Reachable       18.09.0
// the MANAGER STATUS column shows that node02 and node03 are now managers (Reachable)

Although you can choose the manager or worker identity when joining the cluster, you can also demote and promote nodes afterwards with the following commands:

[root@node01 ~]# docker node demote node02
[root@node01 ~]# docker node demote node03
// demote node02 and node03 to worker
[root@node01 ~]# docker node promote node02
[root@node01 ~]# docker node promote node03
// promote node02 and node03 back to manager (verify this yourself)

V. Deploy the graphical UI

The graphical UI is deployed on node01!

[root@node01 ~]# docker run -d -p 8080:8080 -e HOST=172.16.0.10 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock --name visualizer dockersamples/visualizer
// -e HOST: specify the host where the container runs

Use a browser to access:

If the browser can be accessed normally, the deployment of the graphical UI interface is complete!

VI. Service configuration of the docker swarm cluster

On node01, publish a task that runs six containers (this must be done on a host with the manager role) with the following command:

[root@node01 ~]# docker service create --replicas 6 --name web -p 80:80 nginx
// --replicas: number of replicas; a replica can be understood as a container

After the container is running, you can log in to the web page to view it, as shown in the figure:

Note: if the corresponding image does not exist on the other two node servers, it is downloaded automatically from Docker Hub by default!
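If you want to avoid that first download delay, the image can be pulled on the workers ahead of time (an optional step, not required by the walkthrough):

[root@node02 ~]# docker pull nginx
[root@node03 ~]# docker pull nginx
// pulling ahead of time means the service's first deployment does not wait for the download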

[root@node01 ~]# docker service ls    // view the created service
ID            NAME  MODE        REPLICAS  IMAGE         PORTS
nbfzxltrcbsk  web   replicated  6/6       nginx:latest  *:80->80/tcp
[root@node01 ~]# docker service ps web    // check which nodes the created service's containers run on
ID            NAME   IMAGE         NODE    DESIRED STATE  CURRENT STATE           ERROR  PORTS
v7pmu1waa2ua  web.1  nginx:latest  node01  Running        Running 6 minutes ago
l112ggmp7lxn  web.2  nginx:latest  node02  Running        Running 5 minutes ago
prw6hyizltmx  web.3  nginx:latest  node03  Running        Running 5 minutes ago
vg38mso99cm1  web.4  nginx:latest  node01  Running        Running 6 minutes ago
v1mb0mvtz55m  web.5  nginx:latest  node02  Running        Running 5 minutes ago
80zq8f8252bj  web.6  nginx:latest  node03  Running        Running 5 minutes ago

If node02 and node03 go down now, the service will not die with those nodes; its containers will automatically be rescheduled onto the nodes that are still running normally.

Simulate node02 downtime. The web page is as follows:

After node02 is recovered, the web page is as follows:

Even after node02 returns to normal, the service is not reassigned to node02.

The conclusion: if a node fails, its service tasks automatically move to the available nodes; conversely, as long as a node has not failed, the service does not change nodes by default!
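If you do want the tasks to spread back over the recovered node, one option is to force a rolling redeploy of the service; this is an extra step that is not part of the original walkthrough:

[root@node01 ~]# docker service update --force web
// forces the web service's tasks to be redeployed, letting the scheduler spread them across all currently available nodes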

VII. Expansion and contraction of the service

Expansion: simply increase the number of service replicas.

Contraction: simply reduce the number of service replicas.

Implement contraction and expansion against the environment above.

(1) Service expansion
[root@node01 ~]# docker service scale web=8
// there were originally 6 replicas; this increases them to 8

The web page is as follows:

Which node each service task is allocated to is decided by docker swarm's own scheduling algorithm.
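If you need to influence placement instead of relying on the default spread, a placement constraint can be supplied when the service is created (the same command appears again in the command summary at the end of this article):

[root@node01 ~]# docker service create --constraint node.hostname==node03 --name test nginx
// pins the tasks of the test service to node03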

(2) Service contraction
[root@node01 ~]# docker service scale web=4
// there were 8 replicas; this reduces them to 4

The web page is as follows:

(3) set a node not to run service

In the environment above, all three nodes are managers. Even with one manager and two workers, the manager also does work by default. Demote node02 and node03 to worker by executing the following commands:

[root@node01 ~]# docker node demote node02
[root@node01 ~]# docker node demote node03

As shown in the figure:

You can set a node not to run service, as shown below:

[root@node01 ~]# docker node update --availability drain node01
// set node01 not to run containers; containers that are already running are not stopped
// --availability: three values can follow this option: active (works normally), pause (temporarily does not accept new tasks), drain (does not accept work at all)

The web page is as follows:

[root@node01 ~]# docker node update --availability drain node02
// node02 no longer accepts work either, but containers that are already running are not stopped

As shown in the figure:

The conclusion: manager nodes are not the only ones that can be set not to work; worker nodes can be drained as well!
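To put a drained node back to work, set its availability back to active (added here for completeness; existing tasks are not moved back automatically):

[root@node01 ~]# docker node update --availability active node01
// node01 will accept new tasks again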

VIII. Docker swarm network

Docker swarm clusters generate two different types of traffic:

Control and management plane: includes swarm message management, such as requests to join or leave the swarm; this type of traffic is always encrypted (involves hostname, IP address, subnet, gateway, etc.).

Application data plane: includes communication between containers and between containers and clients (involves firewall, port mapping, VIP, etc.).

There are three important concepts in swarm:

Overlay networks: manage communication among the Docker daemons in the swarm. You can attach services to one or more existing overlay networks to enable service-to-service communication.

ingress network: a special overlay network used for load balancing among the service's nodes. Any swarm node that receives a request on a published port hands it to a module called IPVS. IPVS tracks all the IP addresses participating in the service, selects one of them, and sends the request to it over the ingress network.

docker_gwbridge: a bridge network that connects the overlay networks (including the ingress network) to the physical network of an individual Docker daemon. By default, every container that runs a service task is connected to the docker_gwbridge network of its local Docker daemon host.

The docker_gwbridge network is created automatically when you initialize or join a swarm. In most cases users do not need to customize its configuration, but Docker allows it to be customized.

View the default network on node01, as shown in the figure:

Note the SCOPE column in the figure and pay attention to each network's scope of effect!
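Since the figure is not reproduced here, the same information can be pulled up on the command line (a quick sketch; the exact IDs will differ on your hosts):

[root@node01 ~]# docker network ls
// the ingress overlay network has swarm scope, while docker_gwbridge and the other default networks have local scope
[root@node01 ~]# docker network inspect ingress
// shows the subnet, the attached containers and the swarm scope of the ingress network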

In addition to the two networks created by default by the Swarm cluster, we can also create a custom overlay network; containers connected to this network can communicate with each other. Note, however, that the custom network is visible only on the manager host that created it!

Create a custom overlay network

[root@node01 ~]# docker network create -d overlay --subnet 200.0.0.0/24 --gateway 200.0.0.1 --attachable my_net
// when creating an overlay network for a Docker swarm cluster, the --attachable option must be specified, otherwise containers on the other nodes cannot use this network at run time

A custom overlay network can only be created on a manager, and it is displayed only on the manager nodes; the worker nodes cannot see it, but they can still use it.

As follows:

[root@node02 ~]# docker run -itd --name test01 --ip 200.0.0.10 --network my_net busybox
// node02 creates a container using the custom overlay network
[root@node03 ~]# docker run -itd --name test02 --network my_net --ip 200.0.0.20 busybox
// node03 creates a container using the custom overlay network

Test access:

Now that node02 and node03 have used this custom network, it can also be found on node02 and node03!

The overlay network created on top of docker swarm also has the characteristics of a custom cross-host network: containers can communicate with each other by hostname.
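A quick connectivity check between the two containers created above (a small sketch; the ping output is omitted here):

[root@node02 ~]# docker exec -it test01 ping -c 3 test02
// resolves test02 by container name over my_net and sends three ICMP requests
[root@node02 ~]# docker exec -it test01 ping -c 3 200.0.0.20
// the fixed IP assigned to test02 should also be reachable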

IX. Build a private registry repository

Building a private registry repository makes it easy for the other nodes to download images. To build a private registry, refer to "Docker builds a private repository".

[root@node01 ~]# docker run -itd --name registry -p 5000:5000 -v /registry:/var/lib/registry --restart=always registry:2
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service
// change the ExecStart line to: ExecStart=/usr/bin/dockerd --insecure-registry 192.168.1.1:5000
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
[root@node01 ~]# docker ps -a -q | xargs docker start
// the registry container just created was not set to start automatically, so after restarting the docker service it must be started manually
[root@node01 ~]# ssh-keygen -t rsa
[root@node01 ~]# ssh-copy-id root@node02
[root@node01 ~]# ssh-copy-id root@node03
// set up password-free login
[root@node01 ~]# scp /usr/lib/systemd/system/docker.service root@node02:/usr/lib/systemd/system/docker.service
[root@node01 ~]# scp /usr/lib/systemd/system/docker.service root@node03:/usr/lib/systemd/system/docker.service
// copy the docker configuration file to node02 and node03, because the docker configuration files are the same on all nodes
[root@node02 ~]# systemctl daemon-reload
[root@node02 ~]# systemctl restart docker
[root@node03 ~]# systemctl daemon-reload
[root@node03 ~]# systemctl restart docker
// restart the docker service on node02 and node03
[root@node01 ~]# docker pull httpd
[root@node01 ~]# docker tag httpd:latest 192.168.1.1:5000/httpd:latest
[root@node01 ~]# docker push 192.168.1.1:5000/httpd:latest
// upload the httpd image to the private registry

X. Upgrade and rollback of the service version

(1) Prepare the environment
[root@node01 ~]# mkdir version{1,2,3}
[root@node01 ~]# cd version1
[root@node01 version1]# echo "version1" >> index.html
[root@node01 version1]# echo -e "FROM httpd:latest\nADD index.html /usr/local/apache2/htdocs/index.html" > Dockerfile
[root@node01 version1]# docker build -t 192.168.1.1:5000/httpd:v1 .
// the version1 directory simulates generating version v1
[root@node01 version1]# cp Dockerfile ../version2
[root@node01 version1]# cd ../version2
[root@node01 version2]# echo "version2" >> index.html
[root@node01 version2]# docker build -t 192.168.1.1:5000/httpd:v2 .
// the version2 directory simulates generating version v2
[root@node01 version2]# cp Dockerfile ../version3
[root@node01 version2]# cd ../version3
[root@node01 version3]# echo "version3" >> index.html
[root@node01 version3]# docker build -t 192.168.1.1:5000/httpd:v3 .
// the version3 directory simulates generating version v3
// make sure the home pages differ between versions
[root@node01 ~]# docker push 192.168.1.1:5000/httpd:v1
[root@node01 ~]# docker push 192.168.1.1:5000/httpd:v2
[root@node01 ~]# docker push 192.168.1.1:5000/httpd:v3
// upload the generated images to the private registry
[root@node01 ~]# docker service create --replicas 3 --name httpd 192.168.1.1:5000/httpd:v1
// create three service replicas based on 192.168.1.1:5000/httpd:v1

Browser access test:

The three service replicas are served in round-robin fashion; this is verified by customizing the home pages on node02 and node03!
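A simple way to watch the round robin once the home pages differ (a sketch; the per-container pages are customized in the next two steps):

[root@node01 ~]# for i in $(seq 1 6); do curl -s 127.0.0.1; done
// consecutive requests to the published port are distributed across the three replicas by the ingress load balancer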

Node02:

[root@node02 ~]# docker ps
CONTAINER ID  IMAGE                      COMMAND             CREATED        STATUS        PORTS   NAMES
b6c1d88fcadf  192.168.1.1:5000/httpd:v1  "httpd-foreground"  4 minutes ago  Up 4 minutes  80/tcp  httpd.1.qubzhexjprpt7s89ku91mlle0
[root@node02 ~]# docker exec -it b6c1d88fcadf /bin/bash
root@b6c1d88fcadf:/usr/local/apache2# echo "node02" >> htdocs/index.html

Node03:

[root@node03 ~]# docker ps
CONTAINER ID  IMAGE                      COMMAND             CREATED        STATUS        PORTS   NAMES
ee19c342188c  192.168.1.1:5000/httpd:v1  "httpd-foreground"  5 minutes ago  Up 5 minutes  80/tcp  httpd.3.9ru7zsokixz29iw99qbdp15gn
[root@node03 ~]# docker exec -it ee19c342188c /bin/bash
root@ee19c342188c:/usr/local/apache2# echo "node03" >> htdocs/index.html

Access Test:

[root@node01 ~]# curl 127.0.0.1
version1
node03
[root@node01 ~]# curl 127.0.0.1
version1
[root@node01 ~]# curl 127.0.0.1
version1
node02
// the expected round-robin effect has been achieved

(2) Version upgrade
[root@node01 ~]# docker service update --image 192.168.1.1:5000/httpd:v2 httpd
// update the container image to version 2

The browser tests:

By default, swarm updates only one replica at a time, with no wait time between two replicas. This behavior can be changed with the options shown below.

[root@node01 ~]# docker service update --replicas 6 --image 192.168.1.1:5000/httpd:v3 --update-parallelism 2 --update-delay 1m httpd
// --update-parallelism: set the number of replicas updated at a time
// --update-delay: the interval between update batches
// --replicas 6: also add 3 more replicas during the upgrade

You can see the effect from the process of updating!

The browser confirms that the version was updated successfully:

(3) Version rollback
[root@node01 ~]# docker service rollback httpd
// roll back to the previous version

Browser access test:

Note: a rollback operation returns to the version in use before the previous operation; it cannot be repeated to keep rolling further back.
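If you need to return to a specific older version rather than just the previous one, an explicit update works; this is an extra note, not part of the original walkthrough:

[root@node01 ~]# docker service update --image 192.168.1.1:5000/httpd:v1 httpd
// points the service directly back at the v1 image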

XI. Common Docker Swarm cluster commands

[root@node02 ~]# docker swarm leave
// whichever node wants to exit the swarm cluster executes this command on itself; the node leaves the cluster voluntarily (equivalent to resigning)
[root@node01 ~]# docker node rm <node name>
// a manager actively deletes a node (equivalent to dismissal)
[root@node01 ~]# docker node promote <node name>    // promote a node
[root@node01 ~]# docker node demote <node name>     // demote a node
[root@node01 ~]# docker node ls
// view swarm cluster information (only possible on hosts with the manager role)
[root@node01 ~]# docker node update --availability drain <node name>
// set a node not to participate in work
[root@node01 ~]# docker swarm join-token worker
// view the token for joining the swarm cluster (worker or manager)
[root@node01 ~]# docker service scale web=4
// expand or shrink the number of service replicas in the swarm cluster (relative to the current number): more than the current number expands, fewer shrinks
[root@node01 ~]# docker service ls    // view the created services
[root@node01 ~]# docker service ps <service name>    // view which nodes the created service's containers run on
[root@node01 ~]# docker service create --replicas 6 --name web -p 80:80 nginx
// specify the number of service replicas to run
[root@node01 ~]# docker service create --constraint node.hostname==node03 --name test nginx
// create a container named test on the specified node
[root@node01 ~]# docker node update --label-add mem=max node02
// label the node02 host with the key-value pair "mem=max"; the content on both sides of the equal sign is customizable
[root@node01 ~]# docker service create --name test1 --replicas 3 --constraint 'node.labels.mem==max' nginx
// run three replicas of a service named test1, based on the nginx image, on the host labeled "mem=max"
[root@node01 ~]# docker node inspect node02
// label-related information is shown under Spec{}

To summarize: the hostnames of hosts participating in the cluster must not conflict, and the hosts must be able to resolve each other's hostnames. All nodes in the cluster can be managers, but they cannot all be workers. When you specify an image to run, if a node in the cluster does not have the image locally, it automatically downloads the corresponding image. While the cluster is working normally, if a docker server that is running containers goes down, all of its containers are moved to other nodes that are running normally; even after the failed server returns to normal operation, it no longer takes back the containers it ran before.
