How to Build a Docker Swarm Cluster Environment and Deploy Elastic Services


Many newcomers are unsure how to build a Docker Swarm cluster environment and deploy elastic services. This article walks through the process step by step; after reading it, you should be able to do it yourself.

Preparing the cluster environment

Five CentOS machines (version 7.8.2003) with Docker installed

Docker Engine 1.12+ (1.12 is the minimum requirement; this article uses 19.03.12)

Open the following ports in the firewall, or turn the firewall off:

TCP port 2377 for cluster management communication

TCP and UDP port 7946 for communication between nodes

UDP port 4789, used for overlay network traffic.
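On CentOS with firewalld, for example, the ports could be opened roughly as follows (a minimal sketch; adjust to your environment):

firewall-cmd --permanent --add-port=2377/tcp   # cluster management communication
firewall-cmd --permanent --add-port=7946/tcp   # node-to-node communication
firewall-cmd --permanent --add-port=7946/udp   # node-to-node communication
firewall-cmd --permanent --add-port=4789/udp   # overlay network traffic
firewall-cmd --reload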

The machines and roles are distributed as follows:

Role      IP               HOSTNAME   Docker version
Manager   192.168.10.101   manager1   19.03.12
Manager   192.168.10.102   manager2   19.03.12
Manager   192.168.10.103   manager3   19.03.12
Worker    192.168.10.10    worker1    19.03.12
Worker    192.168.10.11    worker2    19.03.12

The hostname of a machine can be changed with hostname <new hostname> (takes effect immediately, but is lost after a reboot).

Or with hostnamectl set-hostname <new hostname> (takes effect immediately and persists across reboots).

Or edit the hosts file with vi /etc/hosts and append the hostname to the 127.0.0.1 entry, as shown below (takes effect after a restart).

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 manager1
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Create a cluster

Run docker swarm init on any node to create a new Swarm cluster and join it; by default that node becomes a Manager node. According to our predefined roles, this command can be run on any of the machines 101 to 103.

Typically, the first management node to join the cluster becomes the Leader, and subsequent management nodes are Reachable. If the current Leader goes down, the Reachable nodes elect a new Leader.

[root@localhost ~]# docker swarm init --advertise-addr 192.168.10.101
Swarm initialized: current node (clumstpieg0qzzxt1caeazg8g) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5ob7jlej85qsygxubqypjuftiwruvew8e2cr4u3iuo4thxyrhg-3hbf2u3i1iagurdprl3n3yra1 192.168.10.101:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Join the cluster

The cluster mode built into Docker includes a public key infrastructure (PKI) system, which makes it easy to deploy containers securely. The nodes in the cluster use Transport Layer Security (TLS) to authenticate, authorize, and encrypt their communication with the other nodes in the cluster.

By default, when you create a new Swarm cluster with the docker swarm init command, the Manager node generates a new root certificate authority (CA) and key pair to secure communication with other nodes that join the cluster.

The Manager node generates two tokens for other nodes to use when they join the cluster: a Worker token and a Manager token. Each token includes a digest of the root CA certificate and a randomly generated secret. When a node joins the cluster, it uses the digest to validate the root CA certificate presented by the remote management node, and the remote management node uses the secret to confirm that the joining node is an approved node.
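If needed, the root CA currently in use can be viewed (and rotated) from a manager node, for example:

docker swarm ca            # print the root CA certificate currently in use
docker swarm ca --rotate   # generate a new root CA and rotate node certificates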

Manager

To add a Manager node to the cluster, first run the docker swarm join-token manager command on a management node to view the manager token.

docker swarm join-token manager

Then run docker swarm join with the token on another node; that node joins the Swarm cluster with the Manager role.
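On the new machine this looks roughly like the following (the token here is a placeholder; use the one printed by join-token):

docker swarm join --token <manager-token> 192.168.10.101:2377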

Worker

To add a Worker node to the cluster, run the join command shown in the output returned when the cluster was created. Alternatively, first run the docker swarm join-token worker command on a management node to view the worker token.

Then run docker swarm join with the token on another node; that node joins the Swarm cluster with the Worker role.
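Again as a sketch, on the new machine (placeholder token):

docker swarm join --token <worker-token> 192.168.10.101:2377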

View cluster information

Run docker info on any Manager node to view information about the current cluster.

View cluster nodes

Run docker node ls on any Manager node to view information about the nodes in the current cluster.

docker node ls
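For the environment above, the output looks roughly like this (illustrative only; <node-id> stands for the real node IDs, and the * marks the node the command was run on):

ID             HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
<node-id> *    manager1   Ready    Active         Leader           19.03.12
<node-id>      manager2   Ready    Active         Reachable        19.03.12
<node-id>      manager3   Ready    Active         Reachable        19.03.12
<node-id>      worker1    Ready    Active                          19.03.12
<node-id>      worker2    Ready    Active                          19.03.12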

The * marks the current node. The current environment consists of three management nodes (one Leader and two Reachable) and two worker nodes.

MANAGER STATUS column: indicates whether the node is a Manager or a Worker; if the value is empty, the node is a Worker.

Leader: the primary management node, responsible for the cluster's management and orchestration decisions.

Reachable: a management node other than the Leader; if the Leader becomes unavailable, this node is eligible to be elected as the new Leader.

Unavailable: a management node that can no longer communicate with the other management nodes. If a management node becomes unavailable, you should add a new management node to the cluster or promote a worker node to a management node.

AVAILABILITY column: indicates whether the scheduler can assign tasks to the node.

Active: the scheduler can assign tasks to this node

Pause: the scheduler will not assign new tasks to this node, but existing tasks can still run

Drain: the scheduler does not assign new tasks to this node; existing tasks on the node are shut down and rescheduled on other available nodes.
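For example, a node's availability can be changed with docker node update (worker1 is used here purely for illustration):

docker node update --availability pause worker1    # stop scheduling new tasks on worker1
docker node update --availability active worker1   # make worker1 schedulable again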

Delete a node

Manager

Before deleting a node, change the node's AVAILABILITY to Drain so that its services are migrated to other available nodes and remain available. It is best to verify that the containers have been migrated before moving on.

docker node update --availability drain <node name | node ID>

Then demote the Manager node to a Worker node.

docker node demote <node name | node ID>

Then run the following command on the node that has just been demoted to a Worker to make it leave the cluster.

docker swarm leave

Finally, on a management node, delete the node that has just left the cluster.

docker node rm <node name | node ID>

Worker

Before deleting a node, change the node's AVAILABILITY to Drain so that its services are migrated to other available nodes and remain available. It is best to verify that the containers have been migrated before moving on.

docker node update --availability drain <node name | node ID>

Then run the following command on the Worker node that is about to be deleted to make it leave the cluster.

docker swarm leave

Finally, on a management node, delete the node that has just left the cluster.

docker node rm <node name | node ID>

Service deployment

Note: any operations related to cluster management are performed on the Manager node.

Create a service

In the following example, a service called mynginx is created from the nginx image; it is randomly assigned to a worker node to run.

docker service create --replicas 1 --name mynginx -p 80:80 nginx

docker service create: create a service

--replicas: specify how many instances of the service should run

--name: the service name

-p: publish port 80 of the container on port 80 of every node in the cluster (ingress routing mesh)

View the service

You can view the running services through docker service ls.

[root@manager1 ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
hepx06k5ik5n   mynginx   replicated   1/1        nginx:latest   *:80->80/tcp

You can view the details of the service with docker service inspect <service name | service ID>.

[root@manager1 ~]# docker service inspect mynginx
[
    {
        "ID": "k0dbjg1zzy3l3g71kdwa56ect",
        "Version": {
            "Index": 127
        },
        "CreatedAt": "2020-09-16T10:05:55.627974095Z",
        "UpdatedAt": "2020-09-16T10:05:55.629507771Z",
        "Spec": {
            "Name": "mynginx",
            "Labels": {},
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "nginx:latest@sha256:c628b67d21744fce822d22fdcc0389f6bd763daac23a6b77147d0712ea7102d0",
                    "Init": false,
                    "StopGracePeriod": 10000000000,
                    "DNSConfig": {},
                    "Isolation": "default"
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "Delay": 5000000000,
                    "MaxAttempts": 0
                },
                "Placement": {
                    "Platforms": [
                        { "Architecture": "amd64", "OS": "linux" },
                        { "OS": "linux" },
                        { "OS": "linux" },
                        { "Architecture": "arm64", "OS": "linux" },
                        { "Architecture": "386", "OS": "linux" },
                        { "Architecture": "mips64le", "OS": "linux" },
                        { "Architecture": "ppc64le", "OS": "linux" },
                        { "Architecture": "s390x", "OS": "linux" }
                    ]
                },
                "ForceUpdate": 0,
                "Runtime": "container"
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "RollbackConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    { "Protocol": "tcp", "TargetPort": 80, "PublishedPort": 80, "PublishMode": "ingress" }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    { "Protocol": "tcp", "TargetPort": 80, "PublishedPort": 80, "PublishMode": "ingress" }
                ]
            },
            "Ports": [
                { "Protocol": "tcp", "TargetPort": 80, "PublishedPort": 80, "PublishMode": "ingress" }
            ],
            "VirtualIPs": [
                { "NetworkID": "st2xiy7pjzap093wz4w4u6nbs", "Addr": "10.0.0.15/24" }
            ]
        }
    }
]

You can use docker service ps <service name | service ID> to see which nodes the service's tasks are running on.

Run docker ps on the corresponding task node to view the container that backs the service.
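For the mynginx service created above, that boils down to roughly:

docker service ps mynginx   # shows which node each task was scheduled on
docker ps                   # run on that node to see the container itself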

Invoke the service

Next, let's test whether the service can be accessed properly; it should be reachable through the IP address of any node in the cluster.
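A quick check from any machine, using the node IPs from the table above (a sketch; any node IP should work thanks to the routing mesh):

curl http://192.168.10.101
curl http://192.168.10.10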

Test results: all 5 machines can access the service normally.

Elastic services

After a service is deployed to the cluster, the number of containers in the service can be elastically scaled up or down with a single command. A container running as part of a service is called a task.

With docker service scale <service name | service ID>=n, you can scale the number of tasks run by the service to n.

Scaling can also be done with docker service update --replicas n <service name | service ID>.

Scale the tasks run by the mynginx service up to 5:

[root@manager1 ~]# docker service scale mynginx=5
mynginx scaled to 5
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged

Use docker service ps <service name | service ID> to see which nodes the service is running on.

Now let's scale the service back down with the following command:

[root@manager1 ~]# docker service update --replicas 3 mynginx
mynginx
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged

Use docker service ps <service name | service ID> again to see which nodes the service is now running on.

In Swarm cluster mode, this is elastic service in the true sense: a single command dynamically scales a service up or down, which is simple, convenient, and powerful.

Delete a service

You can delete a service with docker service rm <service name | service ID>.

[root@manager1 ~]# docker service rm mynginx
mynginx
[root@manager1 ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS

Rolling update and rollback

The following example demonstrates how to roll a Redis service up to a later version and then roll it back to the previous state.

First, create a Redis service (version 5) with five replicas using the following command:

# create 5 replicas; update 2 at a time with a 10s delay between batches; continue if an updated task fails;
# monitor a rollback for 20s; roll back 2 at a time; rollback failure ratio 0.2 (20%)
docker service create \
  --replicas 5 \
  --name redis \
  --update-delay 10s \
  --update-parallelism 2 \
  --update-failure-action continue \
  --rollback-monitor 20s \
  --rollback-parallelism 2 \
  --rollback-max-failure-ratio 0.2 \
  redis:5

--update-delay: the delay between rolling-update batches

--update-parallelism: the number of replicas updated in parallel (default 1)

--update-failure-action: the action to take when a container fails to start after an update

--rollback-monitor: the monitoring duration for a rollback

--rollback-parallelism: the number of replicas rolled back in parallel

--rollback-max-failure-ratio: the task-failure ratio for a rollback; 0.2 means 20%

Then use the following command to implement the rolling update of the service.

docker service update --image redis:6 redis

A service can only be rolled back to the state before the previous operation; it cannot be rolled back repeatedly to an arbitrary earlier state.

docker service update --rollback redis
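To see the effect of an update or rollback, one quick check is to look at which image each task is running, for example:

docker service ps redis   # the IMAGE column shows whether tasks run redis:5 or redis:6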

Common commands

docker swarm commands:

docker swarm init                      initialize a cluster
docker swarm join-token worker         view the worker node join token
docker swarm join-token manager        view the manager node join token
docker swarm join                      join a cluster

docker node commands:

docker node ls                                 view all nodes in the cluster
docker node ps                                 view the tasks of the current node
docker node rm <node name | node ID>           delete a node (-f to force)
docker node inspect <node name | node ID>      view node details
docker node demote <node name | node ID>       demote a node from manager to worker
docker node promote <node name | node ID>      promote a node from worker to manager
docker node update <node name | node ID>       update a node

docker service commands:

docker service create                                 create a service
docker service ls                                     view all services
docker service inspect <service name | service ID>    view service details
docker service logs <service name | service ID>       view service logs
docker service rm <service name | service ID>         delete a service (-f to force)
docker service scale <service name | service ID>=n    set the number of tasks for a service
docker service update <service name | service ID>     update a service

After reading the above, have you mastered how to build a Docker Swarm cluster environment and deploy elastic services? If you want to learn more, you are welcome to follow the industry information channel. Thank you for reading!
