
Docker Swarm Cluster Configuration in Practice (1)


Foreword:

Like Docker Compose, Docker Swarm is an official Docker container-orchestration project, but with one difference: Docker Compose is a tool for creating multiple containers on a single server or host, while Docker Swarm can create container cluster services across multiple servers or hosts. That makes Docker Swarm the better fit for microservice deployment.

Since Docker version 1.12.0, Docker Swarm has been included in the Docker engine (docker swarm) with a built-in service discovery tool, so we no longer need to deploy Etcd or Consul for service discovery as we did before.
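Since swarm mode ships inside the engine, both prerequisites can be checked from the command line before doing anything else; a small sketch:

[root@docker01 ~]# docker version --format '{{.Server.Version}}'      # must be 1.12.0 or later
[root@docker01 ~]# docker info --format '{{.Swarm.LocalNodeState}}'   # prints "inactive" until "docker swarm init" is run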

A Docker Swarm cluster involves three roles: the manager, the worker (which does the actual work), and the service.

These three roles map neatly onto a company's org chart: there are leaders (managers), there are brick movers (workers), and the tasks the leaders assign to the brick movers are the services in Docker Swarm.

It is important to note that in a Docker Swarm cluster every docker server can hold the manager role, but they cannot all be workers; the cluster cannot be leaderless. In addition, the hostnames of all machines joining the cluster must not conflict.

Here is a case study to show the configuration of the Docker Swarm cluster.

Blog outline:

I. Environment preparation

II. Configure the host docker01

III. Configure docker02 and docker03 to join the Swarm cluster

IV. Build a private registry

V. Deploy the web UI of the docker Swarm cluster on docker01

VI. Configure services in the docker Swarm cluster

VII. Scale docker containers out and in

VIII. Additional commands commonly used in docker Swarm clusters

IX. Summary of docker Swarm

I. Environment preparation

"in the above hosts, the role of the host docker01 is specified as manager, and the role of the other hosts is worker."

II. Configure the host docker01

The following operations initialize a Swarm cluster and assign docker01 the manager role.

# Since some configuration files must be copied between the three hosts, set up password-free SSH login first.
[root@docker01 ~]# ssh-keygen                       # generate a key pair on docker01; press Enter at every prompt
[root@docker01 ~]# tail -3 /etc/hosts               # the three hosts must resolve each other's names (the Swarm cluster also requires this)
192.168.20.6 docker01
192.168.20.7 docker02
192.168.20.8 docker03
[root@docker01 ~]# ssh-copy-id docker02             # send the public key to docker02; enter docker02's root password when prompted
[root@docker01 ~]# ssh-copy-id docker03             # send the public key to docker03; enter docker03's root password when prompted
[root@docker01 ~]# scp /etc/hosts docker02:/etc/    # send the hosts file to docker02
[root@docker01 ~]# scp /etc/hosts docker03:/etc/    # and send the hosts file to docker03
[root@docker01 ~]# docker swarm init --advertise-addr 192.168.20.6    # initialize a cluster and make this host the manager

When the init command completes, it returns a message confirming that the cluster was created and prints the exact command another node must run to join it; simply copy that command and execute it on every host that should join the cluster.

The prompt returned is as follows:
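The original screenshot is not reproduced here; the output of docker swarm init generally takes the following shape, sketched with the join token that appears in the join commands below (the node ID is a placeholder):

Swarm initialized: current node (xxxxxxxxxxxx) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5ofgk6fh2vey2k7qwsk4gb9yohkxja6hy8les7plecgih2xiw1-3vpemis38suwyxg3efryv5nyu 192.168.20.6:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.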

Now that the cluster has issued the join command, configure the docker servers that need to join.

III. Configure docker02 and docker03 to join the Swarm cluster

# Execute the following command on docker02:
[root@docker02 ~]# docker swarm join --token SWMTKN-1-5ofgk6fh2vey2k7qwsk4gb9yohkxja6hy8les7plecgih2xiw1-3vpemis38suwyxg3efryv5nyu 192.168.20.6:2377
# docker03 executes the same command:
[root@docker03 ~]# docker swarm join --token SWMTKN-1-5ofgk6fh2vey2k7qwsk4gb9yohkxja6hy8les7plecgih2xiw1-3vpemis38suwyxg3efryv5nyu 192.168.20.6:2377
# Optionally, promote docker02 from worker to manager:
[root@docker01 ~]# docker node promote docker02

At this point, docker02 and docker03 have joined the cluster as workers.
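To confirm, list the nodes from the manager; docker02 and docker03 should show STATUS "Ready" with an empty MANAGER STATUS column, which marks the worker role (a promoted node would show "Reachable" instead):

[root@docker01 ~]# docker node ls    # available only on a manager-role host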

If docker02 or docker03 wants to leave this cluster, the following configuration is required (take docker03 as an example):

# Remove docker03 from the cluster:
[root@docker03 ~]# docker swarm leave          # execute this command on docker03
[root@docker01 ~]# docker node rm docker03     # then remove the docker03 node record on a manager-role server
[root@docker01 ~]# docker swarm leave -f       # when the last manager leaves the cluster, the "-f" option is required; after that the cluster no longer exists

IV. Build a private registry

A private registry is not required for a docker Swarm cluster to operate normally, but most companies run their own private registry in production, so we simulate one here.

[root@docker01 ~]# docker run -d --name registry --restart always -p 5000:5000 registry
# run a registry container
[root@docker01 ~]# vim /usr/lib/systemd/system/docker.service
# modify the docker unit file so it trusts the private registry, changing the ExecStart line to:
ExecStart=/usr/bin/dockerd -H unix:// --insecure-registry 192.168.20.6:5000
# save and exit after editing, then:
[root@docker01 ~]# systemctl daemon-reload     # reload the unit file
[root@docker01 ~]# systemctl restart docker    # restart the docker service
# docker02 and docker03 must also point at the private registry, so copy the modified unit file to them:
[root@docker01 ~]# scp /usr/lib/systemd/system/docker.service docker02:/usr/lib/systemd/system/
[root@docker01 ~]# scp /usr/lib/systemd/system/docker.service docker03:/usr/lib/systemd/system/
# after copying, reload and restart docker on both docker02 and docker03:
[root@docker02 ~]# systemctl daemon-reload
[root@docker02 ~]# systemctl restart docker
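As an aside: instead of editing the systemd unit file, current Docker releases can declare the insecure registry in /etc/docker/daemon.json. A sketch (create the file if it does not exist, and do not combine it with the ExecStart flag, or docker will refuse to start over the duplicate option):

[root@docker01 ~]# cat /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.20.6:5000"]
}
[root@docker01 ~]# systemctl restart docker    # restart to apply the change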

Once the private registry is up, it is best to test that it works, as follows:

# On docker01, upload the httpd image to the private registry:
[root@docker01 ~]# docker tag httpd:latest 192.168.20.6:5000/lvjianzhao:latest
[root@docker01 ~]# docker push 192.168.20.6:5000/lvjianzhao:latest
# Then download it on docker02; if the pull succeeds, the private registry is working:
[root@docker02 ~]# docker pull 192.168.20.6:5000/lvjianzhao:latest
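To double-check from the registry side, the registry v2 HTTP API can list the repositories it holds:

[root@docker01 ~]# curl http://192.168.20.6:5000/v2/_catalog
{"repositories":["lvjianzhao"]}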

The registry built above has no data persistence. If you need a private registry with persistent storage, refer to the blog post: Docker's Registry private registry + Harbor private registry.

V. Deploy the web UI of the docker Swarm cluster on docker01

[root@docker01 ~]# docker run -d -p 8000:8080 -e HOST=192.168.20.6 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock --name visualizer dockersamples/visualizer
# the image name was garbled in the source; dockersamples/visualizer, the standard Swarm visualizer, is assumed here
# after the command completes, clients can reach the UI on port 8000 and see the node information of the cluster
# a failed node is detected and shown immediately

When you access port 8000 on docker01, you see the following interface (it is view-only and cannot be used to change the configuration):
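A quick reachability check can be done from any machine that can reach docker01; a sketch:

[root@docker01 ~]# curl -I http://192.168.20.6:8000    # an HTTP 200 response means the visualizer is serving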

At this point, the basic configuration of the docker Swarm cluster is complete; next we show what the cluster can do, namely the stage of configuring its services.

VI. Configure services in the docker Swarm cluster

1. On docker01 (this must be done on a manager-role host), publish a service that runs six containers from the httpd image uploaded during the registry test. The command is as follows:

[root@docker01 ~]# docker service create --replicas 6 --name lvjianzhao -p 80 192.168.20.6:5000/lvjianzhao:latest
# the "--replicas" option specifies the number of container replicas to run
# with "-p 80" the swarm publishes container port 80 on an automatically assigned port (30000 and up)

After the six container replicas are running, the cluster's web UI shows the following:

Note: docker03 never pulled the image manually, yet it also runs the httpd service. The conclusion: if a docker host does not have the specified image locally, it downloads it automatically.

As you can see, after the above configuration each of the three servers in the cluster runs two containers based on the httpd image, six in total:
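The same distribution can be read from the command line: docker service ps lists every replica together with the node it runs on.

[root@docker01 ~]# docker service ps lvjianzhao    # each of docker01/02/03 should carry two replicas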

[root@docker01 ~]# docker service ls    # check the status of the service
ID             NAME         MODE         REPLICAS   IMAGE                                 PORTS
13zjbf5s02f8   lvjianzhao   replicated   6/6        192.168.20.6:5000/lvjianzhao:latest   *:30000->80/tcp

VII. Scale docker containers out and in

What are scaling out and scaling in? Scaling out means adding more identical containers when the existing ones cannot bear the current load; scaling in means removing some identical containers when a large share of the container resources sits idle.

1. The following scales the six httpd containers created above out and then back in:

1) Scaling out:

[root@docker01 ~]# docker service scale lvjianzhao=9    # scale the running httpd containers out to 9

After scaling out, the web UI shows the following:

2) Scaling in:

[root@docker01 ~]# docker service scale lvjianzhao=3    # reduce the nine httpd containers to three

After scaling in, the web UI shows the following:

2. Prevent a docker server from running containers

In the configuration above, a requested number of containers is distributed round-robin across all docker hosts in the cluster until the target count is reached. What if you do not want the manager docker01 to run containers (the company's leaders do not go to the front line to move bricks)? Make the following configuration:

[root@docker01 ~]# docker node update --availability drain docker01
# set the host docker01 not to run containers; containers already running on it are not stopped
# the "--availability" option accepts three values:
# "active": the node accepts tasks; "pause": it temporarily accepts no new tasks; "drain": it permanently accepts no tasks

VIII. Additional commands commonly used in docker Swarm clusters

[root@docker01 ~]# docker node ls                   # view cluster information (available only on manager-role hosts)
[root@docker01 ~]# docker swarm join-token worker   # print the token (the full join command) for adding a worker at a later stage
[root@docker01 ~]# docker swarm join-token manager  # likewise, print the token for adding a manager
[root@docker01 ~]# docker service scale web05=6     # dynamically scale a service's containers out or in
[root@docker01 ~]# docker service ps web01          # view which nodes the containers of a service run on
[root@docker01 ~]# docker service ls                # view the created services
[root@docker03 ~]# docker swarm leave               # detach docker03 from the cluster
[root@docker01 ~]# docker node rm docker03          # then remove docker03 on a manager-role server
[root@docker01 ~]# docker node promote docker02     # promote docker02 from worker to manager; afterwards its status is Reachable
[root@docker01 ~]# docker node demote docker02      # demote docker02 from manager back to worker
[root@docker01 ~]# docker node update --availability drain docker01
# set docker01 not to run containers; containers already running are not stopped
[root@docker01 ~]# docker node update --label-add mem=max docker03
# add the label mem=max to the docker03 host
[root@docker01 ~]# docker service update --replicas 8 --image 192.168.20.6:5000/lvjianzhao:v2.0 --constraint-add 'node.labels.mem==max' lvjianzhao05
# update the lvjianzhao05 service to 8 containers running the v2.0 image, constrained to hosts labeled mem=max
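If you use the label-based placement shown above, it is worth verifying that the constraint took effect. A small sketch, assuming the lvjianzhao05 service and the mem=max label from the commands above:

[root@docker01 ~]# docker node inspect docker03 --format '{{.Spec.Labels}}'    # should print map[mem:max]
[root@docker01 ~]# docker service ps lvjianzhao05                              # all replicas should be placed on docker03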

IX. Summary of docker Swarm

After working with docker Swarm clusters for a while, the conclusions are as follows:

The hostnames of the machines joining the cluster must not conflict, and the hosts must be able to resolve each other's names;
All nodes in the cluster can hold the manager role, but they cannot all be workers;
When you specify an image to run, any node in the cluster that lacks the image locally downloads it automatically;
While the cluster is running normally, if a docker server that is running containers goes down, all of its containers are moved to other healthy nodes; even after the failed server comes back, it does not take back the containers it previously ran.
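The last conclusion is easy to observe in this environment. A sketch, where stopping the docker service simulates a node failure:

[root@docker02 ~]# systemctl stop docker           # simulate a failure of docker02
[root@docker01 ~]# docker service ps lvjianzhao    # on the manager: tasks that ran on docker02 are marked Shutdown
                                                   # and replacement tasks start on the surviving nodes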

For more information about the features of docker Swarm clustering, you can read the blog post: Docker Swarm Cluster configuration (2)
