

Practical practice of Docker Swarm Cluster configuration (2)

2025-03-26 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Foreword:

This post builds on the existing Docker Swarm cluster environment and extends its functionality.

Blog outline:

I. Docker Swarm network management

II. Service management and version update of Swarm

The environment for this post is still the one built in the previous post; for details, refer to: Docker Swarm Cluster configuration (1). Before proceeding with the operations below, make sure that the following interface appears when you visit the Docker Swarm web UI:

I. Docker Swarm network management

Swarm clusters generate two different types of traffic:

Control and management plane: Swarm management messages, such as requests to join or leave the Swarm. This traffic is always encrypted (it involves the cluster's hostnames, IP addresses, subnets, gateways, etc.).

Application data plane: communication between containers and between containers and clients (it involves firewalls, port mapping, VIPs, etc.).

There are three important network concepts in Swarm service:

Overlay networks manage communication among the Docker daemons in the Swarm. Containers can be attached to one or more existing overlay networks, enabling container-to-container communication.

The ingress network is a special overlay network used for load balancing among a service's nodes. When any Swarm node receives a request on a published port, it hands the request to a module called IPVS. IPVS tracks all the IP addresses participating in the service, selects one of them, and routes the request to it over the ingress network.

The ingress network is created automatically when you initialize or join a Swarm. In most cases no custom configuration is needed, but Docker 17.05 and later allow you to customize it.

docker_gwbridge is a bridge network that connects the overlay networks, including ingress, to an individual Docker daemon's physical network. By default, every container a service task runs in is connected to the docker_gwbridge network of its local Docker daemon host.

The docker_gwbridge network is likewise created automatically when you initialize or join a Swarm. In most cases no custom configuration is needed, but Docker allows customization.

Check these default networks on docker01 as follows (note the SCOPE column, which shows each network's effective scope):
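On a live cluster these defaults can be checked from the CLI. A sketch of what the listing typically looks like on a fresh Swarm manager (it requires a running Swarm node, and the network IDs will differ on your hosts):

```shell
# list the networks on the docker01 manager; note the SCOPE column:
# "swarm" means cluster-wide, "local" means this host only
docker network ls
# typical output on a fresh Swarm manager (IDs will differ):
#   NETWORK ID     NAME              DRIVER    SCOPE
#   xxxxxxxxxxxx   bridge            bridge    local
#   xxxxxxxxxxxx   docker_gwbridge   bridge    local
#   xxxxxxxxxxxx   host              host      local
#   xxxxxxxxxxxx   ingress           overlay   swarm
#   xxxxxxxxxxxx   none              null      local
```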

In addition to the networks the Swarm cluster creates by default, we can also create a custom overlay network, and containers connected to it can communicate with each other. Note, however, that an overlay network created on the docker01 manager will not appear in the "docker network ls" output on another node until a container on that node joins the network.

Create a custom overlay network and verify it:

[root@docker01 ~]# docker network create -d overlay --subnet 192.168.22.0/24 --gateway 192.168.22.1 --attachable my_net1
# create an overlay network named my_net1
# "--subnet": specify its network segment (optional); "--gateway": specify its gateway (optional)
# when creating an overlay network in a Docker Swarm cluster, you must add the "--attachable" option,
# otherwise containers running on other nodes will not be able to use this network

After creation, the new overlay network cannot yet be viewed on the other docker nodes, but they can still use it (specify it directly when running a container; once the container is running, the network becomes visible on that node).

Test whether the overlay network just created is usable: run one container each on docker01 and docker02 attached to it, then confirm with a ping test that they can reach each other:

# on the docker01 host, create a container attached to the overlay network:
[root@docker01 ~]# docker run -tid --network my_net1 --name test1 busybox
# do the same on docker02:
[root@docker02 ~]# docker run -tid --network my_net1 --name test2 busybox

After the containers are created, use the test2 container on the docker02 host to ping the test1 container. The test results are as follows (since this is a custom network, you can ping the peer container directly by its container name):
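A minimal sketch of that ping test, run from the docker02 host; container names resolve through the overlay network's built-in DNS, and this assumes the test1/test2 containers from above are still running:

```shell
# from the docker02 host: ping test1 (running on docker01) by name, from inside test2
docker exec test2 ping -c 4 test1
# a successful test ends with a summary along the lines of:
#   4 packets transmitted, 4 packets received, 0% packet loss
```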

II. Service management and version update of Swarm

1. Run a service on a specified docker server

In the previous post we saw that when the manager of a Swarm cluster issues a service task, the tasks are distributed randomly across the docker servers in the cluster. If, to keep your production environment uniform and standardized, you want one docker server to run only the web service and another docker host to run only the PHP service, how do you achieve that?

Solution one:

[root@docker01 ~]# docker service create --replicas 3 --constraint node.hostname==docker03 --name test nginx
# run three containers named test on the docker03 host, based on the nginx image

The execution of the above command is as follows:
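The placement can also be confirmed without the web UI by filtering the task list; a sketch using standard `docker service ps` format placeholders:

```shell
# every replica of the "test" service should report docker03 as its node
docker service ps test --format '{{.Name}} {{.Node}} {{.CurrentState}}'
# expected shape of the output:
#   test.1 docker03 Running ...
#   test.2 docker03 Running ...
#   test.3 docker03 Running ...
```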

Solution 2:

[root@docker01 ~]# docker node update --label-add mem=max docker02
# label the docker02 host with the key-value pair "mem=max"; both sides of the equal sign are customizable
[root@docker01 ~]# docker service create --name test01 --replicas 3 --constraint 'node.labels.mem==max' nginx
# run three replicas named test01, based on the nginx image, on hosts labeled "mem=max"
[root@docker01 ~]# docker node inspect docker02
# run this command to view the docker02 host's labels; the label information is under Spec{}

Check the web UI interface to confirm:
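Instead of scanning the full `docker node inspect` JSON for the Spec{} section, the label can be printed directly with a Go template; a sketch, assuming the "mem=max" label from above has been applied:

```shell
# print only the labels of docker02
docker node inspect docker02 --format '{{ .Spec.Labels }}'
# expected: map[mem:max]
```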

2. Update a service version

1) Prepare the image to be used and run a service based on it:

[root@docker01 aa]# cat html/index.html    # prepare the web page file
127.0.0.1
[root@docker01 aa]# cat Dockerfile
# based on the nginx image, copy the html directory under the current directory in as the nginx web root
FROM nginx
ADD html /usr/share/nginx/html
[root@docker01 aa]# docker build -t 192.168.20.6:5000/testnginx:1.0 .
# build the image
[root@docker01 aa]# docker push 192.168.20.6:5000/testnginx:1.0
# upload the newly built image to the private registry
[root@docker01 aa]# docker service create --name newnginx -p 80:80 --replicas 3 192.168.20.6:5000/testnginx:1.0
# run three replicas based on the image from the private registry and map them to local port 80
# once the command above succeeds, any docker host on which the service is running serves nginx on its port 80

After running, the web UI interface is displayed as follows:

You can see that every node runs the service, so accessing port 80 on any node shows the same page, as shown below:
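This same-page-everywhere behavior comes from the ingress routing mesh: a published port answers on every node, even one not currently running a newnginx task. A sketch of checking it with curl; the addresses other than 192.168.20.6 are hypothetical placeholders for your docker02/docker03 IPs:

```shell
# query port 80 on each node; all of them should return the same index.html
# (192.168.20.7 and 192.168.20.8 are assumed addresses for docker02/docker03)
for host in 192.168.20.6 192.168.20.7 192.168.20.8; do
    echo "--- $host ---"
    curl -s "http://$host/"
done
```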

View the details of service on docker01 as follows:

[root@docker01 aa]# docker service ps newnginx    # view the details of the service

The command's output (note the image tag of each task, i.e. which image it runs on):

2) Prepare a version 2.0 image (to simulate an online version upgrade):

[root@docker01 aa]# docker tag nginx:latest 192.168.20.6:5000/testnginx:2.0    # prepare the 2.0 image
[root@docker01 aa]# docker push 192.168.20.6:5000/testnginx:2.0                # upload it to the private registry
[root@docker01 aa]# docker service update --image 192.168.20.6:5000/testnginx:2.0 newnginx    # upgrade the newnginx service's image to 2.0
[root@docker01 aa]# docker service ps newnginx    # view the service details again

The result is as follows: the newnginx tasks that ran on the 1.0 image have changed to the Shutdown state, while the tasks running on 2.0 are now Running:

At this point, visiting the web page again shows the default nginx home page (because our 2.0 image only changes the tag of the nginx image and does not modify any files):

The web UI also shows the time of the service's last update.
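The update state is also available from the CLI through the service's `UpdateStatus` field; a sketch:

```shell
# show whether the last rolling update completed, and when
docker service inspect newnginx --format '{{ .UpdateStatus.State }} (completed at {{ .UpdateStatus.CompletedAt }})'
# after a successful update the state should read "completed"
```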

3) Upgrade 2.0 to 3.0 (with finer control over the upgrade):

[root@docker01 aa]# docker tag nginx:latest 192.168.20.6:5000/testnginx:3.0    # prepare the 3.0 image
[root@docker01 aa]# docker push 192.168.20.6:5000/testnginx:3.0                # upload it to the private registry
[root@docker01 ~]# docker service update --replicas 6 --image 192.168.20.6:5000/testnginx:3.0 --update-parallelism 3 --update-delay 1m newnginx
# the options above mean:
# "--replicas 6": scale the service to 6 replicas (it was 3)
# "--update-parallelism 3": number of tasks updated in parallel per batch
# "--update-delay 1m": wait 1 minute between batches of the rolling update
[root@docker01 ~]# docker service ps newnginx    # compare the newnginx service details yourself

4) Version rollback

If after upgrading we find a problem with the new version's image and need to return to the previously running version, we can do the following:

[root@docker01 ~]# docker service update --rollback newnginx    # roll the newnginx service back to the previous version
[root@docker01 ~]# docker service ps newnginx    # view the details for yourself

After executing the rollback command, the rollback process is as follows:

After the rollback succeeds, the image has gone from 3.0 back to 2.0. Note that although 6 replicas were specified when upgrading to 3.0, there were only 3 before, so after the rollback the replica count also returns to 3.

Note: a rollback returns to the version before the last operation by default; it cannot be used to roll back repeatedly through earlier versions.
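In other words, `--rollback` swaps the service's current spec with its previous spec, so running it a second time simply swaps back rather than walking further into history. A sketch:

```shell
docker service update --rollback newnginx    # 3.0 -> 2.0 (the previous spec)
docker service update --rollback newnginx    # swaps back to 3.0; it does NOT continue on to 1.0
docker service ps newnginx --format '{{.Name}} {{.Image}} {{.CurrentState}}'    # check which image each task runs
```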

-this is the end of this article. Thank you for reading-
