
How to Build a Continuous Integration Cluster Service Based on docker-swarm


This article shows how to build a continuous integration cluster service based on docker-swarm. Most readers probably don't know much about it yet, so it is shared here for your reference; I hope you gain a lot from it. Let's go through it together.

To simulate a cluster environment on the local machine (macOS), VirtualBox and docker-machine are used. The machines needed for the overall continuous integration setup are as follows:

1. Service nodes: three manager nodes and one worker node. Managers need more resources, so the manager configuration should be as high as possible. The fault tolerance of swarm's manager nodes is (N-1)/2, where N is the number of manager nodes; that is, with three managers, one manager going down can be tolerated (a quick reference table follows this list). Official algorithm description: Raft consensus in swarm mode.

2. Local image repository registry: used to store the docker images of all services that need to be deployed.

https://docs.docker.com/registry/deploying/

Because the swarm mechanism is used, there is no need to handle service discovery and load balancing separately for communication between services (this replaces the original consul + registrator approach).

3. Ops node for building images:

That is, the operations and maintenance machine. A single separate node is fine; it is mainly responsible for building and pushing images. You can host a private GitLab instance on ops to maintain the build scripts (a sketch of such a script follows this list). The machine does not need a high-spec configuration, but its network bandwidth should be as large as possible.
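Applying the (N-1)/2 formula to a few common manager counts gives the quick reference promised above:

Managers (N)    Failures tolerated ((N-1)/2)
1               0
3               1
5               2
7               3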
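As an illustration, here is a minimal sketch of the kind of build-and-push script ops would maintain. The image name see and the registry address registry:5000 are borrowed from this article's later examples; the script itself is hypothetical:

#!/bin/sh
# build the service image from the project checkout on ops
docker build -t see:1.0.3 .
# tag it for the private registry (hostname "registry", port 5000)
docker tag see:1.0.3 registry:5000/see:1.0.3
# push it so the swarm nodes can pull it when deploying
docker push registry:5000/see:1.0.3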

Simulating the Cluster Environment with docker-machine

Create a registry node

docker-machine create -d virtualbox --virtualbox-memory "512" registry

The --engine-registry-mirror parameter lets you set the address of a registry mirror to speed up image pulls.
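For example (the mirror URL here is a placeholder; substitute your own accelerator address):

docker-machine create -d virtualbox --virtualbox-memory "512" \
  --engine-registry-mirror https://<your-mirror-host> registry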

Create the manager and worker nodes

Manager:


docker-machine create -d virtualbox --virtualbox-memory "800" manager1

Worker:

docker-machine create -d virtualbox --virtualbox-memory "800" worker1
docker-machine create -d virtualbox --virtualbox-memory "800" worker2
docker-machine create -d virtualbox --virtualbox-memory "800" worker3

Create an ops node

docker-machine create -d virtualbox --virtualbox-memory "512" ops

View the machine list and status

docker-machine ls

Create a registry service

Log in to the registry machine.

docker-machine ssh registry

Create a registry service.

docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/data:/var/lib/registry \
  registry:2

The command sets the -v volume option so that images that have been pulled are not lost each time the container service restarts. For storage-type containers such as registry and mysql, setting a volume is recommended. For better extensibility, you can also back the image repository with another storage driver, such as Aliyun's OSS.

Run docker ps and you can see the started registry service. For better extensibility, you can also put it behind your own domain name and add authentication information when you re-run it.

To make it easier to manage containers, you can use docker-compose. Installation:

curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
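After downloading, you normally also need to make the binary executable, and can then verify the installation (standard steps for this install method):

chmod +x /usr/local/bin/docker-compose
docker-compose --version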

You can also write a compose file and start the registry with it.
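A minimal sketch of such a compose file, written here via a heredoc; it mirrors the earlier docker run command, and the v1 compose format matches docker-compose 1.9:

cat > docker-compose.yml <<'EOF'
# single registry service, equivalent to the docker run command above
registry:
  image: registry:2
  ports:
    - "5000:5000"
  restart: always
  volumes:
    - ./data:/var/lib/registry
EOF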

docker-compose up -d

Pushing images to the local repository

Now you can try pulling an image and then tagging it for the local registry repository, for example:

docker pull lijingyao0909/see:1.0.3 && docker tag lijingyao0909/see:1.0.3 localhost:5000/see:1.0.3

Then execute the push command:

docker push localhost:5000/see:1.0.3

This pushes the image to the registry service. The most direct way to verify it is to look at the local volume directory, such as the data directory in this example.
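You can also confirm the push through the registry's standard v2 HTTP API:

curl http://localhost:5000/v2/_catalog
# expected: {"repositories":["see"]}
curl http://localhost:5000/v2/see/tags/list
# expected: {"name":"see","tags":["1.0.3"]}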

For more convenient visual management of the registry, you can also try a registry UI image such as hyper/docker-registry-web:

docker run -it -p 8080:8080 --name registry-web --link registry-srv -e REGISTRY_URL=http://registry-srv:5000/v2 -e REGISTRY_NAME=localhost:5000 hyper/docker-registry-web

Then visit hostname:8080/registry/index (the registry-web port mapped above) to see a simple image-list UI.

HTTPS problem

If you follow the above steps in a local test, you will hit the following problem when pushing, or when pulling the image from another VirtualBox VM:

Error response from daemon: Get https://registry:5000/v1/_ping: dial tcp 218.205.57.154:5000: i/o timeout

The way to handle this is to mark the registry as an insecure registry. First edit /var/lib/boot2docker/profile:

sudo vi /var/lib/boot2docker/profile

Add

DOCKER_OPTS="--insecure-registry registry:5000"

because the hostname of the registry machine is registry. When you then execute the docker info command, you can see:

Insecure Registries:
 registry:5000
 127.0.0.0/8

At the same time, the worker machines and the manager machines also need the --insecure-registry attribute modified in order to pull images from the private repository. The VMs must be restarted after the modification.
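From the host, the VMs can be restarted with docker-machine, for example:

docker-machine restart manager1
docker-machine restart worker1 worker2 worker3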

After the reboot, try pulling again on the manager:

docker pull registry:5000/see:1.0.3

You can see that the repository is connected successfully and the image is pulled. Note that this example uses the machine name, registry, rather than the IP address. Therefore, to pull images you need to configure the mapping of IP to machine name in the /etc/hosts file of each VM. Using the machine name makes operations easier to remember; of course, the best way is to access the repository through a domain name.
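For example, inside each VM you could add the mapping like this (the registry IP is the one shown in this article's machine list):

echo "192.168.99.107 registry" | sudo tee -a /etc/hosts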

Reference resources

Deploy registry services

Create an ops service

The swarm cluster will pull images directly from the registry and start the application services from them. The service images live in the registry repository, while the source code can be maintained on the ops machine. The ops VirtualBox VM created earlier can host the gitlab service; for startup parameters, refer to Deploy GitLab Docker images.

First pull and run a gitlab image:

docker run --detach \
  --hostname gitlab.lijingyao.com \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --restart always \
  --volume `pwd`/gitlab/config:/etc/gitlab \
  --volume `pwd`/gitlab/logs:/var/log/gitlab \
  --volume `pwd`/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:8.14.4-ce.0

Using the private git repository

Since port 80 is bound, after gitlab starts you can access it at http://machine-host/.

The first time you open gitlab it automatically redirects you to reset the password; you can set a new one. This is the password for the root account. Afterwards you can register other git users.

If you have a domain name service, or bind gitlab.lijingyao.com locally to the IP address of this VirtualBox VM, you can access gitlab.lijingyao.com directly. In a real production environment with a fixed public IP and your own DNS service, there is no need to bind the host. Since this is just a local test, we bind the host for now.

Swarm

The service in this example is a simple springboot and gradle project; the service image can be pulled from docker hub (the see service image). After the image is built, a gradle task pushes it directly to the registry repository. In the local environment, you can run the gradle task in the project directory to push the image to the registry on the VM; the cluster then pulls it from the registry repository. Now let's prepare to initialize the swarm cluster.

The machines of the whole VirtualBox cluster now look like this:

$ docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
haproxy    -        virtualbox   Running   tcp://192.168.99.103:2376           v1.12.3
manager1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.3
ops        -        virtualbox   Running   tcp://192.168.99.106:2376           v1.12.3
registry   -        virtualbox   Running   tcp://192.168.99.107:2376           v1.12.3
worker1    -        virtualbox   Running   tcp://192.168.99.101:2376           v1.12.3
worker2    -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.3
worker3    -        virtualbox   Running   tcp://192.168.99.105:2376           v1.12.3

Then log in to the manager1 machine using docker-machine ssh manager1.

Initialize the swarm manager node

Initialize swarm on the manager1 machine; the machine where the initialization runs becomes a swarm manager. Execute:

docker swarm init --advertise-addr 192.168.99.100

You will see the following execution output:

Swarm initialized: current node (03x5vnxmk2gc43i0d7xpycvjg) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-5ru6lyco3upj7oje6hidug3erqczok84wk7bekzfaca4uv51r9-22bcjhkbxnclmw3nl3ui8601l \
    192.168.99.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The generated token is the key with which other nodes join the swarm cluster. If you forget the token, you can run this on manager1:

$ docker swarm join-token manager

to see the current token value. Docker officially recommends rotating the token at least every 6 months. Rotation command:

$ docker swarm join-token --rotate worker

Add worker nodes

Log in to worker1, worker2, and worker3 and execute the join command on each.

Before joining, take a look at the docker network facilities. Execute:

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4b7fe1416322        bridge              bridge              local
06ab6f3352b0        host                host                local
eebd5c8e0d5d        none                null                local

Following the join command printed when manager1 was initialized, execute:

docker swarm join \
  --token SWMTKN-1-5ru6lyco3upj7oje6hidug3erqczok84wk7bekzfaca4uv51r9-22bcjhkbxnclmw3nl3ui8601l \
  192.168.99.100:2377

At this point, if you execute docker network ls on any worker node, you can see an additional overlay network with swarm scope.
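On a joined node the listing then looks something like this (the new entries' IDs are illustrative placeholders):

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4b7fe1416322        bridge              bridge              local
xxxxxxxxxxxx        docker_gwbridge     bridge              local
06ab6f3352b0        host                host                local
yyyyyyyyyyyy        ingress             overlay             swarm
eebd5c8e0d5d        none                null                local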

After all three workers have joined, you can view the swarm nodes on manager1:

$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
03x5vnxmk2gc43i0d7xpycvjg *  manager1  Ready   Active        Leader
2y5wrndibe8c8sqv6851vrlgp    worker1   Ready   Active        Reachable
dwgol1uinkpsybigc1gm5jgsv    worker2   Ready   Active
etgyky6zztrapucm59yx33tg1    worker3   Ready   Active        Reachable

A manager status of Reachable indicates that the node is also a manager node. This is because we separately executed the following on worker1 and worker3:

docker node promote worker1
docker node promote worker3

worker1 and worker3 can now also execute swarm commands, and if manager1 shuts down, one of them will be elected as the new leader. If you want to remove a node's manager role, use the demote command; after it runs, the node becomes a normal worker node again.

docker node demote worker1 worker3

Other states of the swarm node

A swarm node can be set to the drain state; a node in the drain state does not run any service tasks.

Set a node to unavailable:

docker node update --availability drain worker1

There are three availability states: Active, Pause, and Drain. Pause means the node keeps its running tasks but does not accept new ones.

If you want to remove worker1 from the swarm, run docker swarm leave on the node to be removed (worker1), then run docker node rm worker1 on a manager; this removes the node from the swarm, as shown below.
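That is, run the following on the node to be removed and then on a manager:

docker@worker1:~$ docker swarm leave
docker@manager1:~$ docker node rm worker1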

Create a swarm service

In this example, swarm deploys a springboot-based rest api service. The repository address is springboot-restful-exam; the created service is named deftsee, publishes port 8080 (mapped to the container's port 80), and runs 4 container replicas.

docker service create \
  --replicas 4 \
  --name deftsee \
  --update-delay 10s \
  --publish 8080:80 \
  lijingyao0909/see:1.0.3

After the service is created, you can view its status:

docker@manager1:~$ docker service ls
ID            NAME     REPLICAS  IMAGE                    COMMAND
a6s5dpsyz7st  deftsee  4/4       lijingyao0909/see:1.0.3

REPLICAS shows the number of running containers over the desired count. If it shows 0/4, none of the service's tasks are up yet. To view the running state of each task in detail, use docker service ps <servicename>:

docker@manager1:~$ docker service ps deftsee
ID                         NAME       IMAGE                    NODE      DESIRED STATE  CURRENT STATE          ERROR
8lsdkf357lk0nmdeqk7bi33mp  deftsee.1  lijingyao0909/see:1.0.3  worker2   Running        Running 5 minutes ago
cvqm5xn7t0bveo4btfjsm04jp  deftsee.2  lijingyao0909/see:1.0.3  manager1  Running        Running 7 minutes ago
6s5km76w2vxmt0j4zgzi4xi5f  deftsee.3  lijingyao0909/see:1.0.3  worker1   Running        Running 5 minutes ago
4cl9vnkssedpvu2wtzu6rtgxl  deftsee.4  lijingyao0909/see:1.0.3  worker3   Running        Running 6 minutes ago

You can see that the tasks are spread evenly across the four nodes. Next, scale up the deftsee service:

docker@manager1:~$ docker service scale deftsee=6
deftsee scaled to 6

docker@manager1:~$ docker service ps deftsee
ID                         NAME       IMAGE                    NODE      DESIRED STATE  CURRENT STATE           ERROR
8lsdkf357lk0nmdeqk7bi33mp  deftsee.1  lijingyao0909/see:1.0.3  worker2   Running        Running 8 minutes ago
cvqm5xn7t0bveo4btfjsm04jp  deftsee.2  lijingyao0909/see:1.0.3  manager1  Running        Running 10 minutes ago
6s5km76w2vxmt0j4zgzi4xi5f  deftsee.3  lijingyao0909/see:1.0.3  worker1   Running        Running 8 minutes ago
4cl9vnkssedpvu2wtzu6rtgxl  deftsee.4  lijingyao0909/see:1.0.3  worker3   Running        Running 9 minutes ago
71uv51uwvso4l340xfkbacp2i  deftsee.5  lijingyao0909/see:1.0.3  manager1  Running        Running 5 seconds ago
4r2q7q782ab9fp49mdriq0ssk  deftsee.6  lijingyao0909/see:1.0.3  worker2   Running        Running 5 seconds ago

lijingyao0909/see:1.0.3 is an image in docker hub's public repository, so when the service is created every node pulls it from docker hub, which is slow overall. Instead, you can combine this with the private repository and pull the image directly from the registry machine. Remove the service with docker service rm deftsee, then recreate it from the registry:

docker service create \
  --replicas 6 \
  --name deftsee \
  --update-delay 10s \
  --publish 8080:80 \
  registry:5000/see:1.0.4

Now log in to any worker node and view the running container:

docker@worker2:~$ docker ps
CONTAINER ID  IMAGE                    COMMAND                 CREATED             STATUS             PORTS     NAMES
89d4f588290b  registry:5000/see:1.0.4  "/bin/sh -c 'java -Dc"  About a minute ago  Up About a minute  8080/tcp  deftsee.1.eldpgb1aqtf9v49cxolydfjm9

If you want to update the service, you can update the image version directly with the update command; the service is rolled out gradually (a rolling update). You can change the delay between nodes during the update with --update-delay 10s.

docker service update --image registry:5000/see:1.0.5 deftsee

Restart a node service

Disable: docker node update --availability drain worker1

Enable: docker node update --availability active worker1

Update service port

Updating a service's port restarts the service (the original tasks are shut down, then new ones are created and started):

docker service update --publish-add <published-port>:<target-port> <service>

docker@manager1:~$ docker service update \
  --publish-add 8099:8080 \
  deftsee

docker@manager1:~$ docker service ps deftsee
ID                         NAME          IMAGE                    NODE      DESIRED STATE  CURRENT STATE               ERROR
3xoe34msrht9eqv7eplnmlrz5  deftsee.1     registry:5000/see:1.0.4  manager1  Running        Running 39 seconds ago
eldpgb1aqtf9v49cxolydfjm9   \_ deftsee.1  registry:5000/see:1.0.4  worker2   Shutdown       Shutdown 39 seconds ago
9u4fh4mi5kxb14y6gih6d8tqv  deftsee.2     registry:5000/see:1.0.4  manager1  Running        Running about a minute ago
0skgr5fx4xtt6y71yliksoft0   \_ deftsee.2  registry:5000/see:1.0.4  worker1   Shutdown       Shutdown about a minute ago
8hposdkqe92k7am084z6kt1j0  deftsee.3     registry:5000/see:1.0.4  worker3   Running        Running about a minute ago
c5vhx1wx0q8mxaweaq0mia6n7   \_ deftsee.3  registry:5000/see:1.0.4  manager1  Shutdown       Shutdown about a minute ago
9se1juxiinmetuaccgkjc3rr2  deftsee.4     registry:5000/see:1.0.4  worker1   Running        Running about a minute ago
4wofho0axvrjildxhckl52s41   \_ deftsee.4  registry:5000/see:1.0.4  worker3   Shutdown       Shutdown about a minute ago

Service validation and networking

After the service in this example starts, it can be accessed directly via ip:port, for example http://192.168.99.100:8099/see, and you can see that requests are distributed across the running nodes. This is swarm's overlay network layer: the networks of all nodes are interconnected and swarm provides load balancing. On top of swarm's LB you can also build a self-defined overlay network; all nodes can communicate over such a created network, but the network option must be specified when the service is created.
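As a quick check of the built-in load balancing described above (before creating the custom network), you can hit the published port on different node IPs from the machine list:

curl http://192.168.99.100:8099/see
curl http://192.168.99.101:8099/see   # a worker's IP works equally well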

$ docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  --opt encrypted \
  my-network

$ docker service create \
  --name deftsee \
  --publish 8099:80 \
  --replicas 4 \
  --network my-network \
  -l com.df.serviceDomain=deftsee.com \
  -l com.df.notify=true \
  lijingyao0909/see:1.0.3

--network my-network defines the docker network the service connects to; a network named my-network can be created on the swarm nodes. In this way, consul- and haproxy-style service discovery and LB mechanisms can also be built inside the swarm mechanism.

When a network is specified for a service, tasks running on the swarm must also be on that network to interoperate with the service. A node that has not joined swarm mode, or that is not running a task attached to the specified network, cannot communicate on that network, and docker network ls on it will not show the network. When creating a service you attach it to the network with the --network my-network flag. When inspecting the network with docker network inspect my-network, the containers on that node attached to it are listed in the returned Containers field.

Once a service is connected to the network, swarm assigns the service a VIP on that network, and swarm's internal LB distributes requests automatically. Without specifying any per-service port, containers connected to the same network can reach the service simply by its service name: all containers joined to the network share a DNS mapping via the gossip protocol (the VIP mapping is based on a DNS alias bound to the service name).
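A rough way to observe this from inside the network (the probe service name is hypothetical, and alpine is used as a throwaway image):

# start a throwaway service attached to the same overlay network
docker service create --name probe --network my-network alpine sleep 3600
# exec into its container on whichever node runs it; from there,
#   ping deftsee        resolves the service name
#   nslookup deftsee    returns the service VIP on my-network
docker service rm probe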

View the vip network information of the service:

$ docker service inspect --format='{{json .Endpoint.VirtualIPs}}' deftsee

Output: [{"NetworkID":"dn05pshfagohpebgonkhj5kxi","Addr":"10.255.0.6/16"}]

Swarm management

To maintain the availability of the manager nodes (heartbeat mechanism, leader election), you can set the manager nodes not to run services; this saves manager resources and isolates the managers from the task environment:

docker node update --availability drain <manager-node>

Back up the raft state under /var/lib/docker/swarm/raft.
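A minimal sketch of such a backup (run on a manager; stopping the docker daemon first gives a consistent copy, and the target path here is an assumption):

sudo tar czf /tmp/swarm-raft-backup.tgz /var/lib/docker/swarm/raft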

Clean up unavailable nodes

docker node demote ...
docker node rm ...

To have a node rejoin (including rejoining as a manager), demote and remove it, then join again:

$ docker node demote ...
$ docker node rm ...
$ docker swarm join ...

Specify a fixed IP at initialization (init --advertise-addr). Worker nodes can use dynamic IPs.

These are all the contents of this article, "How to Build a Continuous Integration Cluster Service Based on docker-swarm". Thank you for reading! I hope the content shared here has been helpful to you.
