2025-03-28 Update From: SLTechnology News&Howtos > Servers
Shulou (Shulou.com) 06/02 Report
Swarm was a separate project before Docker 1.12; with the release of Docker 1.12 it was merged into Docker itself and became a Docker subcommand. Currently, Swarm is the only cluster-management tool for Docker natively provided by the Docker community. It turns a group of Docker hosts into a single virtual Docker host, so that containers can join a network that spans hosts.
1. Getting to Know Swarm
Swarm is currently the only cluster-management tool officially bundled with Docker; swarm mode has been built into Docker since version 1.12.
To demonstrate cross-host networking conveniently, we need another tool, Docker Machine, which together with Docker Compose and Docker Swarm is known as the "Docker Three Musketeers". Let's look at how to install Docker Machine:
$ curl -L https://github.com/docker/machine/releases/download/v0.9.0-rc2/docker-machine-`uname -s`-`uname -m` > /tmp/docker-machine && chmod +x /tmp/docker-machine && sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
The installation process is very similar to Docker Compose's. With that, all three Musketeers are in place.
Before we begin, we need to understand some basic concepts. The Docker commands for clustering are as follows:
docker swarm: cluster management. Subcommands include init, join, join-token, leave, update.
docker node: node management. Subcommands include demote, inspect, ls, promote, rm, ps, update.
docker service: service management. Subcommands include create, inspect, ps, ls, rm, scale, update.
docker stack / docker deploy: multi-application deployment; still experimental until it lands in a stable release.
2. Create a Cluster
First use Docker Machine to create a virtual machine to act as the manager node.
$ docker-machine create --driver virtualbox manager1
Running pre-create checks...
(manager1) Unable to get the latest Boot2Docker ISO release version: Get https://api.github.com/repos/boot2docker/boot2docker/releases/latest: dial tcp: lookup api.github.com on [::1]:53: server misbehaving
Creating machine...
(manager1) Unable to get the latest Boot2Docker ISO release version: Get https://api.github.com/repos/boot2docker/boot2docker/releases/latest: dial tcp: lookup api.github.com on [::1]:53: server misbehaving
(manager1) Copying /home/zuolan/.docker/machine/cache/boot2docker.iso to /home/zuolan/.docker/machine/machines/manager1/boot2docker.iso...
(manager1) Creating VirtualBox VM...
(manager1) Creating SSH key...
(manager1) Starting the VM...
(manager1) Check network to re-create if needed...
(manager1) Found a new host-only adapter: "vboxnet0"
(manager1) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager1
View information such as environment variables of the virtual machine, including the IP address of the virtual machine:
$ docker-machine env manager1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/zuolan/.docker/machine/machines/manager1"
export DOCKER_MACHINE_NAME="manager1"
# Run this command to configure your shell:
# eval $(docker-machine env manager1)
Then create another node to act as a worker node.
$ docker-machine create --driver virtualbox worker1
Now we have two virtual hosts, which we can list with Docker Machine's ls command:
$ docker-machine ls
NAME      ACTIVE  DRIVER      STATE    URL                        SWARM  DOCKER   ERRORS
manager1  -       virtualbox  Running  tcp://192.168.99.100:2376         v1.12.3
worker1   -       virtualbox  Running  tcp://192.168.99.101:2376         v1.12.3
At the moment, however, the two virtual hosts are not connected to each other. To connect them, it is Swarm's turn to take the stage.
Because we are using virtual machines created by Docker Machine, we can operate on them with the docker-machine ssh command. In a real production environment this is unnecessary; you simply run docker swarm commands directly on each host.
Add manager1 to the cluster:
$ docker-machine ssh manager1 docker swarm init --listen-addr 192.168.99.100:2377 --advertise-addr 192.168.99.100
Swarm initialized: current node (23lkbq7uovqsg550qfzup59t6) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Use --listen-addr to specify the listening IP and port. The general form of the command is as follows; in this example Docker Machine is used to connect to the virtual machine:
$ docker swarm init --listen-addr <MANAGER-IP>:<PORT>
Next, add worker1 to the cluster:
$ docker-machine ssh worker1 docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
This node joined a swarm as a worker.
You could also add --listen-addr $WORKER1_IP:2377 to the join command above, in case you later want to promote this worker to a manager; since there is no such plan in this example, the parameter is omitted.
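The join flow is always the same: take the token printed by swarm init (or by swarm join-token) and point it at a manager's address on port 2377. As a minimal sketch, a hypothetical helper (the function name and sample token below are invented for illustration) that assembles the command might look like this:

```shell
#!/bin/sh
# Hypothetical helper: assemble a "docker swarm join" command from a
# join token and a manager address. Port 2377 is Swarm's default
# cluster-management port, as used throughout this article.
build_join_cmd() {
    token="$1"
    manager_ip="$2"
    echo "docker swarm join --token $token $manager_ip:2377"
}

# Example with a fake token:
build_join_cmd SWMTKN-1-example 192.168.99.100
```

The helper only prints the command; you would still run the printed command on the worker itself (here, via docker-machine ssh).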
Note: if the machine has two network interfaces when you create a new cluster, you need to specify which IP to use; otherwise you may run into the following error.
$ docker-machine ssh manager1 docker swarm init --listen-addr $MANAGER1_IP:2377
Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (10.0.2.15 on eth0 and 192.168.99.100 on eth2) - specify one with --advertise-addr
exit status 1
The error occurs because the host has two IP addresses and Swarm cannot tell which one you intend it to use, so you must specify one with --advertise-addr.
$ docker-machine ssh manager1 docker swarm init --advertise-addr 192.168.99.100 --listen-addr 192.168.99.100:2377
Swarm initialized: current node (ahvwxicunjd0z8g0eeosjztjx) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
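What --advertise-addr does by hand can be pictured as filtering one address out of several candidates. A rough sketch, assuming (as in this VirtualBox setup) that the host-only subnet 192.168.99.0/24 is the one the swarm should use; the function name is invented for illustration:

```shell
#!/bin/sh
# Pick the address on the VirtualBox host-only subnet (assumed to be
# 192.168.99.0/24) from a space-separated list of candidate addresses,
# mirroring the choice we made manually with --advertise-addr.
pick_advertise_addr() {
    for addr in $1; do
        case "$addr" in
            192.168.99.*) echo "$addr" ;;
        esac
    done
}

# The two addresses from the error message above:
pick_advertise_addr "10.0.2.15 192.168.99.100"   # prints 192.168.99.100
```

The daemon refuses to guess precisely because there is no universally correct filter; the subnet pattern above is an assumption specific to this environment.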
The cluster initialized successfully.
We have now created a "cluster" with two nodes. Enter one of the manager nodes and view node information with the docker node command:
$ docker-machine ssh manager1 docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
23lkbq7uovqsg550qfzup59t6 *  manager1  Ready   Active        Leader
dqb3fim8zvcob8sycri3hy98a    worker1   Ready   Active
Every node now belongs to the swarm and is in the Ready state: manager1 is the leader, and worker1 is a worker.
Next, create three more virtual machines: manager2, worker2, and worker3. With five virtual machines in total, view them again with docker-machine ls:
NAME      ACTIVE  DRIVER      STATE    URL                        SWARM  DOCKER   ERRORS
manager1  -       virtualbox  Running  tcp://192.168.99.100:2376         v1.12.3
manager2  -       virtualbox  Running  tcp://192.168.99.105:2376         v1.12.3
worker1   -       virtualbox  Running  tcp://192.168.99.102:2376         v1.12.3
worker2   -       virtualbox  Running  tcp://192.168.99.103:2376         v1.12.3
worker3   -       virtualbox  Running  tcp://192.168.99.104:2376         v1.12.3
Then we add the remaining virtual machines to the cluster.
Add worker2 to the cluster:
$ docker-machine ssh worker2 docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
This node joined a swarm as a worker.
Add worker3 to the cluster:
$ docker-machine ssh worker3 docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-c036gwrakjejql06klrfc585r \
    192.168.99.100:2377
This node joined a swarm as a worker.
Add manager2 to the cluster:
First obtain the token of manager from manager1:
$ docker-machine ssh manager1 docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-8tn855hkjdb6usrblo9iu700o \
    192.168.99.100:2377
Then add manager2 to the cluster:
$ docker-machine ssh manager2 docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-8tn855hkjdb6usrblo9iu700o \
    192.168.99.100:2377
This node joined a swarm as a manager.
Now let's look at the cluster information:
$ docker-machine ssh manager2 docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
16w80jnqy2k30yez4wbbaz1l8    worker1   Ready   Active
2gkwhzakejj72n5xoxruet71z    worker2   Ready   Active
35kutfyn1ratch65fn7j3fs4x    worker3   Ready   Active
a9r21g5iq1u6h41myprfwl8ln *  manager2  Ready   Active        Reachable
dpo7snxbz2a0dxvx6mf19p35z    manager1  Ready   Active        Leader
3. Establish a cross-host network
To make the demonstration clearer, let's add the local host to the cluster as well, so that we can run Docker commands directly.
Execute the join cluster command directly locally:
$ docker swarm join \
    --token SWMTKN-1-3z5rzoey0u6onkvvm58f7vgkser5d7z8sfshlu7s4oz2gztlvj-8tn855hkjdb6usrblo9iu700o \
    192.168.99.100:2377
This node joined a swarm as a manager.
Now we have three managers and three workers: the local host plus the five virtual machines.
$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
6z2rpk1t4xucffzlr2rpqb8u3    worker3   Ready   Active
7qbr0xd747qena4awx8bx101s *  user-pc   Ready   Active        Reachable
9v93sav79jqrg0c7051rcxxev    manager2  Ready   Active        Reachable
a1ner3zxj3ubsiw4l3p28wrkj    worker1   Ready   Active
a5w7h8j83i11qqi4vlu948mad    worker2   Ready   Active
d4h7vuekklpd6189fcudpfy18    manager1  Ready   Active        Leader
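A practical note on manager counts: Swarm managers maintain cluster state through the Raft consensus algorithm, so the cluster stays manageable only while a majority (a quorum) of managers is up. The arithmetic is simple enough to sketch; the helper names below are invented for illustration:

```shell
#!/bin/sh
# Raft majority arithmetic: with N managers, a quorum is N/2+1 members,
# and the cluster tolerates the loss of (N-1)/2 managers.
managers_quorum() {
    echo $(( $1 / 2 + 1 ))
}

managers_tolerated() {
    echo $(( ($1 - 1) / 2 ))
}

# With the 3 managers in this cluster:
echo "quorum: $(managers_quorum 3), tolerated failures: $(managers_tolerated 3)"
```

With three managers, one manager may fail without losing the cluster. Note that an even manager count adds no extra fault tolerance over the next-lower odd count, which is why odd numbers of managers are the usual choice.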
View network status:
$ docker network ls
NETWORK ID    NAME     DRIVER   SCOPE
764ff31881e5  bridge   bridge   local
fbd9a977aa03  host     host     local
6p6xlousvsy2  ingress  overlay  swarm
e81af24d643d  none     null     local
You can see that the swarm already has a default overlay network named ingress, which swarm mode uses by default. In this example we will create a new overlay network instead.
$ docker network create --driver overlay swarm_test
4dm8cy9y5delvs5vd0ghdd89s
$ docker network ls
NETWORK ID    NAME        DRIVER   SCOPE
764ff31881e5  bridge      bridge   local
fbd9a977aa03  host        host     local
6p6xlousvsy2  ingress     overlay  swarm
e81af24d643d  none        null     local
4dm8cy9y5del  swarm_test  overlay  swarm
With that, the cross-host network is in place, though for now nothing is running on it; in the next section we will deploy an application on this network.
4. Deploy applications on a cross-host network
First, none of the nodes created above has the image yet, so we pull it on each node one by one, using the private registry built earlier.
$ docker-machine ssh manager1 docker pull reg.example.com/library/nginx:alpine
alpine: Pulling from library/nginx
e110a4a17941: Pulling fs layer
...
7648f5d87006: Pull complete
Digest: sha256:65063cb82bf508fd5a731318e795b2abbfb0c22222f02ff5c6b30df7f23292fe
Status: Downloaded newer image for reg.example.com/library/nginx:alpine
$ docker-machine ssh manager2 docker pull reg.example.com/library/nginx:alpine
... (same output as above)
$ docker-machine ssh worker1 docker pull reg.example.com/library/nginx:alpine
... (same output as above)
$ docker-machine ssh worker2 docker pull reg.example.com/library/nginx:alpine
... (same output as above)
$ docker-machine ssh worker3 docker pull reg.example.com/library/nginx:alpine
... (same output as above)
The above uses docker pull to pull nginx:alpine images on each of the five virtual machine nodes. Next we will deploy a set of Nginx services on five nodes.
The service is deployed on the swarm_test cross-host network:
$ docker service create --replicas 2 --name helloworld --network=swarm_test nginx:alpine
5gz0h3s5agh3d2libvzq6bhgs
View service status:
$ docker service ls
ID            NAME        REPLICAS  IMAGE         COMMAND
5gz0h3s5agh3  helloworld  0/2       nginx:alpine
View the details of the helloworld service (the output has been adjusted for ease of reading):
$ docker service ps helloworld
ID           NAME          IMAGE         NODE      DESIRED STATE  CURRENT STATE            ERROR
ay081uome3   helloworld.1  nginx:alpine  manager1  Running        Preparing 2 seconds ago
16cvore0c96  helloworld.2  nginx:alpine  worker2   Running        Preparing 2 seconds ago
You can see that the two instances are running on two nodes.
Go to the two nodes and check the service status (the output has been adjusted for ease of reading):
$ docker-machine ssh manager1 docker ps -a
CONTAINER ID  IMAGE         COMMAND        CREATED        STATUS        PORTS            NAMES
119f787622c2  nginx:alpine  "nginx -g..."  4 minutes ago  Up 4 minutes  80/tcp, 443/tcp  hello...
$ docker-machine ssh worker2 docker ps -a
CONTAINER ID  IMAGE         COMMAND        CREATED        STATUS        PORTS            NAMES
5db707401a06  nginx:alpine  "nginx -g..."  4 minutes ago  Up 4 minutes  80/tcp, 443/tcp  hello...
The NAMES column above is truncated; the actual values are:
helloworld.1.ay081uome3eejeg4mspa8pdlx
helloworld.2.16cvore0c96rby1vp0sny3mvt
Remember the names of the above two instances. Now let's see if the two cross-host containers can communicate with each other:
First use Machine to enter the manager1 node, then use docker exec -i to run ping inside the helloworld.1 container, targeting the helloworld.2 container running on the worker2 node:
$ docker-machine ssh manager1 docker exec -i helloworld.1.ay081uome3eejeg4mspa8pdlx \
    ping helloworld.2.16cvore0c96rby1vp0sny3mvt
PING helloworld.2.16cvore0c96rby1vp0sny3mvt (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=0.591 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.594 ms
64 bytes from 10.0.0.4: seq=2 ttl=64 time=0.624 ms
64 bytes from 10.0.0.4: seq=3 ttl=64 time=0.612 ms
^C
Then use Machine to enter the worker2 node, and use docker exec -i to run ping inside the helloworld.2 container, targeting the helloworld.1 container running on the manager1 node:
$ docker-machine ssh worker2 docker exec -i helloworld.2.16cvore0c96rby1vp0sny3mvt \
    ping helloworld.1.ay081uome3eejeg4mspa8pdlx
PING helloworld.1.ay081uome3eejeg4mspa8pdlx (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.466 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.465 ms
64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.548 ms
64 bytes from 10.0.0.3: seq=3 ttl=64 time=0.689 ms
^C
You can see that the containers in the two cross-host service clusters can be connected to each other.
To illustrate the benefit of the Swarm overlay network, we can also run ping from the virtual machine itself against a container on another virtual machine:
$ docker-machine ssh worker2 ping helloworld.1.ay081uome3eejeg4mspa8pdlx
PING helloworld.1.ay081uome3eejeg4mspa8pdlx (221.179.46.190): 56 data bytes
64 bytes from 221.179.46.190: seq=0 ttl=63 time=48.651 ms
64 bytes from 221.179.46.190: seq=1 ttl=63 time=63.239 ms
64 bytes from 221.179.46.190: seq=2 ttl=63 time=47.686 ms
64 bytes from 221.179.46.190: seq=3 ttl=63 time=61.232 ms
^C
$ docker-machine ssh manager1 ping helloworld.2.16cvore0c96rby1vp0sny3mvt
PING helloworld.2.16cvore0c96rby1vp0sny3mvt (221.179.46.194): 56 data bytes
64 bytes from 221.179.46.194: seq=0 ttl=63 time=30.150 ms
64 bytes from 221.179.46.194: seq=1 ttl=63 time=54.455 ms
64 bytes from 221.179.46.194: seq=2 ttl=63 time=73.862 ms
64 bytes from 221.179.46.194: seq=3 ttl=63 time=53.171 ms
^C
Above we pinged the containers from inside the virtual machines, outside the overlay network; the latency is noticeably higher than the ping values measured inside the cluster.
5. Swarm cluster load
Now that we have learned how to deploy a Swarm cluster, let's build an accessible Nginx cluster. Experience the automatic service discovery and cluster load capabilities provided by the latest version of Swarm.
First, delete the helloworld service we started in the previous section:
$ docker service rm helloworld
helloworld
Then create a new service, this time with a port-mapping parameter so that the Nginx services can be reached from the outside world:
$ docker service create --replicas 2 --name helloworld -p 7080:80 --network=swarm_test nginx:alpine
9gfziifbii7a6zdqt56kocyun
View the running status of the service:
$ docker service ls
ID            NAME        REPLICAS  IMAGE         COMMAND
9gfziifbii7a  helloworld  2/2       nginx:alpine
You may have noticed that although the --replicas value is the same both times, when we checked the service status in the previous section REPLICAS showed 0/2, whereas now it shows 2/2.
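The REPLICAS column is a running/desired pair: 0/2 means none of the 2 requested tasks are running yet. If you ever script around this output, the pair splits cleanly with POSIX parameter expansion; the function name below is invented for illustration:

```shell
#!/bin/sh
# Split a REPLICAS value such as "2/2" from `docker service ls`
# into its running and desired counts.
parse_replicas() {
    running=${1%/*}    # strip the "/" and everything after it
    desired=${1#*/}    # strip everything up to and including the "/"
    echo "$running $desired"
}

parse_replicas 0/2   # prints "0 2": nothing running yet
parse_replicas 2/2   # prints "2 2": all requested tasks running
```

A service is fully converged exactly when the two numbers match.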
Likewise, when you view the service details with docker service ps (the output below has been manually adjusted for readability), the instances' CURRENT STATE is Running, whereas in the previous section every CURRENT STATE was Preparing.
$ docker service ps helloworld
ID            NAME          IMAGE         NODE     DESIRED STATE  CURRENT STATE           ERROR
9ikr3agyi...  helloworld.1  nginx:alpine  user-pc  Running        Running 13 seconds ago
7acmhj0u...   helloworld.2  nginx:alpine  worker2  Running        Running 6 seconds ago
This is where Swarm's built-in service discovery comes in. As of Docker 1.12, Swarm has service discovery built in, so we no longer need tools such as Etcd or Consul to provide it. A running container that has no external connectivity is reported by service discovery as Preparing; in this example the state shows Running because the port is published.
Now let's look at another interesting feature of Swarm: what happens when we kill one of the instances.
First, kill the instance running on worker2:
$ docker-machine ssh worker2 docker kill helloworld.2.7acmhj0udzusv1d7lu2tbuhu4
helloworld.2.7acmhj0udzusv1d7lu2tbuhu4
Wait a few seconds, and then take a look at the service status:
$ docker service ps helloworld
ID            NAME             IMAGE         NODE       DESIRED STATE  CURRENT STATE           ERROR
9ikr3agyi...  helloworld.1     nginx:alpine  zuolan-pc  Running        Running 19 minutes ago
8f866igpl...  helloworld.2     nginx:alpine  manager1   Running        Running 4 seconds ago
7acmhj0u...   \_ helloworld.2  nginx:alpine  worker2    Shutdown       Failed 11 seconds ago   ...exit...
$ docker service ls
ID            NAME        REPLICAS  IMAGE         COMMAND
9gfziifbii7a  helloworld  2/2       nginx:alpine
As you can see, even though we killed one of the instances, Swarm quickly removed the stopped container and started a new instance on another node, so the service keeps running with two instances.
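This self-healing is a reconciliation loop: the manager continuously compares the desired replica count with the number of running tasks and schedules the difference. Stripped to its core, the decision is just the following (a hypothetical sketch of the idea, not Swarm's actual code):

```shell
#!/bin/sh
# The core of the reconciliation decision: how many new tasks must be
# scheduled to bring a service back to its desired replica count.
tasks_to_start() {
    desired="$1"
    running="$2"
    echo $(( desired - running ))
}

# After killing one of 2 replicas, only 1 task is still running:
tasks_to_start 2 1   # prints 1, so one replacement task is scheduled
```

The same comparison also drives scaling: raising or lowering the desired count with docker service scale simply changes one side of the subtraction.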
If you want to add more instances at this time, you can use the scale command:
$ docker service scale helloworld=3
helloworld scaled to 3
Check the service details and you can see that three instances have been launched:
$ docker service ps helloworld
ID            NAME             IMAGE         NODE      DESIRED STATE  CURRENT STATE           ERROR
9ikr3agyi...  helloworld.1     nginx:alpine  user-pc   Running        Running 30 minutes ago
8f866igpl...  helloworld.2     nginx:alpine  manager1  Running        Running 11 minutes ago
7acmhj0u...   \_ helloworld.2  nginx:alpine  worker2   Shutdown       Failed 11 minutes ago   exit 137
1vexr1jm...   helloworld.3     nginx:alpine  worker2   Running        Running 4 seconds ago
Now if you want to reduce the number of instances, you can also use the scale command:
$ docker service scale helloworld=2
helloworld scaled to 2
This concludes the main usage of Swarm. We covered creating a Swarm cluster and its network, introduced Swarm's general features including service discovery and load balancing, and then used Swarm to set up a cross-host container network and deploy an application on it.
More practical Swarm examples will follow in future articles.
Summary
That is the full content of this introduction to Docker Swarm by example; I hope you found it helpful. If you have any questions, feel free to leave a message.