If anything in this article is hard to follow, there is a small experiment at the end to help clarify it.
Docker swarm cluster: one of the Docker "three Musketeers"
Preparation:
docker01 (192.168.1.10): myvisualizer.tar, nginx.tar
docker02 (192.168.1.20): nginx.tar
docker03 (192.168.1.30): nginx.tar
Turn off the firewall, disable selinux, give the 3 docker hosts distinct hostnames, and synchronize their time.
[root@docker01 ~]# systemctl stop firewalld
[root@docker01 ~]# systemctl disable firewalld
[root@docker01 ~]# setenforce 0
Time synchronization:
mv /etc/localtime /etc/localtime.bk
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
The docker version must be v1.12 or later:
[root@docker01 ~]# docker -v
Docker version 18.09.0, build 4d60db4
Add domain name resolution on each host:
[root@docker01 ~]# vim /etc/hosts
192.168.1.10 docker01
192.168.1.20 docker02
192.168.1.30 docker03
Swarm: a cluster of hosts running Docker Engine.
Node: each Docker Engine is a node; nodes are divided into managers and workers.
manager node: responsible for orchestration and cluster management, keeping the swarm in the desired state. A swarm can have multiple manager nodes; they automatically negotiate and elect a Leader to carry out the orchestration tasks. Conversely, a swarm cannot run without a manager node.
worker node: accepts and executes tasks dispatched by the manager node. By default a manager node is also a worker node, but you can make it a manager-only node so that it is responsible solely for orchestration and management.
Service: defines the tasks (commands) to be executed on the worker nodes.
1) Initialize the cluster
docker01 initialization:
[root@docker01 ~]# docker swarm init --advertise-addr 192.168.1.10
// --advertise-addr: specifies the address that other nodes use to communicate with this manager.
The output above tells us that the initialization succeeded. To add a worker node, run the following command:
docker swarm join --token SWMTKN-1-0tx0cf540mq3stxknq8xlv2183ymeeld9zvxen7x1tepw1z2un-e6onamnuj4nck4bw8k294ujnn 192.168.1.10:2377
PS: note that the token is only valid for 24 hours.
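If the token has expired or was not saved, it can be printed again (or rotated) on the manager. A small sketch using standard docker commands; the token value will be whatever your own cluster generated:
[root@docker01 ~]# docker swarm join-token worker            # print the current worker join command again
[root@docker01 ~]# docker swarm join-token --rotate worker   # invalidate the old worker token and issue a new one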
If you want to add a manager node, run the following command: docker swarm join-token manager
docker02 and docker03 run:
[root@docker02 ~]# docker swarm join --token SWMTKN-1-0tx0cf540mq3stxknq8xlv2183ymeeld9zvxen7x1tepw1z2un-e6onamnuj4nck4bw8k294ujnn 192.168.1.10:2377
[root@docker03 ~]# docker swarm join --token SWMTKN-1-0tx0cf540mq3stxknq8xlv2183ymeeld9zvxen7x1tepw1z2un-e6onamnuj4nck4bw8k294ujnn 192.168.1.10:2377
When the other two nodes have joined successfully, we can use docker node ls to view the node information.
Docker01 View:
[root@docker01 ~] # docker node ls
PS:
*: the node the current terminal is connected to
Ready: the node is ready and can take on work
Active: the node is available to run tasks
docker02 and docker03 operations:
docker swarm leave: a node asks to leave the cluster (it "resigns" on its own). Afterwards, on the manager the node's status changes to Down, and it can then be deleted through the manager node.
Resignation:
[root@docker02 ~]# docker swarm leave
Node left the swarm.
Operations on docker01:
Docker node rm xxx: delete a node (fired).
Dismissal:
[root@docker01 ~]# docker node rm docker02
docker02
[root@docker01 ~]# docker node rm docker03
docker03
Basic operating commands:
docker swarm join-token [manager | worker]: generates a token, either with a manager identity or with a worker identity.
[root@docker01 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0tx0cf540mq3stxknq8xlv2183ymeeld9zvxen7x1tepw1z2un-ca100vimkqxp3d2ka30o2y0fi 192.168.1.10:2377
Operations on docker02 and docker03:
[root@docker03 ~]# docker swarm join --token SWMTKN-1-0tx0cf540mq3stxknq8xlv2183ymeeld9zvxen7x1tepw1z2un-ca100vimkqxp3d2ka30o2y0fi 192.168.1.10:2377
This node joined a swarm as a manager.
docker node demote (demotion): downgrade a manager node of the swarm to a worker.
docker node promote (promotion): upgrade a worker node of the swarm to a manager.
Operations on docker01:
[root@docker01 ~]# docker node demote docker02
Manager docker02 demoted in the swarm.
[root@docker01 ~]# docker node demote docker03
Manager docker03 demoted in the swarm.
[root@docker01 ~]# docker node ls
ID                            HOSTNAME   STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
kzwsrnrij3gauh9kkzio2ktab *   docker01   Ready   Active        Leader          18.09.0
c39620kukfj2lgjm53yw7srq5     docker02   Ready   Active                        18.09.0
mtghji5h2muwrxrb1j174u66p     docker03   Ready   Active                        18.09.0
2) Deploy the docker swarm cluster network
Overlay: overlay network.
Docker01 operation:
[root@docker01 ~]# docker network create -d overlay --attachable docker
sfot05jf5hkjjdx1el56ffc9e
// --attachable: this parameter must be added, otherwise the network cannot be used by standalone containers.
Note that when we created this network we did not deploy a separate key-value store such as consul, because docker swarm comes with its own built-in store.
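To confirm that the new network is an overlay managed by the swarm's built-in store, you can check its scope. A quick sketch using standard docker commands:
[root@docker01 ~]# docker network ls --filter driver=overlay    # the custom "docker" network and "ingress" both show scope "swarm"
[root@docker01 ~]# docker network inspect docker                 # shows "Driver": "overlay" and "Scope": "swarm"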
3) Deploy a graphical webUI interface
docker01 imports the myvisualizer.tar image:
[root@docker01 ~]# docker load < myvisualizer.tar
[root@docker01 ~]# docker run -d -p 8080:8080 -e HOST=192.168.1.10 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock --name visualizer dockersamples/visualizer
Then you can access it from a browser to verify. If the page cannot be reached, enable IP forwarding:
[root@docker01 ~]# echo net.ipv4.ip_forward = 1 >> /etc/sysctl.conf
[root@docker01 ~]# sysctl -p
net.ipv4.ip_forward = 1
4) Create a service (all three docker hosts take part)
[root@docker01 ~]# docker service create --replicas 1 --network docker --name web1 -p 80 nginx:latest
// --replicas: the number of replicas (copies).
You can think of one replica as one container.
// View the service:
[root@docker01 ~]# docker service ls
ID            NAME  MODE        REPLICAS  IMAGE         PORTS
ivihvmk98bz5  web1  replicated  1         nginx:latest  *:80->80/tcp
// View service information:
[root@docker01 ~]# docker service ps web1
ID            NAME    IMAGE         NODE      DESIRED STATE  CURRENT STATE          ERROR  PORTS
w9u7bun0cg1f  web1.1  nginx:latest  docker03  Running        Running 4 minutes ago
// Create 5 containers:
[root@docker01 ~]# docker service create --replicas 5 --network docker --name web -p 80 nginx:latest
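To see how the 5 replicas are spread across the three nodes, a quick check with standard commands (the exact placement depends on your own scheduling):
[root@docker01 ~]# docker service ls        # the REPLICAS column shows how many are running, e.g. 5/5
[root@docker01 ~]# docker service ps web    # the NODE column shows which host runs each replica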
After suspending docker02:
After suspending docker03 as well: all replicas move to docker01.
But when docker02 and docker03 are brought back up, the tasks do not move back automatically.
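If you want the replicas to spread out again after the nodes come back, one option (not covered by the article itself) is to force a rolling restart of the service so the scheduler redistributes the tasks:
[root@docker01 ~]# docker service update --force web    # restarts the tasks; the scheduler places them across the available nodes again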
// Delete a service (and its containers):
[root@docker01 ~]# docker service rm web1
web1
// Increase or decrease the number of containers:
[root@docker01 ~]# docker service scale web=8
// Set the manager node not to take on work:
[root@docker01 ~]# docker node update --availability drain docker01
docker01
5) Build a private registry
docker pull registry:2
docker run -itd --name registry --restart=always -p 5000:5000 -v /registry:/var/lib/registry registry:2
docker pull busybox
docker tag busybox:latest 192.168.1.10:5000/busybox
vim /usr/lib/systemd/system/docker.service
Line 13: ExecStart=/usr/bin/dockerd --insecure-registry 192.168.1.10:5000
systemctl daemon-reload
systemctl restart docker
docker push 192.168.1.10:5000/busybox:latest
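After the push, you can optionally check that the image actually landed in the private registry by querying the registry's standard v2 API:
[root@docker01 ~]# curl http://192.168.1.10:5000/v2/_catalog    # should list the repositories, e.g. {"repositories":["busybox"]}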
6) Custom image
Requirement: based on the httpd image, change the content of the main access page. The image tag versions are v1, v2, and v3, and the corresponding page contents are 111, 222, and 333.
[root@docker01 ~]# mkdir {v1,v2,v3}
[root@docker01 ~]# cd v1/
[root@docker01 v1]# cat index.html
11111111111111111
## v2 and v3 are done the same way, with contents 222 and 333
[root@docker01 v1]# cat Dockerfile
FROM httpd
ADD index.html /usr/local/apache2/htdocs/index.html
[root@docker01 v1]# docker build -t 192.168.1.10:5000/httpd:v1 .
[root@docker01 v1]# docker push 192.168.1.10:5000/httpd:v1
7) Publish a service based on the above image
Requirement: the number of copies is 3. The name of the service is bdqn.
[root@docker01 v1]# docker service create --replicas 3 --name bdqn -p 80:80 192.168.1.10:5000/httpd:v1
PS: go to docker02 and docker03 and use docker service ls to check whether the three bdqn replicas are there!
This is the interface for Internet access:
Docker02:
[root@docker02 ~]# docker exec -it 915bb2da7d43 /bin/bash
root@915bb2da7d43:/usr/local/apache2# cd htdocs/
root@915bb2da7d43:/usr/local/apache2/htdocs# echo 12345 > index.html
[root@docker02 ~]# curl 127.0.0.1
54321
Docker03:
[root@docker03 ~]# docker exec -it bdqn.1.kaksxkdur0fhypukm8q2zms3i /bin/bash
root@31c5f6af1259:/usr/local/apache2# cd htdocs/
root@31c5f6af1259:/usr/local/apache2/htdocs# echo 54321 > index.html
root@31c5f6af1259:/usr/local/apache2/htdocs# exit
exit
[root@docker03 ~]# curl 127.0.0.1
54321
After that, verify:
[root@docker01 ~]# curl 127.0.0.1
11111111111111111
The default ingress network, together with the custom overlay network we created, provides a unified entry point in front of the backend containers that actually serve users.
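You can look at the routing mesh network itself to see this unified entry point. A small sketch with a standard command:
[root@docker01 ~]# docker network inspect ingress    # the swarm-scoped ingress overlay that load-balances published ports across nodes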
Service scaling (expansion and reduction)
[root@docker01 ~]# docker service create --replicas 3 --name test -p 80 192.168.1.10:5000/httpd:v1
Because only the container port is given (-p 80), a published port is mapped automatically from the range 30000-32767.
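To find out which published port the routing mesh actually picked for the test service, for example:
[root@docker01 ~]# docker service ls          # the PORTS column shows something like *:30000->80/tcp
[root@docker01 ~]# curl 127.0.0.1:30000       # assuming 30000 is the port your cluster assigned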
[root@docker01 ~] # docker service scale bdqn=6
You can set the number of replicas directly through scale.
Service upgrade and rollback
Upgrade operation:
[root@docker01 ~]# docker service update --image 192.168.1.10:5000/httpd:v2 bdqn
// Smooth (rolling) update:
[root@docker01 ~]# docker service update --image 192.168.1.10:5000/httpd:v3 --update-parallelism 2 --update-delay 1m bdqn
PS: by default, swarm updates only one replica at a time, with no wait time between replicas. With --update-parallelism we can set how many replicas are updated in parallel.
--update-delay: specifies the interval between batches of the rolling update.
Rollback operation
[root@docker01 ~]# docker service rollback bdqn
PS: note that docker swarm's rollback, by default, only returns the service to the state before the previous operation; it cannot keep rolling back step by step through the whole history.
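To confirm which image version the replicas are running after the rollback, a quick check (the version shown depends on what you rolled back from):
[root@docker01 ~]# docker service ps bdqn                # the IMAGE column shows e.g. 192.168.1.10:5000/httpd:v2 after rolling back from v3
[root@docker01 ~]# docker service inspect --pretty bdqn  # also prints the image currently configured for the service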
Small experiment:
3 docker:
Docker01 docker02 docker03
192.168.1.10 192.168.1.20 192.168.1.30
Deploy a swarm cluster with 3 docker hosts: docker01 is the manager node, docker02 and docker03 are worker nodes.
All three docker hosts do:
DNS domain name resolution
[root@docker01 ~]# vim /etc/hosts
192.168.1.10 docker01
192.168.1.20 docker02
192.168.1.30 docker03
Initialize:
[root@docker01 ~]# docker swarm init --advertise-addr 192.168.1.10
Docker02 joins the cluster:
[root@docker02 ~]# docker swarm join --token SWMTKN-1-3rtvbfgl70u9fndd02kazcne3ib7zqzfhrx7v1ty2ebmod4ex6-2xe4chwto2m04mcwcn601zn54 192.168.1.10:2377
Docker03 joins the cluster:
[root@docker03 ~]# docker swarm join --token SWMTKN-1-3rtvbfgl70u9fndd02kazcne3ib7zqzfhrx7v1ty2ebmod4ex6-2xe4chwto2m04mcwcn601zn54 192.168.1.10:2377
[root@docker01 ~] # docker node ls
The myvisualizer.tar image is required; it can be found on the Internet.
Import myvisualizer.tar as a docker image:
[root@docker01 ~] # docker load < myvisualizer.tar
Run the visualizer container:
[root@docker01 ~]# docker run -d -p 8080:8080 -e HOST=192.168.1.10 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock --name visualizer dockersamples/visualizer
Access it from a browser to verify (http://192.168.1.10:8080):
Deploy a service that uses the httpd image, with the name test and 8 replicas, and require that the swarm cluster's manager node does not take part in the work.
The httpd.tar package is needed; it can be downloaded from the Internet.
Import httpd.tar as a docker image:
[root@docker01 ~] # docker load < httpd.tar
Create a network named docker, with the overlay driver:
[root@docker01 ~]# docker network create -d overlay --attachable docker
Use the httpd image to create a service named test, with 8 replicas, on the docker network:
[root@docker01 ~]# docker service create --replicas 8 --network docker --name test -p 80 httpd:latest
Verification
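For example, check that all 8 replicas are running and how they are spread across the three nodes (the visualizer deployed above shows the same thing graphically):
[root@docker01 ~]# docker service ls          # REPLICAS should show 8/8 for test
[root@docker01 ~]# docker service ps test     # the NODE column shows where each replica landed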
Keep docker01 from taking on work (drain the manager node):
[root@docker01 ~]# docker node update --availability drain docker01
Verify on docker01:
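A sketch of how the drain can be verified: after the drain, the tasks that were on docker01 should be rescheduled onto docker02 and docker03:
[root@docker01 ~]# docker node ls             # docker01 now shows AVAILABILITY "Drain"
[root@docker01 ~]# docker service ps test     # no task should remain running on docker01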