2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
Like Docker Compose, Docker Swarm is an official Docker container-orchestration project. The difference is that Docker Compose is a tool for creating multiple containers on a single server or host, while Docker Swarm can create container cluster services across multiple servers or hosts, which makes it better suited to microservice deployment. Since Docker version 1.12.0, Docker Swarm has been built into the Docker engine (docker swarm), with service discovery built in as well, so there is no longer any need to configure Etcd or Consul separately for service discovery.

A Docker Swarm cluster involves three roles: manager, worker, and service. These are essentially like the organizational structure of a company: there are leaders (managers), there are workers, and a service is the task the leaders assign to the workers. Note that in a Docker Swarm cluster every Docker host can take the manager role, but they cannot all be workers; the cluster cannot be leaderless. Also, the hostnames of all hosts participating in the cluster must not conflict.

Docker Swarm planning
Docker install

[root@k8s-master01 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]# mv docker-ce.repo /etc/yum.repos.d/
[root@k8s-master01 ~]# yum install -y docker-ce
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl start docker

Set up SSH interconnection and open iptables access (on all three hosts):

[root@k8s-master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub k8s-master01
[root@k8s-master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub k8s-node03
[root@k8s-master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub k8s-node02
[root@k8s-master01 ~]# iptables -A INPUT -p tcp -s 192.168.1.29 -j ACCEPT
[root@k8s-master01 ~]# iptables -A INPUT -p tcp -s 192.168.1.101 -j ACCEPT

Set k8s-master01 as the manager node:

[root@k8s-master01 ~]# docker swarm init --advertise-addr 192.168.1.23
Configure the worker nodes to join the Swarm cluster
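The join command itself is printed by `docker swarm init` on the manager. A minimal sketch of its shape, using a placeholder token (the real one comes from the init output, or later from `docker swarm join-token worker`):

```shell
# Hypothetical values: the real token and full join command are printed by
# `docker swarm init` on the manager, and can be reprinted at any time with
# `docker swarm join-token worker`.
SWARM_TOKEN="SWMTKN-1-xxxx"            # placeholder token
MANAGER_ADDR="192.168.1.23:2377"       # manager's advertise address; 2377 is the default swarm port
JOIN_CMD="docker swarm join --token $SWARM_TOKEN $MANAGER_ADDR"
echo "$JOIN_CMD"                       # run the printed command on k8s-node02 and k8s-node03
```

Run the printed command on each worker; the node then shows up in `docker node ls` on the manager.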
Upgrade a node to manager:

[root@k8s-master01 ~]# docker node promote k8s-node02  # upgrade k8s-node02 from worker to manager (docker node commands run on a manager)

If k8s-node02 or k8s-node03 wants to leave the cluster (taking k8s-node03 as an example):

[root@k8s-node03 ~]# docker swarm leave  # execute this command on k8s-node03
Node left the swarm.
[root@k8s-master01 ~]# docker node rm k8s-node03  # then remove k8s-node03 on the manager node
[root@k8s-master01 ~]# docker swarm leave -f  # when leaving on the last manager, the "-f" option is required; after the last manager leaves, the cluster no longer exists

Docker Swarm common commands:

[root@k8s-master01 ~]# docker node ls  # list the nodes in the cluster
ID                          HOSTNAME      STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
zdtrj1duj7rz2m0a0w4bh4kww * k8s-master01  Ready   Active        Leader          19.03.4
nnc6df9g6gzwpstjspdvrvl5u   k8s-node02    Ready   Active                        19.03.3-rc1
vkxwfe025vp3m3dxyphkfc0u6   k8s-node03    Ready   Active                        19.03.4
[root@k8s-master01 ~]# docker service ls  # list services
[root@k8s-master01 ~]# docker swarm join-token worker  # if you need to add a worker later, this prints the token (i.e. the full command to run when joining)
[root@k8s-master01 ~]# docker swarm join-token manager  # likewise, prints the token for adding a manager
Dynamic scaling of containers:

[root@k8s-master01 ~]# docker service scale nginx=3  # scale the nginx service to 3 containers
[root@k8s-master01 ~]# docker service ps nginx  # view which nodes the created containers run on
[root@k8s-master01 ~]# docker service ls  # view the created services
[root@k8s-master01 ~]# docker service rm helloworld  # delete a service
[root@k8s-master01 ~]# docker service inspect helloworld  # view service details
[root@k8s-node02 ~]# docker swarm leave  # detach k8s-node02 from the cluster
[root@k8s-master01 ~]# docker node rm k8s-node02  # then remove k8s-node02 on a manager
[root@k8s-master01 ~]# docker node promote k8s-node02  # upgrade k8s-node02 from worker to manager; after the upgrade its MANAGER STATUS will be Reachable
[root@k8s-master01 ~]# docker node demote k8s-node02  # downgrade k8s-node02 from the manager role back to worker
[root@k8s-master01 ~]# docker node update --availability drain k8s-node02  # stop scheduling new tasks on k8s-node02; its running service tasks are shut down and rescheduled on other active nodes

Building the web interface for Docker Swarm:

[root@k8s-master01 ~]# docker run -d -p 8000:8080 -e HOST=172.168.1.3 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock --name visualizer dockersamples/visualizer
Set up a service:

[root@k8s-master01 ~]# docker service create --replicas 1 --name helloworld alpine ping docker.com
[root@k8s-master01 ~]# docker service ls  # view the service
ID            NAME        MODE        REPLICAS  IMAGE          PORTS
le5fusj4rses  helloworld  replicated  1/1       alpine:latest
[root@k8s-master01 ~]# docker service inspect --pretty helloworld
ID:             le5fusj4rsesv6d4ywxwrvwno
Name:           helloworld
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:  stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order: stop-first
ContainerSpec:
 Image:         alpine:latest@sha256:c19173c5ada610a5989151111163d28a67368362762534d8a8121ce95cf2bd5a
 Args:          ping docker.com
 Init:          false
Resources:
Endpoint Mode:  vip

At this point the service is running on one machine. Let's scale it out so it runs on more workers:

[root@k8s-master01 ~]# docker service scale helloworld=3  # to scale the container count, just change the number after helloworld=
Then you can run docker ps -a on each worker to check where the containers landed.

Configuring specific nodes not to run containers: with the configuration above, when you run a specified number of containers, all Docker hosts in the cluster are scheduled in a polling manner until the specified number of containers is running. What if you do not want the manager k8s-master01 to run containers? Configure the following:

[root@k8s-master01 ~]# docker node update --availability drain k8s-master01  # stop scheduling new tasks on k8s-master01; its running service tasks are rescheduled on other active nodes
# The "--availability" option accepts three values:
# "active": the node accepts new tasks; "pause": the node temporarily accepts no new tasks (existing tasks keep running); "drain": the node accepts no new tasks and its service tasks are moved elsewhere

Scale out again to observe the effect:

[root@k8s-master01 ~]# docker service scale helloworld=6
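A small sketch of the three availability states: the loop below only prints the `docker node update` commands rather than running them against a live swarm (the node name is taken from this cluster).

```shell
# Print (without executing) the node-availability command for each of the three
# states. "active" lets the scheduler assign new tasks; "pause" stops new task
# assignment but leaves existing tasks running; "drain" stops new assignment and
# moves existing service tasks to other active nodes.
NODE="k8s-node02"
CMDS=""
for STATE in active pause drain; do
  CMDS="$CMDS
docker node update --availability $STATE $NODE"
done
echo "$CMDS"
```

To put a drained node back to work, set its availability back to active.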
Create an nginx service:

[root@k8s-master01 ~]# docker service create --replicas 2 --name nginx15 -p 80 192.168.1.23:5000/nginx:1.15
[root@k8s-master01 ~]# docker service ls
ID            NAME        MODE        REPLICAS  IMAGE                         PORTS
le5fusj4rses  helloworld  replicated  6/6       alpine:latest
tw7s5ps953lm  nginx15     replicated  2/2       192.168.1.23:5000/nginx:1.15  *:30001->80/tcp

Notice that k8s-master01 is a cluster node that runs no tasks, yet the port is still published there.

The routing mesh of a Docker Swarm cluster
Swarm makes it easy to publish service ports. All nodes participate in the ingress routing mesh, which lets every node in the cluster accept connections on a published port for any service, even when no task of that service is running on that node. The routing mesh routes all incoming requests to an available node, that is, to a live container.
[root@k8s-master01 ~]# docker service create --replicas 2 --name nginx --publish 8081:80 192.168.1.23:5000/nginx:1.15
[root@k8s-master01 ~] # curl http://192.168.1.29:8081
[root@k8s-master01 ~] # curl http://192.168.1.101:8081
Docker Swarm orchestration (docker-swarm.yaml):

version: "3.3"
services:
  redis:
    image: redis:latest
    container_name: redis1
    hostname: redis1
    networks:
      hanye1:
        ipv4_address: 172.3.1.2
    dns:
      - "114.114.114.114"
    ports:
      - "6379:6379"
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 2
      mode: replicated
      resources:
        limits:
          cpus: "0.2"
          memory: 100M
        reservations:
          cpus: "0.2"
          memory: 100M
networks:
  hanye1:
    ipam:
      driver: default
      config:
        - subnet: 172.3.1.1/24

Deploy it:

[root@k8s-master01 ~]# docker stack deploy -c docker-swarm.yaml up  # the last argument ("up" here) is the stack name