2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains how to build and deploy zookeeper+kafka on docker swarm. The content is simple and clear and easy to follow; work through the steps below to learn the setup.
1. Machine preparation
Prepare three machines with IPs 192.168.10.51, 192.168.10.52, and 192.168.10.53 and hostnames centos51, centos52, and centos53. All three machines should already be part of a docker swarm; for setting up the swarm itself, refer to the separate article on docker swarm cluster building.
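The compose files later in this article pin services to nodes by hostname, so each host must resolve the others' names. A minimal sketch, assuming name resolution is done with /etc/hosts entries built from the IP/hostname pairs above (an assumption; DNS would work equally well):

```shell
# Print /etc/hosts-style lines for the three swarm nodes.
ips=(192.168.10.51 192.168.10.52 192.168.10.53)
names=(centos51 centos52 centos53)
for i in 0 1 2; do
  echo "${ips[$i]} ${names[$i]}"
done
```

Appending these lines to /etc/hosts on every node is one option; the placement constraints only require that `docker node ls` shows the matching hostnames.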
2. Image preparation
Pull the zookeeper, kafka, and kafka-manager images from https://hub.docker.com/:
```shell
docker pull zookeeper:3.6.1
docker pull wurstmeister/kafka:2.12-2.5.0
docker pull kafkamanager/kafka-manager:3.0.0.4
```
3. Prepare the zookeeper compose file
File name: docker-stack-zookeeper.yml
```yaml
version: "3.2"
services:
  # zookeeper services
  zookeeper-server-a:
    hostname: zookeeper-server-a
    image: zookeeper:3.6.1
    ports:
      - "12181:2181"
    networks:
      swarm-net:
        aliases:
          - zookeeper-server-a
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 1
      ZOO_SERVERS: "server.1=zookeeper-server-a:2888:3888;2181 server.2=zookeeper-server-b:2888:3888;2181 server.3=zookeeper-server-c:2888:3888;2181"
    volumes:
      - /data/kafka_cluster/zookeeper/data:/data
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == centos51]
      resources:
        limits:
          # cpus: '1'
          memory: 1GB
        reservations:
          # cpus: '0.2'
          memory: 512M
  zookeeper-server-b:
    hostname: zookeeper-server-b
    image: zookeeper:3.6.1
    ports:
      - "22181:2181"
    networks:
      swarm-net:
        aliases:
          - zookeeper-server-b
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 2
      ZOO_SERVERS: "server.1=zookeeper-server-a:2888:3888;2181 server.2=zookeeper-server-b:2888:3888;2181 server.3=zookeeper-server-c:2888:3888;2181"
    volumes:
      - /data/kafka_cluster/zookeeper/data:/data
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == centos52]
      resources:
        limits:
          # cpus: '1'
          memory: 1GB
        reservations:
          # cpus: '0.2'
          memory: 512M
  zookeeper-server-c:
    hostname: zookeeper-server-c
    image: zookeeper:3.6.1
    ports:
      - "32181:2181"
    networks:
      swarm-net:
        aliases:
          - zookeeper-server-c
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 3
      ZOO_SERVERS: "server.1=zookeeper-server-a:2888:3888;2181 server.2=zookeeper-server-b:2888:3888;2181 server.3=zookeeper-server-c:2888:3888;2181"
    volumes:
      - /data/kafka_cluster/zookeeper/data:/data
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == centos53]
      resources:
        limits:
          # cpus: '1'
          memory: 1GB
        reservations:
          # cpus: '0.2'
          memory: 512M
networks:
  swarm-net:
    external:
      name: swarm-net
```
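The `ZOO_SERVERS` value is easy to mistype, since every entry repeats the `:2888:3888;2181` suffix (2888 is the quorum port, 3888 the leader-election port, 2181 the client port). A small sketch, not from the original article, that generates the exact string used above:

```shell
# Build the ZOO_SERVERS string for zookeeper-server-a/b/c.
servers=""
i=1
for suffix in a b c; do
  servers="$servers server.$i=zookeeper-server-$suffix:2888:3888;2181"
  i=$((i + 1))
done
servers="${servers# }"   # strip the leading space
echo "$servers"
```

Comparing this output against the compose file is a quick way to catch a wrong server index or a dropped port before deploying.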
4. Prepare the kafka compose file
File name: docker-stack-kafka.yml
```yaml
version: "3.2"
services:
  # kafka services
  kafka-server-a:
    hostname: kafka-server-a
    image: wurstmeister/kafka:2.12-2.5.0
    ports:
      - "19092:9092"
    networks:
      swarm-net:
        aliases:
          - kafka-server-a
    environment:
      - TZ=CST-8
      - KAFKA_ADVERTISED_HOST_NAME=kafka-server-a
      - HOST_IP=kafka-server-a
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper-server-a:2181,zookeeper-server-b:2181,zookeeper-server-c:2181
      - KAFKA_BROKER_ID=0
      - "KAFKA_HEAP_OPTS=-Xmx512M -Xms16M"
    volumes:
      - /data/kafka_cluster/kafka/data:/kafka/kafka-logs-kafka-server-a
      - /data/kafka_cluster/kafka/logs:/opt/kafka/logs
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == centos51]
      resources:
        limits:
          # cpus: '1'
          memory: 1GB
        reservations:
          # cpus: '0.2'
          memory: 512M
  kafka-server-b:
    hostname: kafka-server-b
    image: wurstmeister/kafka:2.12-2.5.0
    ports:
      - "29092:9092"
    networks:
      swarm-net:
        aliases:
          - kafka-server-b
    environment:
      - TZ=CST-8
      - KAFKA_ADVERTISED_HOST_NAME=kafka-server-b
      - HOST_IP=kafka-server-b
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper-server-a:2181,zookeeper-server-b:2181,zookeeper-server-c:2181
      - KAFKA_BROKER_ID=1
      - "KAFKA_HEAP_OPTS=-Xmx512M -Xms16M"
    volumes:
      - /data/kafka_cluster/kafka/data:/kafka/kafka-logs-kafka-server-b
      - /data/kafka_cluster/kafka/logs:/opt/kafka/logs
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == centos52]
      resources:
        limits:
          # cpus: '1'
          memory: 1GB
        reservations:
          # cpus: '0.2'
          memory: 512M
  kafka-server-c:
    hostname: kafka-server-c
    image: wurstmeister/kafka:2.12-2.5.0
    ports:
      - "39092:9092"
    networks:
      swarm-net:
        aliases:
          - kafka-server-c
    environment:
      - TZ=CST-8
      - KAFKA_ADVERTISED_HOST_NAME=kafka-server-c
      - HOST_IP=kafka-server-c
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper-server-a:2181,zookeeper-server-b:2181,zookeeper-server-c:2181
      - KAFKA_BROKER_ID=2
      - "KAFKA_HEAP_OPTS=-Xmx512M -Xms16M"
    volumes:
      - /data/kafka_cluster/kafka/data:/kafka/kafka-logs-kafka-server-c
      - /data/kafka_cluster/kafka/logs:/opt/kafka/logs
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == centos53]
      resources:
        limits:
          # cpus: '1'
          memory: 1GB
        reservations:
          # cpus: '0.2'
          memory: 512M
networks:
  swarm-net:
    external:
      name: swarm-net
```
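The three broker services differ only in a handful of values. A sketch of that pattern, handy as a sanity check when editing the file (the port arithmetic below is an observation about this particular layout, not a kafka requirement):

```shell
# Each broker n (0-based) gets KAFKA_BROKER_ID=n and publishes
# host port (n+1)9092 mapped to container port 9092.
id=0
for suffix in a b c; do
  host_port="$((id + 1))9092"
  echo "kafka-server-$suffix: KAFKA_BROKER_ID=$id, published port $host_port:9092"
  id=$((id + 1))
done
```

Broker IDs must be unique across the cluster, and the published host ports must not collide, since swarm publishes them on every node's ingress network.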
5. Prepare the kafka-manager compose file
File name: docker-stack-kafka-manager.yml
```yaml
version: "3.2"
services:
  # kafka manager service
  kafka-manager:
    hostname: kafka-manager
    image: kafkamanager/kafka-manager:3.0.0.4
    ports:
      - "19000:9000"
    networks:
      swarm-net:
        aliases:
          - kafka-manager
    environment:
      - ZK_HOSTS=zookeeper-server-a:2181,zookeeper-server-b:2181,zookeeper-server-c:2181
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == centos51]
      resources:
        limits:
          # cpus: '1'
          memory: 1GB
        reservations:
          # cpus: '0.2'
          memory: 512M
networks:
  swarm-net:
    external:
      name: swarm-net
```
Note: the top-level network key must be swarm-net (the name the service references), not a different alias. Once deployed, the kafka-manager UI is reachable on port 19000 of centos51.
6. Create the mapped data directories on all three machines
```shell
mkdir -p {/data/kafka_cluster/zookeeper/data,/data/kafka_cluster/kafka/data,/data/kafka_cluster/kafka/logs}
chmod -R 777 /data/kafka_cluster/
```
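The brace expansion above creates all three directories in one command. A self-contained illustration of the same layout under a temporary root, so it can be tried without touching /data or needing root:

```shell
# Create the same directory layout under a scratch directory.
root=$(mktemp -d)
mkdir -p "$root"/data/kafka_cluster/{zookeeper/data,kafka/data,kafka/logs}
chmod -R 777 "$root/data/kafka_cluster/"
# kafka_cluster now contains zookeeper/data, kafka/data, and kafka/logs.
ls "$root/data/kafka_cluster"
```

On the real nodes the directories must exist before the stacks start, otherwise docker creates the bind-mount paths as root-owned and the containers may fail to write to them.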
7. Execute compose
Be sure to execute the commands in order: wait for each stack to come up successfully before running the next one.
```shell
docker stack deploy -c docker-stack-zookeeper.yml zoo --resolve-image=never --with-registry-auth
docker stack deploy -c docker-stack-kafka.yml kafka --resolve-image=never --with-registry-auth
docker stack deploy -c docker-stack-kafka-manager.yml kafka_manager --resolve-image=never --with-registry-auth
```
After each deploy, `docker service ls` should show the new stack's services at 1/1 replicas before you run the next command.
Thank you for reading. That concludes "how to build and deploy zookeeper+kafka in docker swarm". After working through this article, you should have a deeper understanding of the setup; the specific usage still needs to be verified in practice. More related articles will follow, and you are welcome to keep reading.