This article introduces how to build a Kafka cluster with Docker containers. Many people run into trouble with this in real-world cases, so let the editor walk you through how to handle these situations. I hope you will read carefully and get something out of it!
First, the construction of the Kafka cluster
1. Pull the relevant images
docker pull wurstmeister/kafka
docker pull zookeeper
2. Run zookeeper
docker run -d --name zookeeper -p 2181:2181 -t zookeeper
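Before starting the brokers, it is worth confirming that the zookeeper container is actually up. A quick check (not part of the original walkthrough):
# list running containers and confirm "zookeeper" shows port 2181
docker ps --filter name=zookeeper
# optionally tail its logs to see that it finished starting
docker logs zookeeper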
3. Run kafka
Kafka0:
docker run -d --name kafka0 -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.16.129:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.16.129:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka
Kafka1:
docker run -d --name kafka1 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=192.168.16.129:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.16.129:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -t wurstmeister/kafka
Kafka2:
docker run -d --name kafka2 -p 9094:9094 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=192.168.16.129:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.16.129:9094 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9094 -t wurstmeister/kafka
Parameter description:
-e KAFKA_BROKER_ID=0: in a kafka cluster, each broker needs a unique BROKER_ID to identify itself
-e KAFKA_ZOOKEEPER_CONNECT=10.20.8.50:2181/kafka: the zookeeper connection string that manages this kafka instance (here with the chroot path /kafka)
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.20.8.50:9092: the address and port the broker registers with zookeeper; for remote access this should be the public IP
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092: the address and port the broker listens on inside the container; keep it bound to 0.0.0.0
-v /etc/localtime:/etc/localtime: mounts the host's localtime file so the container's time stays in sync with the host (virtual machine)
Start 3 Kafka nodes
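The three broker commands differ only in the broker id and port, so they can also be scripted. A minimal sketch, assuming the same host IP 192.168.16.129 and the wurstmeister/kafka image used above:
# start brokers 0..2 on ports 9092..9094 (adjust HOST_IP to your own host)
HOST_IP=192.168.16.129
for i in 0 1 2; do
  PORT=$((9092 + i))
  docker run -d --name kafka$i -p $PORT:$PORT \
    -e KAFKA_BROKER_ID=$i \
    -e KAFKA_ZOOKEEPER_CONNECT=$HOST_IP:2181 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$HOST_IP:$PORT \
    -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:$PORT \
    -t wurstmeister/kafka
done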
4. Set up topic
Enter kafka0
docker exec -it kafka0 /bin/bash
Enter the bin directory
cd /opt/kafka_2.13-2.8.1/bin
Create topic
kafka-topics.sh --create --zookeeper 192.168.16.129:2181 --replication-factor 3 --partitions 5 --topic TestTopic
View topic
kafka-topics.sh --describe --zookeeper 192.168.16.129:2181 --topic TestTopic
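To double-check that the topic was actually created, the topics can also be listed (an extra step, not in the original text):
kafka-topics.sh --list --zookeeper 192.168.16.129:2181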
Kafka spreads a topic's partitions across the different brokers, so the 5 partitions of this topic are distributed over the 3 brokers: two brokers get two partitions each and the remaining broker gets only one, as shown in the figure:
Cluster node description:
Topic: TestTopic PartitionCount: 5 ReplicationFactor: 3 means TestTopic has 5 partitions and 3 replicas per partition
Topic: the topic name
Leader: the Broker.id of the leader replica for that partition
Replicas: the brokers that hold replicas of the partition, Broker.id = 2, 0, 1 (both the leader replica and the follower replicas, regardless of whether they are alive)
Isr: the replicas that are alive and in sync with the leader, Broker.id = 2, 0, 1
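Putting these fields together, one partition line of the --describe output looks roughly like this (illustrative only; the actual leader and replica assignment depends on how Kafka distributed the partitions):
Topic: TestTopic    Partition: 0    Leader: 2    Replicas: 2,0,1    Isr: 2,0,1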
5. Conduct producer and consumer testing
Run a producer against Broker0 and a consumer against Broker1 and Broker2:
kafka-console-producer.sh --broker-list 192.168.16.129:9092 --topic TestTopic
kafka-console-consumer.sh --bootstrap-server 192.168.16.129:9093 --topic TestTopic --from-beginning
kafka-console-consumer.sh --bootstrap-server 192.168.16.129:9094 --topic TestTopic --from-beginning
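As a quick end-to-end check (a sketch, assuming the same host IP and topic as above), a single test message can be piped into the producer and read back with a bounded consumer:
# send one test message through broker0
echo "hello from broker0" | kafka-console-producer.sh --broker-list 192.168.16.129:9092 --topic TestTopic
# read it back through broker1, exiting after one message
kafka-console-consumer.sh --bootstrap-server 192.168.16.129:9093 --topic TestTopic --from-beginning --max-messages 1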
This is the end of "how to build a Kafka cluster with Docker containers". Thank you for reading. If you want to learn more, follow the site; the editor will keep publishing practical, high-quality articles for you!