
Steps to build a kafka-2.11 cluster


This article introduces the steps for building a kafka-2.11 cluster. Many people have doubts about this process in daily operation, so the following lays out a simple, easy-to-follow procedure. I hope it helps you resolve those doubts; please follow along and study!

Producer: message producer, the client that sends messages to a kafka broker

Consumer: message consumer, the client that fetches messages from a kafka broker

Topic: a category of messages published to the kafka cluster

Broker: a kafka server is a broker; a cluster consists of multiple brokers, and one broker can hold multiple topics
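These four roles map directly onto the console tools used later in this article. As a quick orientation, here is a minimal, hypothetical sketch (the topic name demo and the address node1:9092 are assumptions; the real commands appear step by step below):

# create a topic on the cluster
./kafka-topics.sh --create --zookeeper node1:2181 --topic demo --partitions 1 --replication-factor 1
# act as a producer: publish one message to the topic
echo "hello kafka" | ./kafka-console-producer.sh --broker-list node1:9092 --topic demo
# act as a consumer: fetch the messages back from the broker
./kafka-console-consumer.sh --topic demo --bootstrap-server node1:9092 --from-beginning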

1. Download and install zookeeper (zookeeper and the JDK must be installed before kafka)

[root@node1 ~] # wget http://mirror.bit.edu.cn/apache/zookeeper/stable/zookeeper-3.4.13.tar.gz

[root@node1] # tar xvf zookeeper-3.4.13.tar.gz -C /opt/

[root@node1 ~] # cd /opt/zookeeper-3.4.13/conf/

[root@node1 conf] # vim zoo.cfg

tickTime=2000
dataDir=/opt/zookeeper-3.4.13/data
clientPort=2181
initLimit=5
syncLimit=2
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

[root@node1 conf] # mkdir /opt/zookeeper-3.4.13/data

[root@node1 conf] # cd /opt/zookeeper-3.4.13/data  -- myid must be under the data directory, otherwise an error will be reported

[root@node1 data] # cat myid

1

[root@node1 zookeeper-3.4.13] # cd ..

[root@node1 opt] # scp -r zookeeper-3.4.13 node2:/opt/

[root@node1 opt] # scp -r zookeeper-3.4.13 node3:/opt/

2. Modify the myid file on node2

[root@node2 opt] # cat /opt/zookeeper-3.4.13/data/myid

2

[root@node2 opt] #

3. Modify the myid file on node3

[root@node3 ~] # cat /opt/zookeeper-3.4.13/data/myid

3
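If the myid files do not exist yet, they can be created with a simple echo on each node; a minimal sketch using the paths and ids from above:

# on node1
echo 1 > /opt/zookeeper-3.4.13/data/myid
# on node2
echo 2 > /opt/zookeeper-3.4.13/data/myid
# on node3
echo 3 > /opt/zookeeper-3.4.13/data/myid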

[root@node3 ~] # zkServer.sh start  -- each node needs to start the zookeeper service

ZooKeeper JMX enabled by default

Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg

Starting zookeeper... STARTED

[root@node3 opt] # zkCli.sh  -- log in using the client
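Once zookeeper has been started on all three nodes, each node's role can be confirmed with the standard status command; a minimal sketch:

# run on every node; two nodes should report "Mode: follower" and one "Mode: leader"
zkServer.sh status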

4. Download and install kafka (same on all three nodes)

[root@node1 ~] # wget http://mirror.bit.edu.cn/apache/kafka/2.2.0/kafka_2.11-2.2.0.tgz

[root@node1] # tar xvf kafka_2.11-2.2.0.tgz -C /opt/

[root@node1] # cd /opt/kafka_2.11-2.2.0/

[root@node1 kafka_2.11-2.2.0] # cd config/

[root@node1 config] # vim server.properties

broker.id=0  -- each id is different

zookeeper.connect=172.16.8.23:2181,172.16.8.24:2181,172.16.8.178:2181  -- zookeeper cluster IP addresses
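For reference, the per-broker part of server.properties might look like the sketch below, assuming node1/node2/node3 are given broker ids 0/1/2 (only broker.id differs between nodes; zookeeper.connect stays the same everywhere):

# /opt/kafka_2.11-2.2.0/config/server.properties on node1
broker.id=0
zookeeper.connect=172.16.8.23:2181,172.16.8.24:2181,172.16.8.178:2181
# node2 would use broker.id=1 and node3 broker.id=2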

[root@node1 config] # cd /opt/

[root@node1 opt] # scp -r kafka_2.11-2.2.0/ node2:/opt/

[root@node1 opt] # scp -r kafka_2.11-2.2.0/ node3:/opt/

[root@node1 opt] # cd kafka_2.11-2.2.0/bin/

[root@node1 bin] # ./kafka-server-start.sh ../config/server.properties &  -- the kafka service needs to be started in the background on all three nodes
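As an alternative to backgrounding the process with &, kafka-server-start.sh also accepts a -daemon option that detaches the broker and writes its output under the logs/ directory; a minimal sketch:

# start the broker as a daemon on each node
./kafka-server-start.sh -daemon ../config/server.properties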

5. Check whether the kafka service started properly

[root@node1 bin] # jps

30851 Kafka

3605 HMaster

12728 QuorumPeerMain

12712 DFSZKFailoverController

31656 Jps

3929 DataNode

15707 JournalNode

32188 NameNode

14335 ResourceManager

[root@node1 bin] # netstat -antulp | grep 30851

tcp6   0   0 :::9092            :::*               LISTEN      30851/java
tcp6   0   0 :::37161           :::*               LISTEN      30851/java
tcp6   0   0 172.16.8.23:40754  172.16.8.178:9092  ESTABLISHED 30851/java
tcp6   0   0 172.16.8.23:9092   172.16.8.23:39704  ESTABLISHED 30851/java
tcp6   0   0 172.16.8.23:45480  172.16.8.24:9092   ESTABLISHED 30851/java
tcp6   0   0 172.16.8.23:45294  172.16.8.178:2181  ESTABLISHED 30851/java
tcp6   0   0 172.16.8.23:39704  172.16.8.23:9092   ESTABLISHED 30851/java

[root@node1 bin] #
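If netstat is not installed, ss can be used for the same check; a minimal sketch:

# confirm the broker is listening on its default port 9092
ss -lntp | grep 9092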

6. Use the command-line interface

[root@node1 bin] # ./kafka-topics.sh --create --zookeeper node1:2181 --topic tongcheng --replication-factor 3 --partitions 3  -- create a topic

Created topic tongcheng.

[root@node1 bin] # ./kafka-topics.sh --list --zookeeper node1:2181  -- view topics

tongcheng

[root@node1 bin] # ./kafka-topics.sh --delete --zookeeper node1:2181 --topic tongcheng  -- delete a topic

Topic tongcheng is marked for deletion.

Note: This will have no impact if delete.topic.enable is not set to true.

[root@node1 bin] # ./kafka-topics.sh --list --zookeeper node1:2181

[root@node1 bin] #
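As the note above says, the delete only takes effect when delete.topic.enable is true (which is the default in recent Kafka releases); if it has been disabled, a sketch of re-enabling it (restart the brokers afterwards):

# add to /opt/kafka_2.11-2.2.0/config/server.properties on every broker
delete.topic.enable=true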

7. Send / receive messages

[root@node1 bin] # ./kafka-console-producer.sh --broker-list node2:9092 --topic ttt

> tongcheng is goods

> tong is goods

> cheng is goods!

>

-receiver-

[root@node2 bin] # ./kafka-console-consumer.sh --topic ttt --bootstrap-server node1:9092,node2:9092,node3:9092 --from-beginning

tongcheng is goods
tong is goods
cheng is goods!

[root@node2 bin] # ./kafka-topics.sh --describe --zookeeper node1:2181 --topic ttt  -- view the number of partitions and replicas

Topic: ttt    PartitionCount: 1    ReplicationFactor: 1    Configs:
    Topic: ttt    Partition: 0    Leader: 0    Replicas: 0    Isr: 0

[root@node2 bin] #
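The describe output shows that ttt was auto-created with the broker defaults of one partition and one replica. To get the same redundancy as the tongcheng topic above, the topic could instead be created explicitly before producing to it for the first time; a hypothetical sketch:

# create the topic with 3 partitions and 3 replicas instead of relying on auto-creation
./kafka-topics.sh --create --zookeeper node1:2181 --topic ttt --partitions 3 --replication-factor 3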

8. View zookeeper data

[root@node1 bin] # ./zkCli.sh

Connecting to localhost:2181

[zk: localhost:2181 (CONNECTED) 0] ls /

[cluster, controller, brokers, zookeeper, hadoop-ha, admin, isr_change_notification, log_dir_event_notification, controller_epoch, consumers, latest_producer_id_block, config, hbase]

[zk: localhost:2181 (CONNECTED) 1]
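Within the same zkCli session, the broker registrations can be inspected under the /brokers znode; a minimal sketch:

# list the ids of the brokers currently registered in zookeeper
ls /brokers/ids
# show the connection details that broker 0 registered
get /brokers/ids/0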

9. Receive messages as a group (when the producer sends a message, only one consumer in the group receives it)

[root@node1 bin] # ./kafka-console-producer.sh --broker-list node1:9092 --topic tong  -- send a message on the node1 node

>

-start two consumers-

[root@node2 bin] # vim ../config/consumer.properties  -- needs to be modified on both consumers

group.id=wuhan

[root@node2 bin] # ./kafka-console-consumer.sh --topic tong --bootstrap-server node1:9092 --consumer.config ../config/consumer.properties

[2019-04-05 20:…] WARN [Consumer clientId=consumer-1, groupId=wuhan] Error while fetching metadata with correlation id 2: {tong=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
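As an alternative to editing consumer.properties, the console consumer also accepts a --group option, and the group can be inspected with kafka-consumer-groups.sh; a minimal sketch assuming the group name wuhan from above:

# join the wuhan group directly from the command line
./kafka-console-consumer.sh --topic tong --bootstrap-server node1:9092 --group wuhan
# describe the group: members, assigned partitions and lag
./kafka-consumer-groups.sh --bootstrap-server node1:9092 --describe --group wuhan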

10. Send a message from the producer and receive it in the consumer group

[root@node1 bin] # ./kafka-console-producer.sh --broker-list node1:9092 --topic tong

> [2019-04-05 20:51:31,094] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

[2019-04-05 20:…] INFO Creating topic tong with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(2)) (kafka.zk.AdminZkClient)

[2019-04-05 20:52:09,124] INFO [KafkaApi-0] Auto creation of topic tong with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)

> hello ttt

>

-receiver-

[root@node2 bin] # ./kafka-console-consumer.sh --topic tong --bootstrap-server node1:9092 --consumer.config ../config/consumer.properties  -- received a message on the node2 node

[2019-04-05 20:…] WARN [Consumer clientId=consumer-1, groupId=wuhan] Error while fetching metadata with correlation id 2: {tong=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

hello ttt
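To see the group behaviour described above, a second consumer can be started in the same group on another node; because tong was auto-created with a single partition, only one of the two group members will actually receive the messages. A sketch, assuming the same consumer.properties (group.id=wuhan) also exists on node3:

# on node3, join the same consumer group as node2
./kafka-console-consumer.sh --topic tong --bootstrap-server node1:9092 --consumer.config ../config/consumer.properties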

At this point, the study of the "steps to build a kafka-2.11 cluster" is over. I hope it has resolved your doubts. Combining theory with practice is the best way to learn, so go and try it! If you want to keep learning related knowledge, please continue to follow the site for more practical articles.
