

How to build Kafka 0.10.1.0 Cluster and how to operate Topic simply

2025-03-26 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

How do you build a Kafka 0.10.1.0 cluster, and how do you perform simple topic operations? This article walks through the setup and the corresponding commands in detail, in the hope of giving readers a simple, workable approach.

[Kafka cluster machines]:

Machine name: sht-sgmhadoopdn-01/02/03    Username: root

[Installation directory]: /root/learnproject/app

1. Synchronize the scala folder to the other machines in the cluster (Scala version 2.11; it can also be downloaded and unpacked on each machine separately)

[root@sht-sgmhadoopnn-01 app]# scp -r scala root@sht-sgmhadoopdn-01:/root/learnproject/app/

[root@sht-sgmhadoopnn-01 app]# scp -r scala root@sht-sgmhadoopdn-02:/root/learnproject/app/

[root@sht-sgmhadoopnn-01 app]# scp -r scala root@sht-sgmhadoopdn-03:/root/learnproject/app/
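The three scp commands above can be collapsed into one loop. A minimal sketch, shown as a dry run that only prints each command (remove `echo` to actually copy; the hostnames and path are the ones used in this article):

```shell
# dry run: print the copy command for each datanode
# remove `echo` to perform the actual copies
for n in 01 02 03; do
  echo scp -r scala "root@sht-sgmhadoopdn-$n:/root/learnproject/app/"
done
```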

# Environment variables

[root@sht-sgmhadoopdn-01 app]# vi /etc/profile

export SCALA_HOME=/root/learnproject/app/scala
export PATH=$SCALA_HOME/bin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopdn-02 app]# vi /etc/profile

export SCALA_HOME=/root/learnproject/app/scala
export PATH=$SCALA_HOME/bin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopdn-03 app]# vi /etc/profile

export SCALA_HOME=/root/learnproject/app/scala
export PATH=$SCALA_HOME/bin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopdn-01 app]# source /etc/profile

[root@sht-sgmhadoopdn-02 app]# source /etc/profile

[root@sht-sgmhadoopdn-03 app]# source /etc/profile

2. Download Kafka 0.10.1.0 built for Scala 2.11

[root@sht-sgmhadoopdn-01 app]# pwd

/root/learnproject/app

[root@sht-sgmhadoopdn-01 app]# wget http://www-eu.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz

[root@sht-sgmhadoopdn-01 app]# tar xzvf kafka_2.11-0.10.1.0.tgz

[root@sht-sgmhadoopdn-01 app]# mv kafka_2.11-0.10.1.0 kafka

3. Create a logs directory and modify server.properties (this assumes a ZooKeeper cluster is already deployed; see "03 [online Log Analysis] hadoop-2.7.3 compilation and Building Cluster Environment (HDFS HA,Yarn HA)")

[root@sht-sgmhadoopdn-01 app]# cd kafka

[root@sht-sgmhadoopdn-01 kafka]# mkdir logs

[root@sht-sgmhadoopdn-01 kafka]# cd config/

[root@sht-sgmhadoopdn-01 config]# vi server.properties

broker.id=1
port=9092
host.name=172.16.101.58
log.dirs=/root/learnproject/app/kafka/logs
zookeeper.connect=172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka
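The zookeeper.connect value is easy to mistype: the host:port pairs must be comma-separated, and the optional chroot path (/kafka here) goes once at the end. A small sketch that builds the string from the article's three IPs:

```shell
# build the comma-separated zookeeper.connect value used in server.properties
ZK_HOSTS="172.16.101.58 172.16.101.59 172.16.101.60"
ZK_CONNECT=$(printf '%s:2181,' $ZK_HOSTS)   # join each host with port 2181
ZK_CONNECT="${ZK_CONNECT%,}/kafka"          # trim trailing comma, append the chroot
echo "zookeeper.connect=$ZK_CONNECT"
```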

4. Synchronize to the 02 and 03 servers, then change broker.id and host.name

[root@sht-sgmhadoopdn-01 app]# scp -r kafka sht-sgmhadoopdn-02:/root/learnproject/app/

[root@sht-sgmhadoopdn-01 app]# scp -r kafka sht-sgmhadoopdn-03:/root/learnproject/app/

[root@sht-sgmhadoopdn-02 config]# vi server.properties

broker.id=2
port=9092
host.name=172.16.101.59

[root@sht-sgmhadoopdn-03 config]# vi server.properties

broker.id=3
port=9092
host.name=172.16.101.60
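Instead of editing server.properties by hand on each machine, the two per-broker fields can be patched with sed. A sketch using broker 2's values from this article; the short file created below is a stand-in for the real config, which has many more keys:

```shell
# stand-in for the copied config (the real file is much longer)
cat > /tmp/server.properties.example <<'EOF'
broker.id=1
port=9092
host.name=172.16.101.58
EOF

# patch the two per-broker fields for sht-sgmhadoopdn-02
sed -i -e 's/^broker.id=.*/broker.id=2/' \
       -e 's/^host.name=.*/host.name=172.16.101.59/' /tmp/server.properties.example

cat /tmp/server.properties.example
```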

5. Environment variable

[root@sht-sgmhadoopdn-01 kafka]# vi /etc/profile

export KAFKA_HOME=/root/learnproject/app/kafka
export PATH=$KAFKA_HOME/bin:$SCALA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopdn-01 kafka]# scp /etc/profile sht-sgmhadoopdn-02:/etc/profile

[root@sht-sgmhadoopdn-01 kafka]# scp /etc/profile sht-sgmhadoopdn-03:/etc/profile

[root@sht-sgmhadoopdn-01 kafka]# source /etc/profile

[root@sht-sgmhadoopdn-02 kafka]# source /etc/profile

[root@sht-sgmhadoopdn-03 kafka]# source /etc/profile

6. Start / stop

[root@sht-sgmhadoopdn-01 kafka]# nohup kafka-server-start.sh config/server.properties &

[root@sht-sgmhadoopdn-02 kafka]# nohup kafka-server-start.sh config/server.properties &

[root@sht-sgmhadoopdn-03 kafka]# nohup kafka-server-start.sh config/server.properties &

# stop
bin/kafka-server-stop.sh
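kafka-server-start.sh also accepts a -daemon flag, which backgrounds the broker without nohup. A dry-run sketch that only prints a per-host start command (remove `echo` and run over a working ssh setup to actually start the brokers; hostnames and paths are the ones used in this article):

```shell
# dry run: print a daemon-mode start command for each broker host
KAFKA=/root/learnproject/app/kafka
for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  echo "ssh root@$h '$KAFKA/bin/kafka-server-start.sh -daemon $KAFKA/config/server.properties'"
done
```

After starting, `jps` on each host should show a Kafka process.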


7. Topic operations

a. Create a topic. If the topic is created successfully, the cluster installation is complete. You can also run the jps command on each node to check that the Kafka process is alive.

[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --create --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --replication-factor 3 --partitions 1 --topic test

b. View the created topic through the list command:

[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --list --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka

c. Describe the created topic:

[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --describe --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --topic test

Topic:test PartitionCount:1 ReplicationFactor:3 Configs:

Topic: test Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2

[root@sht-sgmhadoopdn-01 kafka] #

The first line shows a summary of the topic: its name, partition count, replication factor, and so on.

Each subsequent line describes one partition: its number, which broker is the partition's leader, which brokers hold its replicas, and which replicas are currently in sync.

Partition: the partition number

Leader: the node responsible for reads and writes of the given partition

Replicas: the list of nodes that replicate this partition's log

Isr: the "in-sync" replicas, the subset of Replicas that is currently alive and eligible to become leader

We can use the bin/kafka-console-producer.sh and bin/kafka-console-consumer.sh scripts that ship with Kafka to verify that messages can be published and consumed.

d. Delete a topic (the delete only takes effect if delete.topic.enable=true is set on the brokers):

[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --delete --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --topic test

e. Modify topic

In principle you can modify any configuration with --alter. Here are some common modifications:

(1) change the number of partitions

[root@sht-sgmhadoopdn-02 kafka]# bin/kafka-topics.sh --alter --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --topic test --partitions 3

[root@sht-sgmhadoopdn-02 kafka]# bin/kafka-topics.sh --describe --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --topic test

Topic:test PartitionCount:3 ReplicationFactor:3 Configs:

Topic: test Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2

Topic: test Partition: 1 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3

Topic: test Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1

[root@sht-sgmhadoopdn-02 kafka] #

(2) add, modify or delete a configuration parameter

bin/kafka-topics.sh --alter --zookeeper 192.168.172.98:2181/kafka --topic my_topic_name --config key=value

bin/kafka-topics.sh --alter --zookeeper 192.168.172.98:2181/kafka --topic my_topic_name --delete-config key
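In 0.10.x, per-topic configs can also be changed with kafka-configs.sh, which is the longer-term replacement for topic config changes via kafka-topics.sh. A dry-run sketch that only prints the commands (remove `echo` to execute against a live cluster; the ZK address matches the example above, and the retention value is illustrative):

```shell
# dry run: print equivalent kafka-configs.sh invocations; remove `echo` to execute
ZK=192.168.172.98:2181/kafka
echo bin/kafka-configs.sh --zookeeper "$ZK" --entity-type topics \
     --entity-name my_topic_name --alter --add-config retention.ms=86400000
echo bin/kafka-configs.sh --zookeeper "$ZK" --entity-type topics \
     --entity-name my_topic_name --alter --delete-config retention.ms
```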

8. Simulation experiment

On one terminal, start a producer and publish messages to the topic test created above:

[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-console-producer.sh --broker-list 172.16.101.58:9092,172.16.101.59:9092,172.16.101.60:9092 --topic test

On another terminal, start a consumer and subscribe to the same topic:

[root@sht-sgmhadoopdn-02 kafka]# bin/kafka-console-consumer.sh --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --from-beginning --topic test

Each line you type in the producer terminal should appear as a consumed message in the consumer terminal.

This concludes the walkthrough of building a Kafka 0.10.1.0 cluster and performing simple topic operations. I hope the above content has been helpful; if you still have questions, follow the industry information channel for more related knowledge.
