2025-01-28 Update From: SLTechnology News&Howtos
[Kafka cluster machines]:

Machine name               User name
sht-sgmhadoopdn-01/02/03   root

[Installation directory]: /root/learnproject/app
1. Synchronize the scala directory to the other machines in the cluster (Scala 2.11; it can also be downloaded and extracted separately):

[root@sht-sgmhadoopnn-01 app]# scp -r scala root@sht-sgmhadoopdn-01:/root/learnproject/app/
[root@sht-sgmhadoopnn-01 app]# scp -r scala root@sht-sgmhadoopdn-02:/root/learnproject/app/
[root@sht-sgmhadoopnn-01 app]# scp -r scala root@sht-sgmhadoopdn-03:/root/learnproject/app/
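The three near-identical scp commands above can be generated with a small loop. This is a hypothetical helper, not part of the original procedure; it only prints the commands so they can be reviewed (or piped to sh) rather than executing anything:

```shell
# Print the scp command for each datanode host (hostnames taken from
# the cluster table above). Nothing is executed; output can be piped
# to `sh` once reviewed.
sync_cmds() {
  for host in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
    echo "scp -r scala root@${host}:/root/learnproject/app/"
  done
}
sync_cmds
```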
# Environment variables
[root@sht-sgmhadoopdn-01 app]# vi /etc/profile
export SCALA_HOME=/root/learnproject/app/scala
export PATH=$SCALA_HOME/bin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopdn-02 app]# vi /etc/profile
export SCALA_HOME=/root/learnproject/app/scala
export PATH=$SCALA_HOME/bin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopdn-03 app]# vi /etc/profile
export SCALA_HOME=/root/learnproject/app/scala
export PATH=$SCALA_HOME/bin:$HADOOP_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopdn-01 app]# source /etc/profile
[root@sht-sgmhadoopdn-02 app]# source /etc/profile
[root@sht-sgmhadoopdn-03 app]# source /etc/profile

2. Download the Scala 2.11 build of Kafka 0.10.1.0

[root@sht-sgmhadoopdn-01 app]# pwd
/root/learnproject/app
[root@sht-sgmhadoopdn-01 app]# wget http://www-eu.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
[root@sht-sgmhadoopdn-01 app]# tar xzvf kafka_2.11-0.10.1.0.tgz
[root@sht-sgmhadoopdn-01 app]# mv kafka_2.11-0.10.1.0 kafka

3. Create the logs directory and edit server.properties (this assumes the zookeeper cluster is already deployed)

[root@sht-sgmhadoopdn-01 app]# cd kafka
[root@sht-sgmhadoopdn-01 kafka]# mkdir logs
[root@sht-sgmhadoopdn-01 kafka]# cd config/
[root@sht-sgmhadoopdn-01 config]# vi server.properties
broker.id=1
port=9092
host.name=172.16.101.58
log.dirs=/root/learnproject/app/kafka/logs
zookeeper.connect=172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka

4. Synchronize to the 02/03 servers, then change broker.id and host.name

[root@sht-sgmhadoopdn-01 app]# scp -r kafka sht-sgmhadoopdn-02:/root/learnproject/app/
[root@sht-sgmhadoopdn-01 app]# scp -r kafka sht-sgmhadoopdn-03:/root/learnproject/app/
[root@sht-sgmhadoopdn-02 config]# vi server.properties
broker.id=2
port=9092
host.name=172.16.101.59
[root@sht-sgmhadoopdn-03 config]# vi server.properties
broker.id=3
port=9092
host.name=172.16.101.60
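The zookeeper.connect value in server.properties is a comma-separated host:port list followed by the /kafka chroot, and the same string reappears in every kafka-topics.sh command below. A small sketch (not from the original article) that builds it from the node IPs:

```shell
# Build the comma-separated zookeeper.connect string, with the /kafka
# chroot, from the three node IPs used throughout this article.
zk_connect() {
  zk=""
  for ip in 172.16.101.58 172.16.101.59 172.16.101.60; do
    zk="${zk:+$zk,}${ip}:2181"   # append ",ip:2181", no leading comma
  done
  echo "${zk}/kafka"
}
zk_connect
```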
5. Environment variables

[root@sht-sgmhadoopdn-01 kafka]# vi /etc/profile
export KAFKA_HOME=/root/learnproject/app/kafka
export PATH=$KAFKA_HOME/bin:$SCALA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
[root@sht-sgmhadoopdn-01 kafka]# scp /etc/profile sht-sgmhadoopdn-02:/etc/profile
[root@sht-sgmhadoopdn-01 kafka]# scp /etc/profile sht-sgmhadoopdn-03:/etc/profile
[root@sht-sgmhadoopdn-01 kafka]# source /etc/profile
[root@sht-sgmhadoopdn-02 kafka]# source /etc/profile
[root@sht-sgmhadoopdn-03 kafka]# source /etc/profile

6. Start / stop

[root@sht-sgmhadoopdn-01 kafka]# nohup kafka-server-start.sh config/server.properties &
[root@sht-sgmhadoopdn-02 kafka]# nohup kafka-server-start.sh config/server.properties &
[root@sht-sgmhadoopdn-03 kafka]# nohup kafka-server-start.sh config/server.properties &
# stop
bin/kafka-server-stop.sh

7. Topic operations
a. Create a topic. If the topic can be created successfully, the cluster installation is complete. You can also use the jps command on each node to check that the Kafka process is running.
[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --create --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --replication-factor 3 --partitions 1 --topic test
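The jps check mentioned above can be scripted across the three nodes. A hypothetical sketch, using the hostnames assumed throughout this article; it only prints the remote commands rather than running them:

```shell
# Print a jps-based check per broker host; each remote jps output
# should contain a "Kafka" process line if the broker is up.
check_cmds() {
  for host in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
    echo "ssh root@${host} 'jps | grep Kafka'"
  done
}
check_cmds
```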
b. View the created topics with the list command:
[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --list --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka
c. View the details of a created topic:
[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --describe --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --topic test
Topic:test  PartitionCount:1  ReplicationFactor:3  Configs:
    Topic: test  Partition: 0  Leader: 3  Replicas: 3  Isr: 3
[root@sht-sgmhadoopdn-01 kafka]#

The first line shows an overview of the topic: its name, partition count, replication factor, and so on. Each subsequent line describes one partition: which partition it is, which broker is that partition's leader, which brokers hold replicas, and which replicas are in sync.
Partition: the partition number
Leader: the node responsible for reads and writes of the given partition
Replicas: the list of nodes holding a copy of this partition's log
Isr: the "in-sync" replicas, i.e. the currently alive replicas (a subset of Replicas) that are eligible to become leader
We can use the bin/kafka-console-producer.sh and bin/kafka-console-consumer.sh scripts shipped with Kafka to verify that messages can be produced and consumed.
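Since a partition is healthy when its Isr list matches its Replicas list, the describe output can be checked mechanically. A sketch (not from the original article) that parses a single describe line; the sample line mirrors the output format shown above:

```shell
# Compare the Replicas and Isr columns of one `--describe` line:
# if they match, every replica is in sync.
isr_status() {
  line=$1
  replicas=$(printf '%s\n' "$line" | sed 's/.*Replicas: \([0-9,]*\).*/\1/')
  isr=$(printf '%s\n' "$line" | sed 's/.*Isr: \([0-9,]*\).*/\1/')
  if [ "$replicas" = "$isr" ]; then
    echo "fully in sync"
  else
    echo "under-replicated"
  fi
}
isr_status 'Topic: test Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2'
```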
d. Delete a topic (on Kafka 0.10 the brokers must be running with delete.topic.enable=true; otherwise the topic is only marked for deletion):
[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --delete --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --topic test
e. Modify a topic
In principle any configuration can be modified with --alter. Here are some common modifications:
(1) Change the number of partitions (the partition count can only be increased, never decreased):
[root@sht-sgmhadoopdn-02 kafka]# bin/kafka-topics.sh --alter --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --topic test --partitions 3
[root@sht-sgmhadoopdn-02 kafka]# bin/kafka-topics.sh --describe --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --topic test
Topic:test  PartitionCount:3  ReplicationFactor:3  Configs:
    Topic: test  Partition: 0  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
    Topic: test  Partition: 1  Leader: 1  Replicas: 1,2,3  Isr: 1,2,3
    Topic: test  Partition: 2  Leader: 2  Replicas: 2,3,1  Isr: 2,3,1
[root@sht-sgmhadoopdn-02 kafka]#
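With 3 partitions, each keyed message lands in exactly one partition. Kafka's default partitioner hashes the key with murmur2 modulo the partition count; the naive numeric modulo below is only an illustration of that idea, not the real hash:

```shell
# Illustration only: NOT Kafka's murmur2 partitioner. Maps a numeric
# key to one of `partitions` partitions by simple modulo.
partition_of() {
  key=$1
  partitions=$2
  echo $((key % partitions))
}
for k in 101 205 303; do
  echo "key $k -> partition $(partition_of "$k" 3)"
done
```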
(2) Add, modify, or delete a configuration parameter:

bin/kafka-topics.sh --alter --zookeeper 192.168.172.98:2181/kafka --topic my_topic_name --config key=value
bin/kafka-topics.sh --alter --zookeeper 192.168.172.98:2181/kafka --topic my_topic_name --delete-config key

8. Simulation experiment
In one terminal, start a Producer and produce messages to the topic named test that we created above, by executing the following script:
[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-console-producer.sh --broker-list 172.16.101.58:9092,172.16.101.59:9092,172.16.101.60:9092 --topic test
In another terminal, start a Consumer and subscribe to the messages produced to that topic, by executing the following script:
[root@sht-sgmhadoopdn-02 kafka]# bin/kafka-console-consumer.sh --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --from-beginning --topic test
Type a line of text in the Producer terminal, and you will see that message consumed in the Consumer terminal.
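The produce/consume loop can be pictured as simple shell plumbing: each line fed to the producer eventually appears at the consumer. The stand-in below involves no broker at all; it is only an offline sketch of that line-by-line flow:

```shell
# Offline stand-in: read lines as the console consumer would and echo
# each one, mimicking the producer-terminal-to-consumer-terminal flow.
consume() {
  while IFS= read -r msg; do
    echo "consumed: $msg"
  done
}
printf 'hello\nworld\n' | consume
```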