Kafka Usage and Error Resolution
1. Download Kafka, extract it, and configure the environment variables:
vim /etc/profile
# add the following lines:
export KAFKA_HOME=/root/kafka_2.11-1.0.0
export PATH=$PATH:$KAFKA_HOME/bin
source /etc/profile
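A quick way to confirm the variables are in effect (paths assume the install location above):
echo $KAFKA_HOME          # should print /root/kafka_2.11-1.0.0
which kafka-topics.sh     # should resolve to a script under $KAFKA_HOME/bin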
2. Kafka requires ZooKeeper.
(1) Using the ZooKeeper bundled with Kafka
Start ZooKeeper first. In a pseudo-distributed setup, Kafka already ships with an embedded ZooKeeper, whose configuration lives in Kafka's config directory.
You can edit config/zookeeper.properties to change the ZooKeeper port, as checked below.
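A minimal check of the bundled ZooKeeper's client port (the stock distribution defaults to 2181; this walkthrough uses 2281, so that value is an assumption about this particular setup):
grep clientPort config/zookeeper.properties
# clientPort=2281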
Start ZooKeeper in the background:
[root@mail bin]# nohup zookeeper-server-start.sh ../config/zookeeper.properties &
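Before starting the broker, a few settings in config/server.properties are worth confirming; the values in the comments below are the ones assumed later in this article (run from the Kafka installation directory):
grep -E 'broker.id|listeners|log.dirs|zookeeper.connect' config/server.properties
# broker.id=0
# #listeners=PLAINTEXT://:9092       (commented out by default; the broker listens on 9092)
# log.dirs=/root/kafka/kafka_2.11-1.0.0/logs-kafka
# zookeeper.connect=localhost:2281   (must match the ZooKeeper clientPort)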
Start the broker:
[root@mail bin]# nohup kafka-server-start.sh ../config/server.properties &
3. Testing: simulating message production and consumption
(1) Create a topic
[root@mail bin]# kafka-topics.sh --create --zookeeper localhost:2281 --topic KafkaTestTopic --partitions 1 --replication-factor 1
Created topic "KafkaTestTopic".
(2) Start a console producer
[root@mail bin]# kafka-console-producer.sh --topic KafkaTestTopic --broker-list localhost:9092
The broker port comes from server.properties: the commented line #listeners=PLAINTEXT://:9092 shows that Kafka listens on 9092.
(3) Start a console consumer
[root@mail bin]# kafka-console-consumer.sh --topic KafkaTestTopic --zookeeper localhost:2281
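Two optional checks at this point, sketched with the same topic and ports as above: describe the topic to confirm its partition count and leader, and replay all stored messages rather than only new ones:
kafka-topics.sh --describe --zookeeper localhost:2281 --topic KafkaTestTopic
kafka-console-consumer.sh --topic KafkaTestTopic --zookeeper localhost:2281 --from-beginning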
(2) Using a standalone ZooKeeper (not bundled with Kafka)
Start the standalone ZooKeeper:
[root@mail zookeeper-3.4.10]# bin/zkServer.sh start conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: conf/zoo.cfg
Starting zookeeper ... STARTED
(1) Create a topic:
[root@mail kafka_2.11-1.0.0]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic secondTopic --partitions 1 --replication-factor 1
Created topic "secondTopic".
(2) Start Kafka:
[root@mail kafka_2.11-1.0.0]# nohup bin/kafka-server-start.sh config/server.properties &
(3) Kafka producer:
[root@mail kafka_2.11-1.0.0]# kafka-console-producer.sh --topic KafkaTestTopic --broker-list localhost:9092
(4) Kafka consumer:
[root@mail kafka_2.11-1.0.0]# bin/kafka-console-consumer.sh --topic KafkaTestTopic --zookeeper localhost:2181
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
(5) View the data stored by Kafka:
[root@mail kafka_2.11-1.0.0]# ls
bin  config  libs  LICENSE  logs  logs-kafka  nohup.out  NOTICE  site-docs
[root@mail kafka_2.11-1.0.0]# cd logs-kafka/
# logs-kafka is the Kafka data directory; it is set in config/server.properties via log.dirs=/root/kafka/kafka_2.11-1.0.0/logs-kafka
[root@mail logs-kafka]# ls
# One directory per topic partition: the internal offsets topic __consumer_offsets-0 through __consumer_offsets-49,
# plus KafkaTestTopic-0, kafka_test-0, My_LOVE_TOPIC-0, mytopic-0, hello-0 through hello-4, stock-quotation-0,
# stock-quotation-avro-0, stock-quotation-partition-0, TEST-TOPIC-0, and the files cleaner-offset-checkpoint,
# log-start-offset-checkpoint, recovery-point-offset-checkpoint, replication-offset-checkpoint, meta.properties
[root@mail logs-kafka]# cd KafkaTestTopic-0/
# data files for partition 0 of topic KafkaTestTopic
[root@mail KafkaTestTopic-0]# ls
00000000000000000000.index  00000000000000000000.log  00000000000000000000.timeindex  00000000000000000063.snapshot  leader-epoch-checkpoint
[root@mail KafkaTestTopic-0]# tail -f 00000000000000000000.log
(6) Increase the number of partitions and observe the change in the data directory:
[root@mail kafka_2.11-1.0.0]# bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic KafkaTestTopic --partitions 3
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
[root@mail kafka_2.11-1.0.0]# cd logs-kafka/
[root@mail logs-kafka]# ls
# KafkaTestTopic now has three partition directories: KafkaTestTopic-0, KafkaTestTopic-1, KafkaTestTopic-2
# (the other topic and __consumer_offsets directories are unchanged)
[root@mail logs-kafka]# cd KafkaTestTopic-1/
[root@mail KafkaTestTopic-1]# ls
00000000000000000000.index  00000000000000000000.log  00000000000000000000.timeindex  leader-epoch-checkpoint
[root@mail KafkaTestTopic-1]# tail -f 00000000000000000000.log
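Because the .log segment files are in Kafka's binary format, tail -f shows mostly unreadable bytes. A more readable way to inspect what was written is the DumpLogSegments tool shipped with Kafka; this is a sketch assuming the log.dirs path used above:
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /root/kafka/kafka_2.11-1.0.0/logs-kafka/KafkaTestTopic-0/00000000000000000000.log \
  --print-data-log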
4. Possible errors:
(1)
[root@mail bin]# kafka-topics.sh --create --zookeeper localhost:2281 --topic KafkaTestTopic --partitions 1 --replication-factor 1
Error while executing topic command: Replication factor: 1 larger than available brokers: 0.
[2018-11-20 16:...] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0. (kafka.admin.TopicCommand$)
Solution: in server.properties set zookeeper.connect=localhost:2281 so that the port (2281) matches the clientPort configured in zookeeper.properties, then restart Kafka.
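A quick consistency check between the two files (the ports shown are the ones used in this article):
grep zookeeper.connect config/server.properties
# zookeeper.connect=localhost:2281
grep clientPort config/zookeeper.properties
# clientPort=2281
# If the values differ, align them and restart the broker.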
(2)
kafka.common.KafkaException: fetching topic metadata for topics [Set(KafkaTestTopic)] from broker [ArrayBuffer(BrokerEndPoint(0,123.125.50.7,9092))] failed
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:77)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:98)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:67)
(3)
[2018-11-20 17:...] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 28 : {KafkaTestTopic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-11-20 17:...] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 53 : {KafkaTestTopic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2018-11-20 17:...] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 54 : {KafkaTestTopic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
(the same warning repeats with further correlation ids)
To resolve errors (2) and (3), edit config/server.properties and set:
I. listeners=PLAINTEXT://localhost:9092
II. advertised.listeners=PLAINTEXT://localhost:9092
Then restart the broker; a quick sanity check is sketched below.
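After the restart, a minimal end-to-end check (topic, port, and ZooKeeper address are the ones assumed throughout this article):
echo "hello kafka" | kafka-console-producer.sh --broker-list localhost:9092 --topic KafkaTestTopic
kafka-console-consumer.sh --topic KafkaTestTopic --zookeeper localhost:2281 --from-beginning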
(4) WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Solution: a likely cause is that Kafka is not running; start the broker again.
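A sketch of how to check for a running broker and restart it if needed (jps ships with the JDK; the start command assumes the layout used above):
jps -l | grep -i kafka                                      # no output means no broker process is running
nohup bin/kafka-server-start.sh config/server.properties &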
To check ZooKeeper state from within Kafka:
bin/zookeeper-shell.sh localhost:2181
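Inside that shell, broker registration can be inspected directly; these are the standard znodes Kafka creates (broker id 0 is an assumption about this setup):
ls /brokers/ids        # registered broker ids, e.g. [0]; an empty list means no broker is connected
get /brokers/ids/0     # host and port the broker registered with (assuming broker id 0)
ls /brokers/topics     # topics known to the cluster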