2025-01-16 Update | Source: SLTechnology News & Howtos (shulou.com) > Development
Shulou (Shulou.com) 06/02 Report --
This article explains how to troubleshoot a Kafka console consumer that retrieves no data from a remote bootstrap-server: the symptom, the fix, and the root cause.
Problem
Running .../bin/kafka-console-consumer.sh --bootstrap-server 10.10.151.12:6667 --topic flink_test
retrieved no data: the command produced no output and reported no error.
Solution
Running the older ZooKeeper-based consumer instead, .../bin/kafka-console-consumer.sh --zookeeper 10.10.151.12:2181 --topic flink_test
finally produced an error message:
[2020-12-02 10:06:41,087] WARN [console-consumer-73229_localhost.localdomain-1606874800409-65d73e12-leader-finder-thread]: Failed to add leader for partitions flink_test-5,flink_test-16,flink_test-2,flink_test-13,flink_test-21,flink_test-10,flink_test-15,flink_test-4,flink_test-7,flink_test-18,flink_test-1,flink_test-23,flink_test-12,flink_test-20,flink_test-9,flink_test-6,flink_test-17,flink_test-22,flink_test-3,flink_test-14,flink_test-19,flink_test-8,flink_test-0,flink_test-11; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:112)
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:101)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:86)
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:152)
    at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:191)
    at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:92)
    at kafka.server.AbstractFetcherThread$$anonfun$7.apply(AbstractFetcherThread.scala:243)
    at kafka.server.AbstractFetcherThread$$anonfun$7.apply(AbstractFetcherThread.scala:240)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
    at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:240)
    at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:97)
    at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:85)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.immutable.Map$Map3.foreach(Map.scala:161)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:85)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:96)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
Modify the /etc/hosts file:
10.10.151.11 test01
10.10.151.12 test02
10.10.151.13 test03
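The consumer machine needs each broker hostname to resolve to the right IP. As a sanity check of entries like these, the hostname-to-IP lookup can be simulated against a hosts-format table. This is a minimal sketch, not part of Kafka: the hostnames test01 to test03 come from the /etc/hosts lines above, and the `resolve` helper is a hypothetical function introduced here for illustration.

```shell
# Sketch: simulate resolving the broker hostnames against a hosts-format
# table (the real consumer relies on /etc/hosts for this lookup).
hosts_table="10.10.151.11 test01
10.10.151.12 test02
10.10.151.13 test03"

# resolve HOSTNAME: print the mapped IP, or exit non-zero if unmapped
resolve() {
  echo "$hosts_table" | awk -v h="$1" '$2 == h { print $1; found = 1 } END { exit !found }'
}

resolve test02   # prints 10.10.151.12
```

On the real machine the same check can be done with `getent hosts test02`, which consults /etc/hosts directly.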
Re-running .../bin/kafka-console-consumer.sh --bootstrap-server 10.10.151.12:6667 --topic flink_test
now successfully retrieves data.
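One related gotcha worth noting (general kafka-console-consumer.sh behavior, not part of the original troubleshooting): the console consumer starts from the latest offset by default, so even a correctly connected consumer prints nothing until new messages arrive. To read messages already stored in the topic, pass --from-beginning:

```shell
.../bin/kafka-console-consumer.sh --bootstrap-server 10.10.151.12:6667 \
  --topic flink_test --from-beginning
```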
Cause analysis
1. The existing Kafka consumers and the Kafka brokers live inside the cluster and share the same hosts file, so hostname resolution was never a problem there.
2. The new consumer runs on a machine I added outside the cluster; it only needs to consume part of the cluster's data for experiments.
3. The Kafka configuration in the cluster uses hostnames, so the newly added machine cannot connect without the hosts entries.
4. The Kafka version is an older 1.0.0. Since our project is finalized on that version to avoid conflicts, we will stay on 1.0.0 for now; I will upgrade it in the next release.
This concludes the walkthrough of why a Kafka consumer may retrieve no data from a remote bootstrap-server and how to fix it. Try the steps against your own setup to match theory with practice.