2025-01-17 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This exception occurred today when I submitted a topology to the Storm cluster: storm.kafka.UpdateOffsetException
java.lang.RuntimeException: storm.kafka.UpdateOffsetException
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:135)
    at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:106)
    at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
    at backtype.storm.daemon.executor$fn__5694$fn__5707$fn__5758.invoke(executor.clj:819)
    at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)
Caused by: storm.kafka.UpdateOffsetException
    at storm.kafka.KafkaUtils.fetchMessages(KafkaUtils.java:186)
    at storm.kafka.trident.TridentKafkaEmitter.fetchMessages(TridentKafkaEmitter.java:132)
    at storm.kafka.trident.TridentKafkaEmitter.doEmitNewPartitionBatch(TridentKafkaEmitter.java:113)
    at storm.kafka.trident.TridentKafkaEmitter.failFastEmitNewPartitionBatch(TridentKafkaEmitter.java:72)
    at storm.kafka.trident.TridentKafkaEmitter.emitNewPartitionBatch(TridentKafkaEmitter.java:79)
    at storm.kafka.trident.TridentKafkaEmitter.access$000(TridentKafkaEmitter.java:46)
    at storm.kafka.trident.TridentKafkaEmitter$1.emitPartitionBatch(TridentKafkaEmitter.java:204)
    at storm.kafka.trident.TridentKafkaEmitter$1.emitPartitionBatch(TridentKafkaEmitter.java:194)
    at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.emitBatch(OpaquePartitionedTridentSpoutExecutor.java:127)
    at storm.trident.spout.TridentSpoutExecutor.execute(TridentSpoutExecutor.java:82)
    at storm.trident.topology.TridentBoltExecutor.execute(TridentBoltExecutor.java:370)
    at backtype.storm.daemon.executor$fn__5694$tuple_action_fn__5696.invoke(executor.clj:690)
    at backtype.storm.daemon.executor$mk_task_receiver$fn__5615.invoke(executor.clj:436)
    at backtype.storm.disruptor$clojure_handler$reify__5189.onEvent(disruptor.clj:58)
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:132)
    ... 6 more
This means the offset the spout is trying to fetch is no longer available on the broker, which is usually related to Kafka's retention configuration: if messages expire and are deleted, or the topic grows so large that old log segments are removed, the stored offset falls out of range. In that case, when useStartOffsetTimeIfOffsetOutOfRange is enabled, the spout falls back to reading from the offset given by startOffsetTime.
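As a sketch of that fallback configuration (assuming the storm-kafka Trident spout used in the stack trace above; the ZooKeeper host and topic name are placeholders for your cluster):

```java
import storm.kafka.BrokerHosts;
import storm.kafka.ZkHosts;
import storm.kafka.trident.TridentKafkaConfig;
import kafka.api.OffsetRequest;

public class SpoutConfigExample {
    public static TridentKafkaConfig build() {
        // "localhost:2181" and "my-topic" are placeholders
        BrokerHosts hosts = new ZkHosts("localhost:2181");
        TridentKafkaConfig conf = new TridentKafkaConfig(hosts, "my-topic");

        // If the stored offset is out of range (e.g. the segment was
        // deleted by retention), restart from startOffsetTime instead
        // of failing with UpdateOffsetException
        conf.useStartOffsetTimeIfOffsetOutOfRange = true;
        // Earliest offset still on the broker; use LatestTime() to
        // skip ahead to the head of the log instead
        conf.startOffsetTime = OffsetRequest.EarliestTime();
        return conf;
    }
}
```

Whether EarliestTime() or LatestTime() is appropriate depends on whether reprocessing old data or minimizing lag matters more for the topology.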
In my case, however, the cause was that stale offset state for this spout already existed under the corresponding ZooKeeper directory from a previous run of the topology.
Solution: start zkCli.sh, go into the /transactional node, and delete the child znode that corresponds to the Trident stream (spout) id.
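A sketch of that cleanup session (the host and the node name my-stream-id are placeholders; check the output of ls for the actual id, and note that newer zkCli builds use deleteall instead of rmr):

```shell
# Connect to the ZooKeeper ensemble Storm uses (host is a placeholder)
zkCli.sh -server localhost:2181

# Inside the zkCli prompt:
# list the Trident transactional state nodes
ls /transactional
# recursively delete the node for the affected stream id (placeholder)
rmr /transactional/my-stream-id
```

After the node is removed, resubmitting the topology lets the spout initialize fresh offset state instead of reading the stale one.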
© 2024 shulou.com SLNews company. All rights reserved.