This article shows how to read Kafka messages together with their offsetRanges in a Spark Streaming job. I hope the detailed walkthrough below is useful to you.
When reading messages from Kafka topics in a Spark Streaming job, we sometimes need to record the offsetRange of each batch of messages as it is read, for example to save the offsets ourselves. To achieve this, the following two pieces of code (Code 1 and Code 2) are both correct and equivalent.
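All four snippets below share the same surrounding setup, which is left implicit. As a point of reference only, here is a minimal sketch of that setup for the old spark-streaming-kafka (Kafka 0.8) Java API; the application name, batch interval, broker list, and topic name are placeholder assumptions, and processEachRDD, outputFolderPath, definedDuration, zkClient, and zkPathRoot are assumed to be defined elsewhere in the job.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;
import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.HasOffsetRanges;
import org.apache.spark.streaming.kafka.KafkaUtils;
import org.apache.spark.streaming.kafka.OffsetRange;
import scala.Tuple2;

// Streaming context with a placeholder 10-second batch interval.
SparkConf conf = new SparkConf().setAppName("KafkaOffsetRangeDemo"); // placeholder name
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

// Direct-stream parameters; broker list and topic are placeholders.
Map<String, String> kafkaParams = new HashMap<String, String>();
kafkaParams.put("metadata.broker.list", "broker1:9092,broker2:9092");
Set<String> topicsSet = new HashSet<String>(Arrays.asList("myTopic"));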
Code 1 (correct):
JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
        jssc,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        kafkaParams,
        topicsSet);
messages.foreachRDD(
        new Function<JavaPairRDD<String, String>, Void>() {
            @Override
            public Void call(JavaPairRDD<String, String> rdd) throws Exception {
                // rdd comes straight from the direct stream, so it is a KafkaRDD,
                // which implements HasOffsetRanges; the cast succeeds.
                OffsetRange[] offsets = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
                JavaRDD<String> valueRDD = rdd.values();
                long msgNum = processEachRDD(valueRDD, outputFolderPath, definedDuration);
                if (msgNum > 0 && zkPathRoot != null) {
                    writeOffsetToZookeeper(zkClient, zkPathRoot, offsets);
                }
                return null;
            }
        });
Code 2 (correct):
JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
        jssc,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        kafkaParams,
        topicsSet);
final AtomicReference<OffsetRange[]> offsetRanges = new AtomicReference<>();
JavaDStream<String> lines = messages.transformToPair(
        new Function<JavaPairRDD<String, String>, JavaPairRDD<String, String>>() {
            @Override
            public JavaPairRDD<String, String> call(JavaPairRDD<String, String> rdd) throws Exception {
                // Capture the offset ranges while the RDD is still the KafkaRDD
                // produced by the direct stream.
                OffsetRange[] offsets = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
                offsetRanges.set(offsets);
                return rdd;
            }
        }).map(new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> tuple2) {
                return tuple2._2();
            }
        });
lines.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        long msgNum = processEachRDD(rdd, outputFolderPath, definedDuration);
        if (msgNum > 0 && zkPathRoot != null) {
            OffsetRange[] offsets = offsetRanges.get();
            writeOffsetToZookeeper(zkClient, zkPathRoot, offsets);
        }
        return null;
    }
});
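Both correct versions delegate the actual offset bookkeeping to a writeOffsetToZookeeper helper that is never shown. Purely as an illustration, here is one minimal sketch of such a helper built on the I0Itec ZkClient; the per-topic/per-partition path layout and storing untilOffset as a string are assumptions, not part of the original code.
import org.I0Itec.zkclient.ZkClient;
import org.apache.spark.streaming.kafka.OffsetRange;

// Hypothetical helper: persist the end offset of each consumed range under
// a zkPathRoot/topic/partition node (the layout is an assumption).
private static void writeOffsetToZookeeper(ZkClient zkClient, String zkPathRoot, OffsetRange[] offsets) {
    for (OffsetRange o : offsets) {
        String path = zkPathRoot + "/" + o.topic() + "/" + o.partition();
        if (!zkClient.exists(path)) {
            zkClient.createPersistent(path, true); // also creates missing parent nodes
        }
        // untilOffset() is the first offset *not* yet read, i.e. where to resume.
        zkClient.writeData(path, String.valueOf(o.untilOffset()));
    }
}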
Note, however, that the following two pieces of code (Code 3 and Code 4) are both incorrect, and each throws:
java.lang.ClassCastException: org.apache.spark.rdd.MapPartitionsRDD cannot be cast to org.apache.spark.streaming.kafka.HasOffsetRanges
The reason is that only the RDDs generated directly by the direct stream are KafkaRDDs, which implement HasOffsetRanges. Once the stream has passed through transform() or map(), the resulting RDD is a MapPartitionsRDD, so the cast fails; the offset ranges must be captured on the first RDD in the chain, before any transformation, as Code 1 and Code 2 do.
Code 3 (error):
JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
        jssc,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        kafkaParams,
        topicsSet);
messages.transform(new Function<JavaPairRDD<String, String>, JavaRDD<String>>() {
    @Override
    public JavaRDD<String> call(JavaPairRDD<String, String> rdd) throws Exception {
        return rdd.values();
    }
}).foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        // rdd is now the MapPartitionsRDD produced by transform(), not a
        // KafkaRDD, so this cast throws the ClassCastException above.
        OffsetRange[] offsets = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
        long msgNum = processEachRDD(rdd, outputFolderPath, definedDuration);
        if (msgNum > 0 && zkPathRoot != null) {
            writeOffsetToZookeeper(zkClient, zkPathRoot, offsets);
        }
        return null;
    }
});
Code 4 (error):
JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
        jssc,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        kafkaParams,
        topicsSet);
messages.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
    }
}).foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        // Same problem as Code 3: after map() the RDD is a MapPartitionsRDD,
        // so the cast to HasOffsetRanges fails.
        OffsetRange[] offsets = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
        long msgNum = processEachRDD(rdd, outputFolderPath, definedDuration);
        if (msgNum > 0 && zkPathRoot != null) {
            writeOffsetToZookeeper(zkClient, zkPathRoot, offsets);
        }
        return null;
    }
});
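For reference, the newer spark-streaming-kafka-0-10 integration makes the same bookkeeping more direct: the offset ranges are still read from the untransformed RDD, but they can be committed back to Kafka itself instead of ZooKeeper. A minimal sketch, with placeholder broker, group id, and topic:
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.kafka010.CanCommitOffsets;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.HasOffsetRanges;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import org.apache.spark.streaming.kafka010.OffsetRange;

Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "broker1:9092");           // placeholder broker
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);
kafkaParams.put("group.id", "myGroup");                         // placeholder group id
kafkaParams.put("enable.auto.commit", false);                   // commit manually below

JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
        jssc,
        LocationStrategies.PreferConsistent(),
        ConsumerStrategies.<String, String>Subscribe(Collections.singletonList("myTopic"), kafkaParams));

stream.foreachRDD(rdd -> {
    // As in Code 1 and Code 2, read the ranges from the untransformed RDD.
    OffsetRange[] offsetRanges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
    // ... process rdd here ...
    // Commit the ranges back to Kafka once processing has succeeded.
    ((CanCommitOffsets) stream.inputDStream()).commitAsync(offsetRanges);
});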
The above is how to read Kafka messages together with their offsetRanges in a Spark Streaming job. I hope you have found it useful.