2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article walks through a Kafka 1.0.0 code sample in detail. The material is shared for reference; I hope you will have a better understanding of the relevant concepts after reading it.
package kafka.demo;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

/**
 * Description: Kafka 1.0.0 producer demo
 *
 * @author guangshihao
 * @date September 19, 2018
 */
public class KafkaProducerDemo {

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        /*
         * "acks" controls whether sending data requires acknowledgment from the server.
         * It takes one of three values: 0, 1, -1.
         * 0: the producer never waits for an ack from the broker (the 0.7 behavior).
         *    This option gives the lowest latency but the weakest durability guarantee;
         *    some data is lost if the server fails.
         * 1: the producer gets an ack once the leader replica has received the data.
         *    This option gives better durability, since the client does not return until
         *    the server acknowledges that the request was processed successfully.
         *    However, if the write reaches only the leader and the leader fails before
         *    it is replicated, the message may be lost.
         * -1: the producer does not get an ack until all in-sync replicas (ISR) have
         *    received the data. This option gives the best durability: as long as one
         *    replica is alive, no data is lost.
         */
        props.put("acks", "1");
        // Configure the default partitioning strategy
        props.put("partitioner.class", "org.apache.kafka.clients.producer.internals.DefaultPartitioner");
        // Serializer class for record keys
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Serializer class for record values
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Kafka broker addresses, in the format host1:port1,host2:port2
        props.put("bootstrap.servers", "bigdata01:9092,bigdata02:9092,bigdata03:9092");
        // Target topic
        String topic = "test7";

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // The loop body and bound were garbled in the original listing;
        // sending a few test messages is a plausible reconstruction.
        for (int i = 1; i <= 5; i++) {
            producer.send(new ProducerRecord<>(topic, "message " + i));
        }
        producer.close();
    }
}

The original article also included a consumer sample, of which only fragments survive: a wakeup() call after more than 5 iterations, a WakeupException handler, and a close() in a finally block. A minimal reconstruction around those fragments (configuration values other than bootstrap.servers and the topic are illustrative, since the originals were lost):

package kafka.demo;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class KafkaConsumerDemo {

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "bigdata01:9092,bigdata02:9092,bigdata03:9092");
        // Group id is illustrative; the original value did not survive extraction
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test7"));
        int polls = 0;
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.offset() + ": " + record.value());
                }
                // Per the surviving fragment, wakeup() is called after more than 5
                // iterations; the next poll() then throws WakeupException and the
                // loop ends.
                if (++polls > 5) {
                    consumer.wakeup();
                }
            }
        } catch (WakeupException e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
}

This concludes the Kafka 1.0.0 code sample analysis. I hope the above content is helpful to you. If you found the article useful, feel free to share it so more people can see it.
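The sample configures partitioner.class but never shows what partitioning actually does. Below is a simplified, self-contained sketch of key-based partitioning. Note the assumptions: Kafka's real DefaultPartitioner in 1.0.0 hashes the serialized key bytes with murmur2 (and round-robins when the key is null); the String.hashCode() used here, and the class and method names, are stand-ins for illustration only.

```java
public class PartitionSketch {

    // Simplified stand-in for Kafka's DefaultPartitioner: map a key to one of
    // numPartitions partitions deterministically. Kafka itself uses murmur2 on
    // the serialized key bytes; String.hashCode() here is for illustration only.
    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the modulo result is always non-negative
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always lands in the same partition, which is why records
        // that share a key stay ordered relative to each other.
        System.out.println(partitionFor("user-42", 3));
        System.out.println(partitionFor("user-42", 3));
        System.out.println(partitionFor("user-7", 3));
    }
}
```

The deterministic key-to-partition mapping is the design choice that lets Kafka guarantee per-key ordering while still spreading load across partitions.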