2025-04-03 Update From: SLTechnology News&Howtos > Servers
This article collects frequently asked questions about Kafka producers, consumers, and brokers, together with brief answers where available. I hope it helps resolve your doubts about Kafka Producer-related problems.
Producer-related questions
1: How should I set metadata.broker.list?
The producer uses metadata.broker.list to fetch the metadata it needs; once the metadata has been obtained, it starts producing. Produce requests are then sent directly to the broker that holds the relevant topic/partition. Brokers register their ip/port in ZooKeeper, and any broker can serve a metadata request, so the client only has to ensure that at least one broker in metadata.broker.list is reachable. One way to achieve this is to put the brokers behind a load balancer with a VIP (virtual IP).
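As a minimal sketch, the old-style (Kafka 0.8) producer configuration might look like the plain dict below; the broker hostnames are hypothetical, and only one entry in metadata.broker.list needs to be reachable, since the full cluster is discovered from the metadata response.

```python
# Sketch of an old-style (Kafka 0.8) producer configuration, shown as a
# plain dict rather than a real client object. Broker names are examples.
producer_config = {
    # Bootstrap list: used ONLY for the initial metadata request.
    # Any reachable broker here is enough; it need not be a leader.
    "metadata.broker.list": "broker1:9092,broker2:9092",
    "serializer.class": "kafka.serializer.StringEncoder",
}
```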
2: Why does the producer get a QueueFullException when running in async mode?
This typically happens when the producer sends messages much faster than the brokers can handle. If your application cannot afford to block, the only remedy is to add enough new brokers so that, together with the existing ones, they can keep up. If blocking is acceptable, you can set:
queue.enqueue.timeout.ms=-1
With this setting, once the in-memory queue is full the producer blocks instead of discarding data.
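The two behaviors can be illustrated with a toy bounded queue (a stand-in for the async producer's internal send queue, not real Kafka client code): a timeout of -1 blocks until space frees up, while a finite timeout fails fast, which is where the QueueFullException comes from.

```python
import queue

# Toy illustration of the async producer's bounded in-memory send queue.
send_queue = queue.Queue(maxsize=2)

def enqueue(msg, timeout_ms):
    """timeout_ms = -1 means block forever; otherwise fail when full."""
    if timeout_ms == -1:
        send_queue.put(msg)  # blocks until space is available
        return True
    try:
        send_queue.put(msg, timeout=timeout_ms / 1000.0)
        return True
    except queue.Full:
        return False         # this is where QueueFullException would arise

enqueue("a", 0)
enqueue("b", 0)
dropped = not enqueue("c", 0)   # queue is full, fail fast
```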
3: With the ZooKeeper-based producer in Kafka 0.7, why do I see data on only some brokers, not all?
This problem is specific to the Kafka 0.7 series: http://apache.markmail.org/thread/c7tdalfketpusqkg
In short, for a brand-new topic the producer will use all existing brokers. However, if the topic already exists on some brokers and you then add a new broker, existing producers will not see the newly added broker. A workaround is to manually create the log directories for those topics on the newly added broker.
4: Why did our brokers stop receiving data from the producer after we changed the compression codec?
This happened when turning on Gzip compression by setting compression.codec to 1. After the change, no data arrived at the brokers even seconds after sending, and no errors were logged anywhere. Adding log4j.properties to the producer's classpath and setting the log level to DEBUG revealed a missing-class error for org/xerial/snappy/SnappyInputStream on the producer side; the problem disappeared after adding the Snappy jar to the classpath.
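For reference, the old producer's compression.codec setting takes a numeric code; the mapping below follows the Kafka 0.8 documentation, shown here as a plain dict sketch rather than real client code.

```python
# compression.codec values in the old (0.8-era) producer config:
# 0 = none, 1 = gzip, 2 = snappy.
CODECS = {0: "none", 1: "gzip", 2: "snappy"}

producer_config = {
    "metadata.broker.list": "broker1:9092",   # hypothetical broker
    "compression.codec": "1",  # gzip; with "2" (snappy) the Snappy jar
                               # must also be on the producer classpath
}
```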
5: Can we delete a topic in Kafka?
As of Kafka version 0.8 there is no direct way to delete a topic. To remove one, you have to delete the topic's log data stored on the brokers, and also delete the state and data that remain in ZooKeeper.
Consumer-related questions
1: Why does our consumer never get any data?
By default, if a consumer is consuming for the first time ever (it has no committed offset), it ignores all data already in the topic and only consumes messages produced after the consumer starts. So either produce more data after startup, or set auto.offset.reset to "smallest".
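The effect of auto.offset.reset can be sketched as a toy model (pure Python, not real client code), assuming a log holding offsets 0 through 9 and a consumer with no committed offset:

```python
# Toy model of auto.offset.reset for a first-time consumer.
log_offsets = list(range(10))   # existing messages at offsets 0..9

def starting_offset(committed, auto_offset_reset):
    if committed is not None:
        return committed
    # "smallest": start from the beginning of the log;
    # "largest" (the default): start after the last existing message,
    # i.e. only newly produced data is seen.
    if auto_offset_reset == "smallest":
        return log_offsets[0]
    return log_offsets[-1] + 1

default_start = starting_offset(None, "largest")   # sees no existing data
replay_start = starting_offset(None, "smallest")   # sees everything
```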
2: Why does the consumer get an InvalidMessageSizeException when fetching?
Usually this means the consumer's fetch size is too small. On each fetch the consumer reads up to a configured number of bytes from the broker; if that limit is smaller than the largest single message in the Kafka log, an InvalidMessageSizeException is thrown. To solve this, increase the property:
fetch.message.max.bytes (0.8)
fetch.size (0.7)
The default fetch.size is 300000 bytes.
3: Should my consumers share a single group id, or should each have its own?
If all consumers use the same group id, the messages in the topic are distributed across those consumers: each consumer gets a non-overlapping subset of the messages. Adding more consumers to the same group increases parallelism and overall consumption throughput. If instead each consumer has its own group id, every consumer gets a complete copy of all the messages.
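Both modes can be illustrated with a toy simulation (round-robin as a stand-in for real partition assignment; not actual Kafka behavior at the message level):

```python
# Toy illustration of group semantics: same group -> messages split
# across consumers; distinct groups -> each consumer gets everything.
messages = list(range(6))

def same_group(consumers):
    # Round-robin stand-in for partition assignment within one group.
    return {c: messages[i::len(consumers)] for i, c in enumerate(consumers)}

def separate_groups(consumers):
    return {c: list(messages) for c in consumers}

shared = same_group(["c0", "c1"])            # non-overlapping subsets
independent = separate_groups(["c0", "c1"])  # full copies
```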
4: Why do some consumers in a consumer group never get any data?
Within a consumer group, a topic partition is the smallest unit of assignment: each partition is consumed by exactly one consumer in the group. So if the group has more consumers than the topic has partitions, some consumers will sit idle and get no data. To solve this, increase the number of partitions.
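A small sketch of the assignment rule shows why the extra consumers idle (round-robin here is an illustrative assignment strategy, not Kafka's exact algorithm):

```python
# Each partition goes to exactly one consumer in the group, so with more
# consumers than partitions, some consumers receive nothing.
def assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

a = assign(partitions=[0, 1], consumers=["c0", "c1", "c2"])
idle = [c for c, ps in a.items() if not ps]   # consumers with no partitions
```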
5: Why are there so many rebalances in our consumer log?
A typical cause of excessive rebalancing is consumer-side GC pauses. If that is the case, you will see ZooKeeper session expirations in the consumer log (grep for "expired"). Occasional rebalances are normal, but if they happen too often they reduce consumption throughput, and the Java GC settings need tuning.
6: Can we predict the result of a Kafka consumer rebalance?
7: My consumer seems to have stopped. Why?
8: Why is my data delayed during consumption?
9: How can I improve the throughput of a remote Kafka consumer?
10: How do I reset the offset during consumption?
11: What if I don't want Kafka to manage consumption offsets, but want to manage them myself?
12: What is the relationship between fetch.wait.max.ms and socket.timeout.ms?
13: How can I accurately fetch a given message from Kafka?
14: How do I get the exact offset of a specified message via OffsetFetchRequest and a timestamp?
Broker-related questions
1: How does Kafka depend on ZooKeeper?
2: Why did the controlled shutdown fail?
3: Why can't my consumer/producer connect to the broker?
4: Why do our partition leaders migrate on their own?
5: How many topics can we have?
6: How should we choose the number of partitions for a topic?
7: Why do I see lots of leader-not-local exceptions on the broker during controlled shutdown?
8: How can we reduce churn in the ISR? When does a broker leave the ISR?
9: Why do we get LeaderNotAvailable or NotLeaderForPartition exceptions when bouncing a broker?
10: Can we dynamically add a new broker to the cluster?
This concludes the overview of common Kafka Producer-related questions. I hope it has helped resolve your doubts; pairing the theory above with hands-on practice is the best way to learn, so go and try it!