2025-01-19 Update From: SLTechnology News&Howtos
This article introduces several methods for setting the Kafka message body size, along with the problems you are likely to run into in practice.
A few days ago, a colleague told me that sending a message failed with an exception complaining that the request data was too large. After increasing max.request.size on the producer, a different exception appeared.
After checking the documentation, we found that the broker side also limits the size of messages sent by producers, via a parameter called message.max.bytes. This parameter determines the maximum message size the broker will accept; its default value is 1,000,012 bytes (roughly 977 KB). Since max.request.size had been raised to 2 MB, clearly much larger than message.max.bytes, any message larger than about 977 KB triggered the exception above.
It is worth mentioning that there is also a topic-level parameter called max.message.bytes. It applies only to a single topic and can be configured dynamically, overriding the global message.max.bytes. The advantage is that you can set different accepted message sizes for different topics without restarting the broker.
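As a sketch of how such a per-topic override is applied, assuming a broker reachable at localhost:9092 and a hypothetical topic named big-messages, the stock kafka-configs.sh tool can set and verify the dynamic config (on very old Kafka versions the tool took --zookeeper instead of --bootstrap-server):

```shell
# Set a 6 MB per-topic limit without touching the global message.max.bytes
# (localhost:9092 and the topic name "big-messages" are placeholders)
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name big-messages \
  --add-config max.message.bytes=6291456

# Verify that the dynamic config took effect
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --describe --entity-type topics --entity-name big-messages
```

No broker restart is needed; the override takes effect for new produce requests to that topic.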
This is not the end: the consumer side has a limit of its own, fetch.max.bytes, which determines the maximum number of bytes the consumer fetches from the broker in a single request. If this value is smaller than the size of messages the producer is allowed to send, the consumer risks getting stuck on oversized messages. (In current Kafka versions this is a soft limit: if the first record batch in the first non-empty partition of a fetch exceeds it, that batch is still returned so the consumer can make progress. Even so, it is safer not to rely on this escape hatch.)
Therefore, in summary, the three limits should be set like this:

producer: max.request.size=5242880 (5 MB)
broker: message.max.bytes=6291456 (6 MB)
consumer: fetch.max.bytes=7340032 (7 MB)

so that max.request.size < message.max.bytes < fetch.max.bytes.
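The ordering above can be checked mechanically before deploying a set of values. A minimal sketch in plain Python (no Kafka client required; the helper name is ours, not part of any Kafka API):

```python
# Hypothetical helper: validate that the three size limits are ordered so a
# message accepted at one hop can always pass through the next hop.
def validate_size_configs(max_request_size: int,
                          message_max_bytes: int,
                          fetch_max_bytes: int) -> None:
    if not (max_request_size < message_max_bytes < fetch_max_bytes):
        raise ValueError(
            "expected max.request.size < message.max.bytes < fetch.max.bytes, "
            f"got {max_request_size} / {message_max_bytes} / {fetch_max_bytes}"
        )

# The values suggested above: 5 MB, 6 MB, 7 MB
validate_size_configs(5 * 1024 * 1024,   # producer: max.request.size
                      6 * 1024 * 1024,   # broker:   message.max.bytes
                      7 * 1024 * 1024)   # consumer: fetch.max.bytes
print("size configs are consistent")
```

Running this with the defaults from the incident (max.request.size=2 MB against the broker default of 1,000,012 bytes) would raise immediately, surfacing the misconfiguration before any message is sent.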
In addition, recall the role of the batch.size parameter mentioned earlier. From the source code, each message sent by the producer is wrapped in a ProducerRecord and then appended to a ProducerBatch by the record accumulator, RecordAccumulator. Because creating a ProducerBatch allocates a block of memory of batch.size bytes, frequently creating and releasing batches carries a high performance overhead. RecordAccumulator therefore keeps an internal BufferPool that reuses ByteBuffers, but only buffers of exactly batch.size are pooled: if a ProducerBatch is larger than batch.size, its buffer is not taken from the BufferPool and is not returned to it for reuse.
The question is: if max.request.size is greater than batch.size, will a message be split and sent to the broker in multiple batches? The answer is no. A single ProducerRecord is never split across batches: as described above, if a record is already larger than batch.size, the ProducerBatch created for it contains only that one record, and its buffer bypasses the BufferPool entirely.
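The pooling rule described above can be sketched as a toy model (this is an illustrative simplification, not Kafka's actual BufferPool implementation): only buffers of exactly the poolable size are recycled through the free list, while oversized allocations are left to the garbage collector.

```python
class BufferPool:
    """Toy model of the producer BufferPool: only allocations of exactly
    `poolable_size` (i.e. batch.size) are recycled through the free list."""

    def __init__(self, poolable_size: int):
        self.poolable_size = poolable_size
        self.free = []          # recycled fixed-size buffers
        self.allocations = 0    # fresh allocations (the cost we want to avoid)

    def allocate(self, size: int) -> bytearray:
        # A request of exactly batch.size can reuse a pooled buffer.
        if size == self.poolable_size and self.free:
            return self.free.pop()
        self.allocations += 1
        return bytearray(size)

    def deallocate(self, buf: bytearray) -> None:
        # Oversized buffers (record larger than batch.size) are never pooled;
        # they are simply dropped and reclaimed by the garbage collector.
        if len(buf) == self.poolable_size:
            self.free.append(buf)

pool = BufferPool(poolable_size=16384)              # default batch.size: 16 KB
b1 = pool.allocate(16384)
pool.deallocate(b1)
b2 = pool.allocate(16384)                           # reuses b1: no new allocation
big = pool.allocate(1048576)                        # 1 MB record: fresh allocation
pool.deallocate(big)                                # ...and it is not pooled
print(pool.allocations)  # 2: one pooled buffer plus one oversized one
```

The model makes the tuning point concrete: every record larger than batch.size costs a fresh allocation, which is exactly the overhead the BufferPool exists to avoid.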
Therefore, when tuning a Kafka producer, pay close attention to the relationship between batch.size and max.request.size for your workload: if records routinely exceed batch.size, buffers cannot be reused, and memory is frequently allocated and released.
That concludes this look at setting the Kafka message body size. Thanks for reading.
© 2024 shulou.com SLNews company. All rights reserved.