This article shares a sample analysis of the Kafka Consumer configuration parameters. The editor finds it very practical, so it is shared here as a reference; follow along to take a look.
## Consumer ##
# The core Consumer configuration parameters are group.id and zookeeper.connect.
# The unique group ID. By setting the same group.id, multiple processes indicate that they all belong to the same consumer group, which determines partition ownership.
group.id =
# The consumer's ID; if not set, it is generated automatically.
consumer.id =
# An ID used for tracking and troubleshooting, preferably the same as group.id.
client.id =
# The ZooKeeper connect string; it must point to the same ZK cluster configured for the brokers.
zookeeper.connect = debugo01:2182,debugo02:2182,debugo03:2182
# ZooKeeper's session (heartbeat) timeout. A consumer that has not checked in within this time is considered dead.
zookeeper.session.timeout.ms = 6000
# Maximum time to wait when establishing a connection to ZooKeeper.
zookeeper.connection.timeout.ms = 6000
# How far a ZooKeeper follower may lag behind the leader.
zookeeper.sync.time.ms = 2000
# What to do when there is no initial offset in ZooKeeper, or when the offset is out of range:
# smallest: reset to the smallest offset
# largest: reset to the largest offset
# anything else: throw an exception to the consumer
auto.offset.reset = largest
# The socket timeout. The actual timeout is fetch.wait.max.ms + socket.timeout.ms.
socket.timeout.ms = 30 * 1000
# Receive buffer size of the socket.
socket.receive.buffer.bytes = 64 * 1024
# Message size limit fetched from each partition.
fetch.message.max.bytes = 1024 * 1024
# If true, the consumer commits the offset to ZooKeeper after consuming messages, so that when the consumer fails, a new consumer can pick up the latest offset from ZooKeeper.
auto.commit.enable = true
# Time interval between automatic commits.
auto.commit.interval.ms = 60 * 1000
# The maximum number of message chunks buffered for consumption; each chunk can be up to fetch.message.max.bytes.
queued.max.message.chunks = 10
# When a new consumer joins the group, a rebalance is attempted and the ownership of partitions is migrated to the new consumer. This setting is the number of attempts.
rebalance.max.retries = 4
# Interval between each rebalance attempt.
rebalance.backoff.ms = 2000
# Backoff time before each retry of leader election.
refresh.leader.backoff.ms = 200
# The minimum amount of data the server sends to the consumer. If this value is not met, the request waits until it is. The default of 1 means respond immediately.
fetch.min.bytes = 1
# The maximum time a consumer request waits if fetch.min.bytes is not met.
fetch.wait.max.ms = 100
# If no new message is available for consumption within the specified time, throw an exception. The default of -1 means no limit.
consumer.timeout.ms = -1
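To put these settings in context, here is a minimal sketch that wires them into the classic ZooKeeper-based high-level consumer these parameters belong to. It assumes the Kafka 0.8.x-era Java API (kafka.javaapi.consumer.ConsumerConnector); the topic name and group id are illustrative.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The two core settings: group membership and the ZK cluster.
        props.put("group.id", "debugo-group");  // illustrative group id
        props.put("zookeeper.connect", "debugo01:2182,debugo02:2182,debugo03:2182");
        props.put("zookeeper.session.timeout.ms", "6000");
        props.put("auto.commit.interval.ms", "60000");
        props.put("auto.offset.reset", "largest");

        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream for one (illustrative) topic.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("test-topic", 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            consumer.createMessageStreams(topicCountMap);
        ConsumerIterator<byte[], byte[]> it =
            streams.get("test-topic").get(0).iterator();

        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
        consumer.shutdown();
    }
}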
-- Kafka configuration parameters: Consumer details --
group.id default value: none
Uniquely names the consumer group; processes with the same group name belong to the same consumer group.
zookeeper.connect default value: none
Specifies the ZooKeeper connect string, in the form hostname:port, where hostname and port are the host name and port of a node of the ZooKeeper cluster. Since a node of the ZooKeeper cluster may die, you can specify the connect strings of multiple nodes, as follows:
hostname1:port1,hostname2:port2,hostname3:port3
ZooKeeper also allows you to specify a "chroot" path, which makes the Kafka cluster store the data it keeps in ZooKeeper under the specified path. This lets multiple Kafka clusters, or other applications, share the same ZooKeeper cluster. Such a connect string looks like:
hostname1:port1,hostname2:port2,hostname3:port3/chroot/path
consumer.id default value: null
If it is not set, it is generated automatically.
socket.timeout.ms default value: 30 * 1000
The timeout of socket requests. The actual timeout is fetch.wait.max.ms + socket.timeout.ms.
socket.receive.buffer.bytes default value: 64 * 1024
The byte size of the socket's receive buffer.
fetch.message.max.bytes default value: 1024 * 1024
The maximum number of bytes fetched per request from each partition of a topic. Since each partition's fetched data is loaded into memory, this setting helps bound the memory used by the consumer. It must not be smaller than the maximum message size configured on the server side; otherwise the producer could send messages larger than the consumer is able to fetch.
auto.commit.enable default value: true
If set to true, the consumer periodically commits the offsets of consumed messages to ZooKeeper. When the consumer process dies, a new consumer can pick up from the committed offset and continue working.
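As a sketch of the manual alternative, auto-commit can be disabled and offsets committed by hand once a message has actually been processed. This continues the consumer sketch above; ConsumerConnector.commitOffsets() commits the current offsets to ZooKeeper, and handle() is a hypothetical processing step:

props.put("auto.commit.enable", "false");  // turn off periodic commits
// ... build the ConsumerConnector and iterator as in the sketch above ...
while (it.hasNext()) {
    handle(it.next().message());  // hypothetical processing step
    consumer.commitOffsets();     // commit to ZooKeeper only after processing succeeds
}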
auto.commit.interval.ms default value: 60 * 1000
The interval at which the consumer commits offsets to ZooKeeper.
queued.max.message.chunks default value: 10
The maximum number of message chunks cached for consumption, each chunk up to fetch.message.max.bytes in size.
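For example, taken together with fetch.message.max.bytes, this bounds the consumer's fetch-buffer memory: with the defaults, each stream may buffer up to queued.max.message.chunks × fetch.message.max.bytes = 10 × 1 MB = 10 MB of messages.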
rebalance.max.retries default value: 4
When a new consumer joins a consumer group, a rebalance takes place, redistributing the assignment of partitions to consumers. If the redistribution fails, it is retried; this configuration sets the maximum number of retries.
fetch.min.bytes default value: 1
The minimum number of bytes a fetch request should return. If less data is available, the request waits until enough has accumulated.
fetch.wait.max.ms default value: 100
The maximum time the server blocks a fetch request when fewer than fetch.min.bytes of messages are available. On timeout, whatever messages are available are sent to the consumer immediately.
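As an illustration of how these two settings trade latency for batching, a throughput-oriented consumer could ask the broker to wait longer for a larger batch. The values below are illustrative assumptions, not recommendations from this article:

// Wait up to 500 ms for at least 64 KB to accumulate before the broker responds.
props.put("fetch.min.bytes", "65536");
props.put("fetch.wait.max.ms", "500");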
rebalance.backoff.ms default value: 2000
The backoff time between rebalance retries.
refresh.leader.backoff.ms default value: 200
The backoff time to wait before a new leader is elected, after the consumer discovers that the leader of a partition has been lost.
auto.offset.reset default value: largest
What to do when the consumer finds that there is no initial offset in ZooKeeper, or that the offset is out of range:
* smallest: automatically reset the offset to the smallest offset.
* largest: automatically reset the offset to the largest offset.
* anything else: throw an exception.
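For instance, a new consumer group that should reprocess a topic from the earliest retained data would set (a one-line sketch in the same Properties style as above):

props.put("auto.offset.reset", "smallest");  // start from the earliest offset when none is stored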
consumer.timeout.ms default value: -1
If no message becomes available for consumption within the specified interval, a timeout exception is thrown.
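The exception in question is kafka.consumer.ConsumerTimeoutException, thrown from the stream iterator's hasNext(). A minimal handling sketch, continuing the consumer above (handle() is again a hypothetical processing step):

import kafka.consumer.ConsumerTimeoutException;

// With e.g. consumer.timeout.ms = 5000, hasNext() throws after 5 s of silence.
try {
    while (it.hasNext()) {
        handle(it.next().message());
    }
} catch (ConsumerTimeoutException e) {
    // No message within consumer.timeout.ms; decide whether to keep polling or shut down.
    consumer.shutdown();
}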
client.id default value: the group id value
A user-specified client id carried in each request, which helps trace calls.
zookeeper.session.timeout.ms default value: 6000
The timeout of the ZooKeeper session. If no heartbeat from the consumer reaches ZK within this period, the Kafka server considers the consumer dead. Set too low, a healthy consumer may be mistaken for dead; set too high, a consumer that really has died takes a long time to be detected by the server.
zookeeper.connection.timeout.ms default value: 6000
The timeout for the client to connect to the ZK server.
zookeeper.sync.time.ms default value: 2000
How far a ZK follower may lag behind the leader.
Thank you for reading! This concludes "Sample Analysis of the Kafka Consumer Configuration Parameters". I hope the above content has been of some help and lets you learn something more; if you think the article is good, share it for more people to see!