2025-01-19 Update From: SLTechnology News&Howtos
The Kafka producer is thread-safe; sharing a single producer instance across multiple threads is generally faster than giving each thread its own producer.
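A minimal sketch of the sharing pattern. To keep it runnable without a broker, a thread-safe `queue.Queue` stands in for a real `KafkaProducer`; with the real client you would likewise create one producer and hand it to every thread.

```python
import queue
import threading

# Stand-in for a KafkaProducer: queue.Queue.put() is thread-safe, much like
# producer.send(). The point is the pattern: ONE shared instance, many threads.
producer = queue.Queue()

def worker(worker_id: int, n: int) -> None:
    for i in range(n):
        # Analogous to producer.send(topic, value) with a real Kafka client.
        producer.put(f"worker-{worker_id}-msg-{i}")

threads = [threading.Thread(target=worker, args=(w, 100)) for w in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(producer.qsize())  # 400: four threads safely shared one producer
```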
From the command line you can view the offset and lag of every consumer in a consumer group — that is, how much unread data has accumulated in Kafka. The following is from the official documentation.
Sometimes it's useful to see the position of your consumers. We have a tool that will show the position of all consumers in a consumer group as well as how far behind the end of the log they are. To run this tool on a consumer group named my-group consuming a topic named my-topic would look like this:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
Note: This will only show information about consumers that use the Java consumer API (non-ZooKeeper-based consumers).
TOPIC     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                      HOST        CLIENT-ID
my-topic  0          2               4               2    consumer-1-029af89c-873c-4751-a720-cefd41a669d6  /127.0.0.1  consumer-1
my-topic  1          2               3               1    consumer-1-029af89c-873c-4751-a720-cefd41a669d6  /127.0.0.1  consumer-1
my-topic  2          2               3               1    consumer-2-42c1abd4-e3b2-425d-a8bb-e1ea49b29bb2  /127.0.0.1  consumer-2
This tool also works with ZooKeeper-based consumers:
bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group my-group
Note: This will only show information about consumers that use ZooKeeper (not those using the Java consumer API).
TOPIC     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID
my-topic  0          2               4               2    my-group_consumer-1
my-topic  1          2               3               1    my-group_consumer-1
my-topic  2          2               3               1    my-group_consumer-2
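As a small illustration of reading this output programmatically, here is a hedged Python sketch that parses `--describe` output and sums the lag per client. It assumes the 8-column, whitespace-separated layout of the first example above; a real run may add or reorder columns, so treat the field indices as assumptions.

```python
# Sketch: total up consumer lag from kafka-consumer-groups.sh --describe output.
# Assumes the 8-column layout:
# TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID

def total_lag(describe_output: str) -> dict:
    """Return {client_id: summed lag across its partitions}."""
    lag_by_client = {}
    for line in describe_output.strip().splitlines():
        fields = line.split()
        if not fields or fields[0] == "TOPIC":   # skip blank lines and header
            continue
        lag, client_id = int(fields[4]), fields[-1]
        lag_by_client[client_id] = lag_by_client.get(client_id, 0) + lag
    return lag_by_client

sample = """\
TOPIC     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID          HOST        CLIENT-ID
my-topic  0          2               4               2    consumer-1-029af89c  /127.0.0.1  consumer-1
my-topic  1          2               3               1    consumer-1-029af89c  /127.0.0.1  consumer-1
my-topic  2          2               3               1    consumer-2-42c1abd4  /127.0.0.1  consumer-2
"""
print(total_lag(sample))  # {'consumer-1': 3, 'consumer-2': 1}
```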
retention
Kafka keeps data after it has been consumed, but not forever: by default it is deleted automatically after 7 days. The retention window is set in the broker config (server.properties) via log.retention.ms, log.retention.minutes, or log.retention.hours, in decreasing order of precedence (the ms setting wins if several are set). The default is log.retention.hours=168.
There is also a size-based retention setting, log.retention.bytes (likewise in server.properties), which caps the size of each partition. Whichever limit — time or size — is reached first triggers deletion.
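In server.properties these settings look like the following; the values are examples, not recommendations:

```properties
# server.properties (broker config)
log.retention.hours=168         # keep data for 7 days (the default)
# log.retention.ms=604800000    # same window in ms; ms takes precedence over hours
log.retention.bytes=1073741824  # cap each partition at ~1 GiB; -1 (default) disables the size limit
```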
Kafka deletes data at segment granularity: only whole segments, one or more at a time, are ever removed.
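The interplay of the two limits and segment granularity can be sketched as a pure simulation. This is an illustrative model, not broker internals: the function name, the segment tuples, and the exact ordering are assumptions.

```python
# Sketch: segment-level retention. Each segment is (base_offset, size_bytes,
# last_modified_ms). A segment is eligible for deletion when it is older than
# retention_ms, or when the partition exceeds retention_bytes (oldest first).
# The active (newest) segment is never deleted.

def segments_to_delete(segments, retention_ms, retention_bytes, now_ms):
    """Return base offsets of segments eligible for deletion."""
    doomed = set()
    # Time-based: last write is older than the retention window.
    for base, _size, ts in segments[:-1]:          # always keep the active segment
        if now_ms - ts > retention_ms:
            doomed.add(base)
    # Size-based: drop oldest segments until the partition fits the cap.
    total = sum(size for _, size, _ in segments)
    for base, size, _ts in segments[:-1]:
        if total <= retention_bytes:
            break
        total -= size
        doomed.add(base)
    return sorted(doomed)

now = 1_000_000_000
day = 86_400_000
segs = [(0, 500, now - 8 * day),    # 8 days old -> expired by time
        (100, 500, now - 2 * day),  # recent, but may exceed the size cap
        (200, 500, now)]            # active segment, always kept
print(segments_to_delete(segs, retention_ms=7 * day,
                         retention_bytes=900, now_ms=now))  # [0, 100]
```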
In addition, retention can be set per topic (via topic-level configs such as retention.ms). For details, see the official documentation: kafka.apache.org/documentation/
See http://kafka.apache.org/documentation/#brokerconfigs for the full list of log.retention.*, log.roll.*, and log.segment.* configs.
auto.offset.reset
What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw exception to the consumer if no previous offset is found for the consumer's group
anything else: throw exception to the consumer.
default: latest
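The policy above can be expressed as a small decision function. This is a hedged simulation of what the setting decides, not the consumer client's actual code; the function and parameter names are illustrative.

```python
# Sketch: what auto.offset.reset decides for one partition. The committed
# offset is considered invalid when it falls outside [log_start, log_end]
# (e.g. because retention deleted that data).

def starting_offset(committed, log_start, log_end, policy="latest"):
    """Pick the first offset the consumer will fetch."""
    if committed is not None and log_start <= committed <= log_end:
        return committed                       # normal case: resume where we left off
    if policy == "earliest":
        return log_start                       # reset to the oldest available data
    if policy == "latest":
        return log_end                         # reset to the end of the log (default)
    raise LookupError("no valid offset and auto.offset.reset=none")

print(starting_offset(None, 10, 50))             # 50  (no committed offset, latest)
print(starting_offset(None, 10, 50, "earliest")) # 10
print(starting_offset(3, 10, 50, "earliest"))    # 10  (committed offset expired)
print(starting_offset(42, 10, 50))               # 42  (valid offset: just resume)
```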
Offsets are tracked per consumer group: each group maintains its own committed offset for every partition of every topic it consumes, and different groups' offsets are independent of one another.