Kafka cluster deployment and verification

I. Download kafka

wget https://www.apache.org/dyn/closer.cgi?path=/kafka/2.3.0/kafka_2.12-2.3.0.tgz

II. zookeeper

2.1 Configure zookeeper

tar -zxvf kafka_2.12-2.3.0.tgz
cd /opt/kafka_2.12-2.3.0/config

[miaocunfa@db1 config]$ cat zookeeper.properties | grep -v ^# | grep -v ^$

# tickTime: the basic time unit for heartbeats, in milliseconds. Almost every
# time value in ZooKeeper is an integral multiple of this unit.
# initLimit: expressed in tickTime units; the time allowed for followers to
# synchronize with the leader after a leader election. If there are many
# followers or the leader holds a lot of data, the synchronization time may
# grow, so this value may need to be increased accordingly.
# syncLimit: also expressed in tickTime units and easily confused with
# initLimit; it is the maximum time allowed for a follower or observer to
# interact with the leader (normal request forwarding or ping exchanges,
# i.e. setSoTimeout) after the initial synchronization has completed.
tickTime=2000
initLimit=10
syncLimit=5
# Storage path for in-memory database snapshots. If no transaction-log path
# (dataLogDir) is specified, transaction logs are also stored here by default;
# it is recommended to keep the two on separate devices.
dataDir=/ahdata/kafka-tmp/zookeeper
# Port on which ZooKeeper listens for client connections.
clientPort=2181
# Maximum number of connections a single client (distinguished by IP) may open
# to the same server; 0 means no limit. Setting a limit helps guard against
# DoS attacks.
maxClientCnxns=0
# server.serverid=host:tickpot:electionport
#   server: fixed prefix
#   serverid: the ID assigned to each server (must be between 1 and 255 and
#             must not be repeated across machines)
#   host: hostname
#   tickpot: heartbeat communication port
#   electionport: election port
server.1=172.19.26.3:2888:3888
server.2=172.19.26.6:2888:3888
server.3=172.19.26.4:2888:3888

2.2 Create a server id on each node

# Create a myid file in the dataDir directory on each node; its content must
# match the serverid in the configuration file. (Run one command per node.)
echo 1 > /ahdata/kafka-tmp/zookeeper/myid
echo 2 > /ahdata/kafka-tmp/zookeeper/myid
echo 3 > /ahdata/kafka-tmp/zookeeper/myid

2.3 Start zookeeper

# Run on each of the three nodes:
/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

2.4 Verify zookeeper status

[root@db1 config]# echo stat | nc 172.19.26.3 2181 | grep Mode
Mode: follower
[root@db1 config]# echo stat | nc 172.19.26.4 2181 | grep Mode
Mode: follower
[root@db1 config]# echo stat | nc 172.19.26.6 2181 | grep Mode
Mode: leader
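Checking each node by hand gets tedious as the cluster grows. Below is a minimal convenience sketch, not part of the original walkthrough, that loops over the three ZooKeeper nodes used above and prints each node's role; it assumes nc (netcat) is installed and the 'stat' four-letter command is reachable, exactly as in the manual checks of section 2.4.

for host in 172.19.26.3 172.19.26.4 172.19.26.6; do
    # print the node address followed by its role (leader or follower)
    echo -n "$host: "
    echo stat | nc "$host" 2181 | grep Mode
done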
III. kafka

3.1 Configure kafka

# Unique ID of this machine in the cluster, similar in nature to zookeeper's
# myid.
broker.id=1
# Listeners tell external connectors what protocol, hostname and port to use
# to reach this Kafka service.
listeners=PLAINTEXT://172.19.26.3:9092
# advertised.listeners: "advertised" means declared and published; this group
# of listeners is what the broker publishes for external clients.
advertised.listeners=PLAINTEXT://172.19.26.3:9092
# Maximum number of threads the broker uses to process network requests;
# in general this does not need to be modified.
num.network.threads=3
# Number of threads the broker uses for disk I/O; the value should be greater
# than the number of disks.
num.io.threads=8
# Send buffer size: data is not sent all at once but buffered first and sent
# once a certain size is reached, which improves performance.
socket.send.buffer.bytes=102400
# Receive buffer size: data is flushed to disk once it reaches a certain size.
socket.receive.buffer.bytes=102400
# Maximum size of a request sent to or received from kafka; this value must
# not exceed the JVM heap size.
socket.request.max.bytes=104857600
# If multiple comma-separated directories are configured, newly created topics
# persist their messages into the current directory.
log.dirs=/ahdata/kafka-tmp/kafka-logs
# Default number of partitions per topic; here a topic gets 3 partitions.
num.partitions=3
# Number of threads per data directory used for log recovery.
num.recovery.threads.per.data.dir=1
# High-availability parameters for the cluster; a value greater than 1, such
# as 3, is recommended to ensure availability.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
# Default maximum retention time for messages: 168 hours, i.e. 7 days.
log.retention.hours=168
# Because kafka messages are appended to segment files, a new file is created
# whenever this size is exceeded.
log.segment.bytes=1073741824
# Check the retention time configured above every 300000 milliseconds.
log.retention.check.interval.ms=300000
# zookeeper connection string.
zookeeper.connect=172.19.26.3:2181,172.19.26.4:2181,172.19.26.6:2181
# zookeeper connection timeout.
zookeeper.connection.timeout.ms=6000
# Delay before the first consumer rebalance; the default is 0.
group.initial.rebalance.delay.ms=0
# Allow topics to be deleted.
delete.topic.enable=true

3.2 Start kafka

# Run on each of the three nodes:
/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon config/server.properties
/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon config/server.properties
/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon config/server.properties

3.3 Verify kafka status

[root@db1 kafka_2.12-2.3.0]# echo dump | nc 172.19.26.3 2181 | grep broker
/brokers/ids/1
/brokers/ids/2
[root@db1 kafka_2.12-2.3.0]# echo dump | nc 172.19.26.4 2181 | grep broker
/brokers/ids/1
/brokers/ids/2
[root@db1 kafka_2.12-2.3.0]# echo dump | nc 172.19.26.6 2181 | grep broker
/brokers/ids/1
/brokers/ids/2
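As an alternative to the raw echo dump | nc check, the Kafka distribution ships a zookeeper-shell.sh wrapper that can query the broker registrations directly. A small sketch, assuming the install path and addresses used throughout this article:

# List the broker ids registered in ZooKeeper; each live broker appears as a
# child node of /brokers/ids.
/opt/kafka_2.12-2.3.0/bin/zookeeper-shell.sh 172.19.26.3:2181 ls /brokers/ids
# Inspect a single broker's registration (host, port, listener protocol):
/opt/kafka_2.12-2.3.0/bin/zookeeper-shell.sh 172.19.26.3:2181 get /brokers/ids/1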
IV. Verify the cluster

4.1 topic

# Create a topic:
[miaocunfa@db1 kafka_2.12-2.3.0]$ bin/kafka-topics.sh --create --zookeeper 172.19.26.3:2181,172.19.26.4:2181,172.19.26.6:2181 --replication-factor 2 --partitions 3 --topic demo_topics
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic demo_topics.

# List all topics:
[miaocunfa@db1 kafka_2.12-2.3.0]$ /opt/kafka_2.12-2.3.0/bin/kafka-topics.sh --list --zookeeper 172.19.26.3:2181,172.19.26.4:2181,172.19.26.6:2181
demo_topics

# View topic details:
[miaocunfa@db1 config]$ /opt/kafka_2.12-2.3.0/bin/kafka-topics.sh --describe --zookeeper 172.19.26.3:2181 --topic demo_topics
Topic:demo_topics  PartitionCount:3  ReplicationFactor:2  Configs:
    Topic: demo_topics  Partition: 0  Leader: 2  Replicas: 1  Isr: 2
    Topic: demo_topics  Partition: 1  Leader: 2  Replicas: 2  Isr: 2
    Topic: demo_topics  Partition: 2  Leader: 2  Replicas: 1  Isr: 2
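Beyond --list and --describe on a single topic, kafka-topics.sh also accepts filter switches that are handy for cluster-wide health checks, for example after restarting a broker. A short sketch, assuming the same install path and addresses as above:

# Show partitions whose in-sync replica set is smaller than the replica set:
/opt/kafka_2.12-2.3.0/bin/kafka-topics.sh --describe --zookeeper 172.19.26.3:2181 --under-replicated-partitions
# Show partitions that currently have no leader and are therefore unavailable:
/opt/kafka_2.12-2.3.0/bin/kafka-topics.sh --describe --zookeeper 172.19.26.3:2181 --unavailable-partitions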
4.2 Production and consumption verification

Ps.
1) If the producer and consumer terminals are open at the same time, any message entered in the producer is immediately consumed by the consumer and printed on its terminal.
2) If a new terminal is opened to consume the same topic, messages that have already been consumed will be consumed again by the new terminal. In other words, a message is not deleted just because it has been consumed. (A sketch illustrating this follows these notes.)
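A minimal sketch illustrating note 2 (the group names below are made up for illustration): consumers in the same group share a topic's partitions between them, while a consumer in a different group started with --from-beginning re-reads the full log, because consuming a message does not delete it.

# Terminal 1: a consumer in group demo_group_a (hypothetical group name).
/opt/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server 172.19.26.3:9092 --topic demo_topics --group demo_group_a
# Terminal 2: a consumer in a different group re-reads everything from the
# start of the log, even messages already consumed by demo_group_a.
/opt/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server 172.19.26.3:9092 --topic demo_topics --group demo_group_b --from-beginning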
4.2.1 The producer sends a message

[miaocunfa@db1 kafka_2.12-2.3.0]$ /opt/kafka_2.12-2.3.0/bin/kafka-console-producer.sh --broker-list 172.19.26.3:9092,172.19.26.4:9092,172.19.26.6:9092 --topic demo_topics
>Hello Kafka!
>

4.2.2 The consumer receives the message

# Start a new terminal and create a consumer to receive the message.
[root@db1 kafka_2.12-2.3.0]# /opt/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server=172.19.26.3:9092,172.19.26.4:9092,172.19.26.6:9092 --topic demo_topics --from-beginning
Hello Kafka!

4.3 Delete the test topic

# delete.topic.enable=true must be set in the configuration file before a
# topic can be deleted.
[miaocunfa@db1 config]$ /opt/kafka_2.12-2.3.0/bin/kafka-topics.sh --delete --zookeeper 172.19.26.3:2181,172.19.26.4:2181,172.19.26.6:2181 --topic demo_topics
Topic demo_topics is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
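To confirm the deletion actually took effect, the same --list call from section 4.1 can be repeated; a sketch:

# demo_topics should no longer appear once all brokers have processed the
# deletion (the topic is first only *marked* for deletion).
/opt/kafka_2.12-2.3.0/bin/kafka-topics.sh --list --zookeeper 172.19.26.3:2181,172.19.26.4:2181,172.19.26.6:2181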