2025-01-30 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains how to install and configure a Kafka cluster. The steps are laid out simply and clearly and are easy to follow; work through them one at a time to learn the installation and configuration method.
1) Extract the installation package:

[mayi@mayi101 software]$ tar -zxvf kafka_2.11-2.4.1.tgz -C /opt/module/

2) Rename the extracted directory:

[mayi@mayi101 module]$ mv kafka_2.11-2.4.1/ kafka

3) Create a logs folder in the /opt/module/kafka directory:

[mayi@mayi101 kafka]$ mkdir logs

4) Modify the configuration file:

[mayi@mayi101 kafka]$ cd config/
[mayi@mayi101 config]$ vi server.properties

Enter the following:

# The broker's globally unique id; must not be duplicated
broker.id=0
# Enable the delete-topic feature
delete.topic.enable=true
# Number of threads handling network requests
num.network.threads=3
# Number of threads handling disk I/O
num.io.threads=8
# Send buffer size of the socket
socket.send.buffer.bytes=102400
# Receive buffer size of the socket
socket.receive.buffer.bytes=102400
# Maximum size of a socket request
socket.request.max.bytes=104857600
# Path where Kafka stores its log data
log.dirs=/opt/module/kafka/logs
# Default number of partitions per topic on this broker
num.partitions=1
# Number of threads used to recover and clean data per data directory
num.recovery.threads.per.data.dir=1
# Maximum time a segment file is retained; segments older than this are deleted
log.retention.hours=168
# Zookeeper cluster connection address (with a /kafka chroot)
zookeeper.connect=mayi101:2181,mayi102:2181,mayi103:2181/kafka

5) Configure environment variables:

[mayi@mayi101 module]$ sudo vi /etc/profile

#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin

[mayi@mayi101 module]$ source /etc/profile

6) Distribute the installation package:

[mayi@mayi101 module]$ xsync kafka/

Note: after distributing, remember to configure the environment variables on mayi102 and mayi103 as well, and change broker.id=1 and broker.id=2 respectively in /opt/module/kafka/config/server.properties on those machines.
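Since every distributed copy still carries broker.id=0, the per-host edit described in the note can be scripted rather than done by hand. A minimal sketch — the set_broker_id helper and the mayi102/mayi103 ssh loop are illustrative assumptions on top of this tutorial, not part of it:

```shell
# set_broker_id FILE ID — rewrite the broker.id line in a server.properties file.
set_broker_id() {
  local file="$1" id="$2"
  # Replace whatever id is currently set with the one passed in.
  sed -i "s/^broker\.id=.*/broker.id=${id}/" "$file"
}

# Usage sketch, run from mayi101 after xsync; assumes passwordless ssh
# to mayi102 and mayi103 (hypothetical loop, uncomment to use):
# id=1
# for host in mayi102 mayi103; do
#   ssh "$host" "sed -i 's/^broker\.id=.*/broker.id=${id}/' /opt/module/kafka/config/server.properties"
#   id=$((id + 1))
# done
```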
Note: broker.id must never be repeated.

7) Start Kafka on the mayi101, mayi102, and mayi103 nodes:

[mayi@mayi101 kafka]$ kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
[mayi@mayi102 kafka]$ kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
[mayi@mayi103 kafka]$ kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties

8) Stop the cluster:

[mayi@mayi101 kafka]$ bin/kafka-server-stop.sh
[mayi@mayi102 kafka]$ bin/kafka-server-stop.sh
[mayi@mayi103 kafka]$ bin/kafka-server-stop.sh

9) Kafka group start script:

#!/bin/bash
for i in `cat /opt/software/hadoop-2.9.2/etc/hadoop/slaves`
do
  echo "========== $i =========="
  ssh $i 'source /home/mayi/.bash_profile && /opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties'
  echo $?
done

10) Kafka group stop script:

#!/bin/bash
for i in `cat /opt/software/hadoop-2.9.2/etc/hadoop/slaves`
do
  echo "========== $i =========="
  ssh $i 'source /home/mayi/.bash_profile && /opt/module/kafka/bin/kafka-server-stop.sh'
  echo $?
done

Thank you for reading. That covers the kafka cluster installation and configuration method; after working through this article, you should have a deeper understanding of how to install and configure a Kafka cluster, though the specifics still need to be verified in practice.
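After starting the cluster, it helps to confirm that a Kafka process is actually running on every node. A hedged sketch: the list_zk_hosts helper below is an illustrative addition that parses the host list out of the zookeeper.connect line (in this tutorial Zookeeper and Kafka are co-located on mayi101–mayi103, so the same hosts serve for the broker check); the ssh/jps loop is left commented out since it assumes passwordless ssh from mayi101:

```shell
# list_zk_hosts FILE — print each host named in the zookeeper.connect line,
# one per line, dropping the ports and the optional /kafka chroot suffix.
list_zk_hosts() {
  grep '^zookeeper.connect=' "$1" \
    | sed -e 's/^zookeeper.connect=//' -e 's|/.*$||' \
    | tr ',' '\n' \
    | cut -d: -f1
}

# Usage sketch (hypothetical check loop, uncomment to use):
# for h in $(list_zk_hosts /opt/module/kafka/config/server.properties); do
#   ssh "$h" 'jps | grep -q Kafka && echo "Kafka up" || echo "Kafka DOWN"'
# done
```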
© 2024 shulou.com SLNews company. All rights reserved.