2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains the installation process of a Kafka cluster. The content is simple and clear and easy to learn; follow along step by step.
1. Download the official Kafka release
http://kafka.apache.org/downloads.html
2. Extract the tar package on Linux
tar -xzf kafka_2.9.2-0.8.1.1.tgz
3. Modify server.properties
Edit config/server.properties under the Kafka root directory.
The four main parameters to modify are:
broker.id=0 // the broker's ID (a non-negative integer); must be unique across the cluster
port=9092 // listening port; ports on a single node must not clash (for convenience, it is best not to reuse ports across nodes either)
log.dir=/tmp // where the data files (message logs) are stored
zookeeper.connect=dn1:2181,dn2:2181,dn3:2181
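Putting the four settings above together, a minimal server.properties for the first node (dn1) could look like the following sketch; broker.id is the only line that must differ on dn2 (broker.id=1) and dn3 (broker.id=2):

```properties
# config/server.properties on dn1 (illustrative values)
broker.id=0
port=9092
log.dir=/tmp
zookeeper.connect=dn1:2181,dn2:2181,dn3:2181
```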
4. Configure producer.properties
# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2...
metadata.broker.list=dn1:9092,dn2:9092,dn3:9092
5. Configure consumer.properties
# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=dn1:2181,dn2:2181,dn3:2181
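A minimal consumer.properties combines the ZooKeeper connection string with a consumer group; the group name below is illustrative (it matches the default shipped with Kafka 0.8):

```properties
# config/consumer.properties (illustrative)
zookeeper.connect=dn1:2181,dn2:2181,dn3:2181
group.id=test-consumer-group
```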
6. Verify that the installation is successful
Start server
> bin/zookeeper-server-start.sh config/zookeeper.properties
> bin/kafka-server-start.sh config/server.properties
The first command starts the ZooKeeper instance that ships with Kafka; it can be skipped if the cluster already runs its own ZooKeeper ensemble.
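If the broker and ZooKeeper are launched from the same script, the broker can come up before ZooKeeper is ready to accept connections. A small hedged sketch (hostnames and retry counts are illustrative) that waits for a TCP port before starting the broker:

```shell
# wait_for_port HOST PORT [TRIES]: poll until the port accepts connections
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # bash's /dev/tcp pseudo-device attempts a TCP connection
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# usage (illustrative):
# wait_for_port dn1 2181 && bin/kafka-server-start.sh config/server.properties
```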
Create a topic
> bin/kafka-topics.sh --create --zookeeper dn1:2181,dn2:2181,dn3:2181 --replication-factor 3 --partitions 1 --topic test1
> bin/kafka-topics.sh --describe --zookeeper dn1:2181,dn2:2181,dn3:2181 --topic test1
Send some messages
> bin/kafka-console-producer.sh --broker-list dn1:9092,dn2:9092,dn3:9092 --topic test1
Start a consumer
> bin/kafka-console-consumer.sh --zookeeper dn1:2181,dn2:2181,dn3:2181 --from-beginning --topic test1
kafka-console-producer.sh and kafka-console-consumer.sh are just command-line tools shipped with Kafka. Starting them here only verifies that messages can be produced and consumed normally, confirming the installation is correct; in real development you would write your own producers and consumers.
7. Distribute to kafka cluster nodes
Copy the Kafka installation files to each node and adjust the parameters described in step 3 (at minimum, broker.id).
Start the Kafka service on each node:
nohup bin/kafka-server-start.sh config/server.properties > output 2>&1 &
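Rather than editing server.properties by hand on every node, the per-node files can be generated from a template. This runnable sketch (file names, placeholder token, and values are illustrative) patches only broker.id per node:

```shell
# Template with a placeholder for the per-node broker ID (values illustrative)
cat > server.properties.template <<'EOF'
broker.id=__BROKER_ID__
port=9092
log.dir=/tmp
zookeeper.connect=dn1:2181,dn2:2181,dn3:2181
EOF

# Generate one properties file per broker: dn1 -> id 0, dn2 -> 1, dn3 -> 2
for id in 0 1 2; do
  sed "s/__BROKER_ID__/$id/" server.properties.template > "server-$id.properties"
  # then copy it into place on the matching node, e.g. (hypothetical paths):
  # scp "server-$id.properties" "dn$((id + 1)):/opt/kafka/config/server.properties"
done

grep '^broker.id=' server-*.properties
```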
For the specific usage of Kafka, please refer to the official website.
http://kafka.apache.org/documentation.html
8.Web UI installation
8.1 Kafka Web Console UI
Download: https://github.com/claudemamo/kafka-web-console
Build: currently only sbt is supported. Command: sbt dist (produces a zip package for easy deployment and startup).
Unzip:
unzip kafka-web-console-2.1.0-SNAPSHOT.zip
cd kafka-web-console-2.1.0-SNAPSHOT/bin
Add a parameter when starting for the first time:
./kafka-web-console -DapplyEvolutions.default=true
The default port is 9000. To change it, start as follows:
./kafka-web-console -DapplyEvolutions.default=true -Dhttp.port=19000
Otherwise, an error will be reported:
[warn] play - Run with -DapplyEvolutions.default=true if you want to run them automatically (be careful)
Oops, cannot start the server.
@6k1jkg3be: Database 'default' needs evolution!
at play.api.db.evolutions.EvolutionsPlugin$$anonfun$onStart$1$$anonfun$apply$1.apply$mcV$sp(Evolutions.scala:484)
View help and background running:
./kafka-web-console -h
nohup ./kafka-web-console > /dev/null 2>&1 &
Web address: http://localhost:9000/
Suggestion: this third-party web UI is intuitive and easy to use.
8.2 KafkaOffsetMonitor UI
Download the package:
https://github.com/quantifind/KafkaOffsetMonitor/releases/tag/v0.2.0
Official website:
http://quantifind.github.io/KafkaOffsetMonitor/
Build: a prebuilt jar is provided, so there is no need to compile it yourself.
Run command:
java -cp KafkaOffsetMonitor-assembly-0.2.0.jar \
com.quantifind.kafka.offsetapp.OffsetGetterWeb \
--zk zk-01,zk-02 \
--port 8080 \
--refresh 5.minutes \
--retain 1.day
Web address: http://localhost:8080/ (the --port value above)
8.3 Kafka Manager UI
Download: https://github.com/yahoo/kafka-manager
Yahoo's Kafka management interface.
Download the source and build with: sbt clean dist (produces a zip package).
Unzip it and it is ready to use.
Run the command:
./kafka-manager -Dconfig.file=../conf/application.conf -Dhttp.port=8080
Web address: http://localhost:8080/ (the -Dhttp.port value above)
Common Kafka commands
Create topic
./kafka-topics.sh --create --zookeeper 192.168.153.128:2181 --replication-factor 1 --partitions 1 --topic test123
View topic information
./kafka-topics.sh --describe --topic test123 --zookeeper 192.168.153.128:2181
Modify topic Partition
./kafka-topics.sh --alter --topic test123 --partitions 2 --zookeeper 192.168.153.128:2181
Delete topic
./kafka-run-class.sh kafka.admin.DeleteTopicCommand --topic test1 --zookeeper 192.168.35.122:2181,192.168.35.123:2181
This only deletes the topic's metadata in ZooKeeper; the data files must be removed manually on each broker.
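The manual cleanup amounts to removing the topic's partition directories under log.dir on every broker. A runnable sketch, with the directory layout simulated locally (on a real node, point LOG_DIR at the log.dir value from server.properties):

```shell
TOPIC=test1
LOG_DIR=/tmp/kafka-logs-demo   # on a real broker: the log.dir value, e.g. /tmp

# demo setup: fake the partition directories a broker would have created
mkdir -p "$LOG_DIR/$TOPIC-0" "$LOG_DIR/$TOPIC-1"

# the actual cleanup step: delete every partition directory of the topic
rm -rf "$LOG_DIR/$TOPIC"-*

ls "$LOG_DIR"
```

Run this on each broker in the cluster, since every replica holds its own copy of the partition data.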
Thank you for reading. That covers the installation process of a Kafka cluster; after studying this article you should have a deeper understanding of it, though the specifics still need to be verified in practice.