2025-01-17 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report --
This article gives a detailed walkthrough of how to deploy a Kafka cluster and should serve as a useful reference. Interested readers are encouraged to read it through to the end!
1. Kafka Cluster Deployment
Since a Kafka cluster depends on a ZooKeeper cluster, we need to build the ZK cluster in advance (it can be separate from the Kafka cluster). Here, Kafka and ZK are not installed on the same machines; instead, the ZK ensemble from the Hadoop cluster is used directly.
Server list:
Kafka cluster: slave4, slave5, slave6
ZooKeeper cluster: master1, master2, slave1
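This article reuses the Hadoop cluster's existing ZooKeeper ensemble rather than installing one alongside Kafka. For reference, a minimal zoo.cfg for such a three-node ensemble might look like the sketch below; the tickTime/initLimit/syncLimit values and the peer/election ports (2888/3888) are conventional defaults assumed here, not taken from the article:

```properties
# zoo.cfg -- minimal 3-node ensemble sketch (timing values and ports are assumptions)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/data/zk
clientPort=2181
# one line per ensemble member: server.<myid>=<host>:<peer-port>:<election-port>
server.1=master1:2888:3888
server.2=master2:2888:3888
server.3=slave1:2888:3888
```

Each host would also need a matching myid file (containing 1, 2, or 3) under dataDir.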
First, extract the downloaded Kafka installation package kafka_2.9.1-0.8.2.1 with the following commands.
Extract Kafka on slave4:
[hadoop@slave4 ~]$ tar -zxvf kafka_2.9.1-0.8.2.1.tgz
Go to the Kafka extraction directory:
[hadoop@slave4 ~]$ cd kafka_2.9.1-0.8.2.1
Configure environment variables:
[hadoop@slave4 ~]$ vi /etc/profile
export KAFKA_HOME=/home/hadoop/kafka_2.9.1-0.8.2.1
export PATH=$PATH:$KAFKA_HOME/bin
Configure Kafka's zookeeper.properties:
# the directory where the snapshot is stored.
dataDir=/home/hadoop/data/zk
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
Configure server.properties:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk server
zookeeper.connect=master1:2181,master2:2181,slave1:2181
Note: when configuring the brokers, the broker.id on each machine must be unique, starting at 0. For example, set broker.id=1 and broker.id=2 on the other two machines.
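The per-host assignment described in the note can be sketched as a small shell helper. This is only an illustration: the function name broker_id_for is hypothetical, and the id scheme (0, 1, 2 by position in the list) simply follows the note above; the host names match the article's broker nodes.

```shell
#!/bin/sh
# Sketch: derive a unique broker.id from a host's position in the broker list.
BROKERS="slave4 slave5 slave6"

broker_id_for() {
    host="$1"
    id=0
    for b in $BROKERS; do
        if [ "$b" = "$host" ]; then
            echo "$id"
            return 0
        fi
        id=$((id + 1))
    done
    return 1   # host not found in the broker list
}

# Print the server.properties line each node should carry.
for h in $BROKERS; do
    echo "$h -> broker.id=$(broker_id_for "$h")"
done
```

Run on each node (or fed into sed against server.properties), this avoids hand-editing the id on every machine.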
Configure producer.properties:
# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2...
metadata.broker.list=slave4:9092,slave5:9092,slave6:9092
Configure consumer.properties:
# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=master1:2181,master2:2181,slave1:2181
At this point, the Kafka cluster deployment is complete.
2. A Simple Kafka Cluster Test
Startup
First, before starting the Kafka cluster services, make sure our ZK cluster is running. Then start the Kafka service with the following command:
[hadoop@slave4 kafka_2.9.1-0.8.2.1]$ kafka-server-start.sh config/server.properties &
Note: start the other two nodes in the same way. Also, as the other nodes come up, the first node started will log records of the other brokers joining the cluster.
Verify the startup processes:
[hadoop@slave4 kafka_2.9.1-0.8.2.1]$ jps
2049 QuorumPeerMain
2184 Kafka
2233 Jps
Create a Topic
After the services start, we create a Topic with the following command:
[hadoop@slave4]$ kafka-topics.sh --create --zookeeper master1:2181,master2:2181,slave1:2181 --replication-factor 3 --partitions 1 --topic test1
We can view the information about the Topic, and the command is as follows:
[hadoop@slave4]$ kafka-topics.sh --zookeeper master1:2181,master2:2181,slave1:2181 --topic test1 --describe
Let's explain this output. The first line is a summary of all the partitions; each subsequent line describes one partition. Since our topic has only one partition, there is only one additional line.
Leader: the node responsible for message reads and writes for the partition; the Leader is randomly selected from all the nodes.
Replicas: lists all replica nodes, whether or not they are currently in service.
Isr: the replica nodes currently in sync and in service.
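To make the per-partition line concrete, the sketch below pulls the Leader and Isr fields out of --describe output with awk. The sample text is illustrative (it assumes the shape of 0.8.x output for a 3-replica, 1-partition topic), not captured from the article's cluster:

```shell
#!/bin/sh
# Sketch: extract Leader and Isr from `kafka-topics.sh --describe`-style output.
# The sample below is an assumed example, not real cluster output.
describe_output='Topic:test1	PartitionCount:1	ReplicationFactor:3	Configs:
	Topic: test1	Partition: 0	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2'

summary=$(echo "$describe_output" | awk '
/Partition: / {
    # scan fields for the "Leader:" and "Isr:" labels and grab the next field
    for (i = 1; i <= NF; i++) {
        if ($i == "Leader:") leader = $(i + 1)
        if ($i == "Isr:")    isr    = $(i + 1)
    }
    printf "leader=%s isr=%s\n", leader, isr
}')
echo "$summary"
```

In practice you would pipe the live kafka-topics.sh output into the same awk program instead of the sample variable.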
Production message
Let's use Kafka's producer to produce some messages and then have Kafka's consumer consume them. The command is as follows:
[hadoop@slave4]$ kafka-console-producer.sh --broker-list slave4:9092,slave5:9092,slave6:9092 --topic test1
Consumption message
Next, we start a consumer process on another node to consume these messages, with the following command:
[hadoop@slave5]$ kafka-console-consumer.sh --zookeeper master1:2181,master2:2181,slave1:2181 --from-beginning --topic test1
The consumption record is shown in the following figure:
3. HA Characteristics
From the screenshot information above, we can see that the Kafka broker on slave4 is the Leader. Let's kill the Kafka service on the slave4 node first:
[hadoop@slave4 config]$ jps
2049 QuorumPeerMain
2375 Jps
2184 Kafka
[hadoop@slave4 config]$ kill -9 2184
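Rather than reading the PID off the screen, the kill step can be scripted. This sketch parses jps-style output to find the Kafka broker's PID; the sample text mirrors the listing above, and in practice you would substitute the live `jps` output:

```shell
#!/bin/sh
# Sketch: extract the Kafka broker's PID from jps output so it can be killed.
# Sample mirrors the article's listing; replace with `jps` in real use.
jps_output='2049 QuorumPeerMain
2375 Jps
2184 Kafka'

kafka_pid=$(echo "$jps_output" | awk '$2 == "Kafka" {print $1}')
echo "$kafka_pid"
# then, to simulate the failure: kill -9 "$kafka_pid"
```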
Then, the other nodes immediately elect a new Leader, as shown in the following figure:
Next, let's test the production and consumption of messages, as shown in the following figure:
Production message
The test shows that Kafka's HA feature works well and that it has a solid fault-tolerance mechanism.
That is the full content of "How to Deploy a Kafka Cluster". Thank you for reading, and we hope it has been helpful!