Zookeeper cluster building
The Kafka cluster stores its state in ZooKeeper, so the first step is to set up the ZooKeeper cluster.
1. Install the JDK

wget http://xxxxx.oss-cn-xxxx.aliyuncs.com/xxxx/jdk-8u171-linux-x64.rpm
yum localinstall jdk-8u171-linux-x64.rpm -y

2. Download the Kafka installation package

wget http://xxx-xx.oss-cn-xxx.aliyuncs.com/xxx/kafka_2.12-1.1.0.tgz

Official website download link: http://kafka.apache.org/downloads
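Before moving on, a quick check that the JDK from step 1 installed correctly (the exact build string may differ):

java -version
# expect output mentioning version "1.8.0_171"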
Decompress Kafka:

tar -zxvf kafka_2.12-1.1.0.tgz
mv kafka_2.12-1.1.0 kafka
3. Configure the zk cluster by modifying the zookeeper.properties file

Use the ZooKeeper bundled with Kafka to build the zk cluster directly.

cd /data/kafka
vim config/zookeeper.properties

The meanings of the main parameters in this file:
# tickTime: the basic time unit used as the heartbeat interval between ZooKeeper servers, or between clients and servers; a heartbeat is sent every tickTime.
# initLimit: the maximum number of heartbeat intervals (tickTimes) a connecting member may take to initialize its connection to the Leader (the "client" here is a Follower server in the ensemble, not a user client). If the Leader has received no response after more than 5 heartbeats, the connection attempt fails. The total time is 5 * 2000 ms = 10 seconds.
# syncLimit: the maximum number of tickTimes allowed for a request and reply exchanged between Leader and Follower. The total time is 5 * 2000 ms = 10 seconds.
# dataDir: storage path for snapshot logs.
# dataLogDir: storage path for transaction logs; it must be created manually. If it is not configured, transaction logs are stored under dataDir by default, which seriously hurts zk performance: under high throughput, too many transaction logs and snapshot logs pile up in one place.
# clientPort: the port on which ZooKeeper listens for and accepts client connection requests.
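Putting these parameters together, here is a minimal sketch of what zookeeper.properties could look like for this three-node setup. The hostnames kafka1/kafka2/kafka3 (borrowed from the commands later in this article) and the quorum/election ports 2888:3888 are assumptions, not values given in the original text:

tickTime=2000
initLimit=5
syncLimit=5
# paths matching the cleanup script and myid commands below
dataDir=/data/kafka/zk
dataLogDir=/data/kafka/log/zk
clientPort=2181
# ensemble members (assumed hostnames; 2888 is the quorum port, 3888 the election port)
server.1=kafka1:2888:3888
server.2=kafka2:2888:3888
server.3=kafka3:2888:3888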
Create a myid file

Go to the dataDir directory and write a myid file on each of the three servers, containing 1, 2 and 3 respectively. myid is the identity the zk cluster members use to discover each other; it must be created and must be different on every server.

echo "1" > /data/kafka/zk/myid    # on server 1
echo "2" > /data/kafka/zk/myid    # on server 2
echo "3" > /data/kafka/zk/myid    # on server 3
Note

ZooKeeper does not clean up old snapshot and log files on its own; they need to be cleaned up periodically, for example with the script below.
#!/bin/bash
# snapshot file dir
dataDir=/data/kafka/zk/version-2
# tran log dir
dataLogDir=/data/kafka/log/zk/version-2
# keep the newest 66 files
count=66
count=$[$count+1]
ls -t $dataLogDir/log.* | tail -n +$count | xargs rm -f
ls -t $dataDir/snapshot.* | tail -n +$count | xargs rm -f

This script deletes files in the two directories above while keeping the newest 66 in each. It can be written into crontab and set to run once a day at 2:00 a.m.
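Assuming the script above is saved as /data/kafka/clean_zk_logs.sh (a hypothetical path) and made executable, the corresponding crontab entry would be:

# run the cleanup once a day at 2:00 a.m.
0 2 * * * /data/kafka/clean_zk_logs.sh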
4. Start the zk service

Go to the kafka directory and start ZooKeeper:

cd /data/kafka
nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties > logs/zookeeper.log 2>&1 &
If no error is reported and jps shows the zk process, the startup succeeded.
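As a quick sanity check, the bundled ZooKeeper appears in jps as QuorumPeerMain, and the zookeeper-shell.sh script shipped with Kafka can query it (assuming clientPort 2181; output will vary):

jps
# 12345 QuorumPeerMain    <- the zk process (the PID will differ)

# list the root znodes to confirm zk is answering
./bin/zookeeper-shell.sh localhost:2181 ls /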
Kafka cluster building

1. Modify the server.properties configuration file
vim config/server.properties
Meaning of some parameters:
listeners: set this on each host first, otherwise consuming messages will fail later.
broker.id: must be different on every broker.
num.network.threads: set to the number of CPU cores.
num.partitions: the number of partitions; set it according to your situation.
default.replication.factor: the number of replicas Kafka keeps of each message; if one replica fails, another can continue to provide service.
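As a reference, a minimal sketch of server.properties for the first broker; the hostnames, thread count, and partition count are illustrative assumptions, while the paths match the directories used elsewhere in this article:

broker.id=1
listeners=PLAINTEXT://kafka1:9092
num.network.threads=4
num.partitions=3
default.replication.factor=2
log.dirs=/data/kafka/log/kafka
zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181

On the other two brokers, broker.id and the listeners hostname change accordingly.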
2. Start the kafka cluster

nohup ./bin/kafka-server-start.sh config/server.properties > logs/kafka.log 2>&1 &

Run jps to check that the Kafka process is up.
3. Create a topic to verify

./bin/kafka-topics.sh --create --zookeeper kafka1:2181,kafka2:2181,kafka3:2181 --replication-factor 2 --partitions 1 --topic test1

--replication-factor 2   # keep two copies
--partitions 1           # create one partition
--topic test1            # the topic name

4. Create a producer and a consumer

# simulate a client sending messages: the producer
./bin/kafka-console-producer.sh --broker-list kafka1:9092,kafka2:9092,kafka3:9092 --topic test1

# simulate a client receiving messages: the consumer
./bin/kafka-console-consumer.sh --zookeeper kafka1:2181,kafka2:2181,kafka3:2181 --from-beginning --topic test1

Then type anything at the producer and watch the content appear on the consumer side.

5. Other commands

# show all created topics
./bin/kafka-topics.sh --list --zookeeper xxxx:2181

# describe a topic
./bin/kafka-topics.sh --describe --zookeeper xxxx:2181 --topic test1
# Topic:test1 PartitionCount:1 ReplicationFactor:2 Configs:
# Topic: test1 Partition: 0 Leader: 1 Replicas: 0,1 Isr: 0,1
# Leader is the broker currently serving partition 0; Replicas and Isr list the brokers holding in-sync copies.

6. Delete topic
Modify the configuration file server.properties to add the following configuration:
delete.topic.enable=true
Restart kafka and zookeeper after configuration.
If you do not want to modify the configuration file, delete the topic and its related data directories directly:
# delete the kafka topic
./bin/kafka-topics.sh --delete --zookeeper xxxx:2181,xxxx:2181 --topic test1

# delete the kafka-related data directories
rm -rf /data/kafka/log/kafka/test*

# delete the zookeeper-related paths
rm -rf /data/kafka/zk/test*
rm -rf /data/kafka/log/zk/test*
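To confirm the deletion took effect, the list command from step 5 can be reused; note that if delete.topic.enable is not true, the topic may only be marked for deletion rather than removed:

# test1 should no longer appear in the list
./bin/kafka-topics.sh --list --zookeeper xxxx:2181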