I. Environment configuration
1. System environment
[root@date ~]# cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
2. Install the Java environment
yum -y install java java-1.8.0-openjdk-devel    # jps requires jdk-devel support
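Optionally, confirm the tools are in place before continuing (both commands come from the packages just installed):

java -version    # should report an OpenJDK 1.8.0 build
jps              # ships with java-1.8.0-openjdk-devel; used below to verify daemons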
3. Download kafka
[root@slave1 ~]# ls kafka_2.11-1.0.0.tgz
kafka_2.11-1.0.0.tgz
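The extraction step did not survive in the capture. The later steps all assume /opt/kafka, so a minimal unpack consistent with that path (an assumption, not from the original) would be:

tar -xzf kafka_2.11-1.0.0.tgz -C /opt/    # unpack the distribution
mv /opt/kafka_2.11-1.0.0 /opt/kafka       # match the /opt/kafka path used below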
II. Configure zookeeper
1. Modify the zookeeper configuration file shipped with kafka
[root@slave1 ~]# cat /opt/kafka/config/zookeeper.properties | grep -v "^$" | grep -v "^#"
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=300
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/log
clientPort=2181
server.1=20.0.5.11:2888:3888
server.2=20.0.5.12:2888:3888
server.3=20.0.5.13:2888:3888
2. Copy the configuration file to the other nodes
[root@slave1 ~]# pscp.pssh -h zlist /opt/kafka/config/zookeeper.properties /opt/kafka/config/
[root@slave1 ~]# cat zlist
20.0.5.11
20.0.5.12
20.0.5.13
3. Create data and log directories on each node
[root@slave1 ~]# pssh -h zlist 'mkdir /opt/zookeeper/data'
[root@slave1 ~]# pssh -h zlist 'mkdir /opt/zookeeper/log'
4. Create the myid files
[root@slave1 ~]# pssh -H slave1 -i 'echo 1 > /opt/zookeeper/data/myid'
[root@slave1 ~]# pssh -H slave2 -i 'echo 2 > /opt/zookeeper/data/myid'
[root@slave1 ~]# pssh -H slave3 -i 'echo 3 > /opt/zookeeper/data/myid'
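Equivalently, the three myid files can be written in one loop. This sketch assumes the zlist file above, passwordless ssh as root, and that line N of zlist corresponds to server.N in zookeeper.properties:

n=1
for host in $(cat zlist); do
    ssh root@"$host" "echo $n > /opt/zookeeper/data/myid"   # myid must match the server.N id
    n=$((n+1))
done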
5. Start zookeeper
[root@slave1 ~]# pssh -h zlist 'nohup /opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties &'
[root@slave1 ~]# pssh -h zlist -i 'jps'
[1] 23:29:36 [SUCCESS] 20.0.5.12
3492 QuorumPeerMain
3898 Jps
[2] 23:29:36 [SUCCESS] 20.0.5.11
7369 QuorumPeerMain
9884 Jps
[3] 23:29:36 [SUCCESS] 20.0.5.13
3490 QuorumPeerMain
3898 Jps
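To confirm the ensemble actually formed a quorum (one leader, two followers), zookeeper's stat four-letter command can be queried on the client port; a sketch assuming nc is installed on the admin host:

for host in $(cat zlist); do
    echo -n "$host: "
    echo stat | nc "$host" 2181 | grep Mode    # prints Mode: leader or Mode: follower
done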
III. Configure kafka
1. Modify server.properties
[root@slave1 ~]# cat /opt/kafka/config/server.properties
broker.id=1                              # unique id of this broker
host.name=20.0.5.11                      # IP of the current broker machine
port=9092                                # port the broker listens on
num.network.threads=3                    # threads the server uses to accept and answer requests
num.io.threads=8                         # number of IO threads
socket.send.buffer.bytes=102400          # send buffer size; data is buffered and sent once it reaches this size
socket.receive.buffer.bytes=102400       # receive buffer size; flushed to disk once it reaches this size
socket.request.max.bytes=104857600       # maximum size of a request sent to or fetched from kafka
log.dirs=/opt/kafkalog                   # message storage directory
delete.topic.enable=true                 # allow topics to be deleted by command
num.partitions=1                         # default number of partitions; a topic defaults to 1 partition
num.recovery.threads.per.data.dir=1      # threads used to recover and clean timed-out data
offsets.topic.replication.factor=3       # replication factor of the internal topic that records offsets
log.retention.hours=168                  # message retention time
log.segment.bytes=1073741824             # maximum log segment size; a new segment is created beyond it
log.retention.check.interval.ms=300000   # interval between log retention checks
zookeeper.connect=20.0.5.11:2181,20.0.5.12:2181,20.0.5.13:2181   # zookeeper addresses
zookeeper.connection.timeout.ms=6000     # zookeeper connection timeout
2. Copy the broker configuration file to the other service nodes (modify broker.id and host.name on each), e.g. as sketched below
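A sketch of this step, assuming the zlist host file and root ssh access used earlier; the two per-node values are rewritten with sed after copying:

id=1
for host in $(cat zlist); do
    scp /opt/kafka/config/server.properties root@"$host":/opt/kafka/config/
    ssh root@"$host" "sed -i 's/^broker.id=.*/broker.id=$id/; s/^host.name=.*/host.name=$host/' /opt/kafka/config/server.properties"
    id=$((id+1))
done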
3. Start the kafka brokers
[root@slave1 ~]# pssh -h zlist '/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties > /root/kafka.log 2>&1 &'
[1] 02:13:05 [SUCCESS] 20.0.5.11
[2] 02:13:05 [SUCCESS] 20.0.5.12
[3] 02:13:05 [SUCCESS] 20.0.5.13
[root@slave1 ~]# pssh -h zlist -i 'jps'
[1] 02:14:51 [SUCCESS] 20.0.5.12
3492 QuorumPeerMain
6740 Jps
6414 Kafka
[2] 02:14:51 [SUCCESS] 20.0.5.13
3490 QuorumPeerMain
4972 Kafka
5293 Jps
[3] 02:14:51 [SUCCESS] 20.0.5.11
7369 QuorumPeerMain
11534 Kafka
11870 Jps
4. Create a topic
[root@slave1 ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 20.0.5.11:2181,20.0.5.12:2181 --replication-factor 3 --partitions 3 --topic test1
Created topic "test1".
[root@slave1 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 20.0.5.11:2181,20.0.5.12:2181,20.0.5.13:2181 --topic test1
Topic:test1    PartitionCount:3    ReplicationFactor:3    Configs:
    Topic: test1    Partition: 0    Leader: 2    Replicas: 2,3,1    Isr: 2,3,1
    Topic: test1    Partition: 1    Leader: 3    Replicas: 3,1,2    Isr: 3,1,2
    Topic: test1    Partition: 2    Leader: 1    Replicas: 1,2,3    Isr: 1,2,3
In the describe output, Leader is the broker id currently serving the partition, Replicas lists all brokers assigned to it, and Isr is the subset of replicas currently in sync with the leader.
5. Start a producer and a consumer
[root@slave4 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list 20.0.5.12:9092 --topic test1
[root@slave5 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 20.0.5.13:2181 --topic test1
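For a non-interactive smoke test (a sketch, assuming the cluster above is running), a message can be piped into the producer and read back once:

echo "hello kafka" | /opt/kafka/bin/kafka-console-producer.sh --broker-list 20.0.5.12:9092 --topic test1
/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 20.0.5.13:2181 --topic test1 --from-beginning --max-messages 1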
6. Delete topic
[root@slave1 ~]# /opt/kafka/bin/kafka-topics.sh --delete --zookeeper 20.0.5.11:2181 --topic test1
Topic test1 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
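Since delete.topic.enable=true was set in server.properties above, the deletion should go through rather than stay only "marked"; listing the topics confirms it is gone:

/opt/kafka/bin/kafka-topics.sh --list --zookeeper 20.0.5.11:2181   # test1 should no longer appear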