1. Install the JDK
Refer to the JDK installation tutorial; the steps are omitted here.
2. Install Zookeeper
Refer to the "Fully Distributed" section of my Zookeeper installation tutorial.
3. Install Kafka
Refer to the "Fully Distributed Build" section of my Kafka installation tutorial.
4. Install Flume
Refer to my Flume installation tutorial.
5. Configure Flume
5.1. Configure kafka-s.cfg
$ cd /software/flume/conf/    # switch to Flume's configuration directory
$ cp flume-conf.properties.template kafka-s.cfg    # copy Flume's configuration template to kafka-s.cfg
The contents of kafka-s.cfg are as follows:
# Configure the source, channel, and sink of the Flume agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# Configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /tmp/logs/kafka.log
# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Configure the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
# Set the Kafka topic
a1.sinks.k1.kafka.topic = mytest
# Set the Kafka broker addresses and ports
a1.sinks.k1.kafka.bootstrap.servers = s201:9092,s202:9092,s203:9092
# Set the number of messages submitted per batch
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Three things to note about the above configuration file:
a. a1.sources.r1.command = tail -F /tmp/logs/kafka.log
b. a1.sinks.k1.kafka.bootstrap.servers = s201:9092,s202:9092,s203:9092
c. a1.sinks.k1.kafka.topic = mytest
From the configuration file we can see that:
1) We need to create a kafka.log file under /tmp/logs and write content into it (discussed below);
2) The Kafka address Flume connects to is s201:9092,s202:9092,s203:9092; be careful not to configure it incorrectly;
3) Flume outputs the collected content to the Kafka topic mytest, so after starting ZooKeeper and Kafka we need to open a terminal that consumes the topic mytest; this lets us observe the interplay between Flume and Kafka.
5.2. Create /tmp/logs/kafka.log
Create an empty file kafka.log under /tmp/logs; if there is no logs directory under /tmp, create the logs directory first.
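For example (a minimal sketch; the paths match the configuration above):
$ mkdir -p /tmp/logs           # create the logs directory if it does not already exist
$ touch /tmp/logs/kafka.log    # create the empty log file that Flume will tail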
5.3. Create a shell script that generates log data
Create the kafkaoutput.sh script in the hadoop user's home directory, give it execute permission, and have it write output to /tmp/logs/kafka.log.
The kafkaoutput.sh script reads as follows:
#!/bin/bash
# Append numbered test messages to the log file that Flume tails.
# The loop bound and message text here are illustrative.
for ((i = 0; i <= 1000; i++)); do
    echo "kafka_test-$i" >> /tmp/logs/kafka.log
done
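To grant the execute permission mentioned above:
$ chmod +x kafkaoutput.sh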
5.4. Start Zookeeper
Start the ZooKeeper service on each server where ZooKeeper is installed, with the following command:
$ zkServer.sh start
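You can then check each node's role (a standard ZooKeeper admin command; the output differs per node):
$ zkServer.sh status    # expect Mode: leader on one node and Mode: follower on the others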
5.5. Start Kafka
Start the Kafka cluster on each server where Kafka is deployed:
$ kafka-server-start.sh /software/kafka/config/server.properties &
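To verify that a broker is up, you can look for its JVM process (jps ships with the JDK):
$ jps | grep Kafka    # a running broker appears as a Kafka process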
5.6. Creating a Kafka Topic
$ kafka-topics.sh --create --zookeeper s201:2181 --replication-factor 3 --partitions 3 --topic mytest
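To confirm the topic was created with the expected partitions and replicas:
$ kafka-topics.sh --describe --zookeeper s201:2181 --topic mytest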
5.7. Start Consumption Topic
$ kafka-console-consumer.sh --bootstrap-server s201:9092,s202:9092,s203:9092 --topic mytest --from-beginning
Note: --bootstrap-server and --zookeeper are mutually exclusive for the console consumer; only very old Kafka versions used --zookeeper s201:2181,s202:2181,s203:2181 instead.
5.8. Initiate Flume
$ flume-ng agent --conf /software/flume/conf/ --conf-file kafka-s.cfg --name a1 -Dflume.root.logger=INFO,console
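This runs the agent in the foreground and logs to the console. To keep it running in the background instead (plain nohup usage, nothing Flume-specific):
$ nohup flume-ng agent --conf /software/flume/conf/ --conf-file kafka-s.cfg --name a1 > /tmp/flume.log 2>&1 &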
5.9. Execute the kafkaoutput.sh script to generate log data
$ ./kafkaoutput.sh
View the log file contents; the same messages should also appear in the Kafka console consumer started in step 5.7, confirming that Flume is forwarding the log to Kafka.
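For example, a quick spot-check on the Flume side (the message format assumes the illustrative script above):
$ tail -n 5 /tmp/logs/kafka.log    # should show the most recent kafka_test-* lines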