2025-01-17 Update From: SLTechnology News&Howtos
This article explains how to set up a simple Flume cluster. The steps are straightforward and easy to follow; work through them slowly to understand how the pieces fit together.
Strictly speaking, Flume has no built-in cluster concept: each Flume process is an independent agent. A common topology is to have agents collect data and forward it to a collector agent, which then writes the aggregated data to storage.
I have two virtual machines: one called master, which acts as the Flume agent, and one called slave1, which acts as the Flume collector. The agent connects to the collector and sends it logs, and the collector writes the logs to HDFS.
Both virtual machines already have the JDK and Hadoop installed.
1. Unpack and install
tar -zxvf apache-flume-1.6.0-bin.tar.gz
2. Configure environment variables in /etc/profile on each machine
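As a sketch, the /etc/profile additions might look like this. The install path follows step 1; the variable name FLUME_HOME is a common convention, not something Flume mandates:

```shell
# Add Flume to the environment (path assumes the tarball was
# unpacked under /usr/local, as in step 1)
export FLUME_HOME=/usr/local/apache-flume-1.6.0-bin
export PATH=$PATH:$FLUME_HOME/bin
```

After editing, run `source /etc/profile` so the changes take effect in the current shell.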
3. Configure Flume's JAVA_HOME
cd /usr/local/apache-flume-1.6.0-bin/conf, rename flume-env.sh.template to flume-env.sh, and add export JAVA_HOME=/usr/lib/jdk1.7.0_75 (adjust the path to match your JDK installation).
4. Copy Flume to the other node (only the startup configuration files differ)
cd /usr/local
scp -r apache-flume-1.6.0-bin slave1:~ and then move it into the corresponding directory on slave1.
5. Configure the agent startup file
On the master node, in Flume's conf directory, rename flume-conf.properties.template to flume-test.conf (any name works, as long as you pass the same name when starting Flume), and then configure the source, channel, and sink.
Here the agent's source is spooldir, its channel is memory, and its sink is avro; see the official Flume website for detailed descriptions of each component type.
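Putting the agent side of step 5 together, a minimal configuration on master might look like the following sketch. The component names, the port 44444, and the agent name "agent" (chosen to match the -n flag used in step 6) are assumptions; the spool directory is the one used later for testing.

```properties
# flume-test.conf on master (agent): spooldir -> memory -> avro
agent.sources = source1
agent.channels = channel1
agent.sinks = sink1

# spooldir source: picks up files dropped into the watched directory
agent.sources.source1.type = spooldir
agent.sources.source1.spoolDir = /home/zhanghuan/Documents/flume-test
agent.sources.source1.channels = channel1

# memory channel: buffers events in RAM
agent.channels.channel1.type = memory
agent.channels.channel1.capacity = 1000

# avro sink: forwards events to the collector on slave1
agent.sinks.sink1.type = avro
agent.sinks.sink1.hostname = slave1
agent.sinks.sink1.port = 44444
agent.sinks.sink1.channel = channel1
```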
On the slave1 node, the collector likewise has three parts: an avro source, a memory channel, and an hdfs sink.
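A matching collector configuration on slave1 might look like the sketch below. The port must agree with whatever the agent's avro sink sends to; the HDFS path and NameNode address are assumptions to adjust for your cluster.

```properties
# flume-test.conf on slave1 (collector): avro -> memory -> hdfs
agent.sources = source1
agent.channels = channel1
agent.sinks = sink1

# avro source: receives events sent by the master agent
agent.sources.source1.type = avro
agent.sources.source1.bind = 0.0.0.0
agent.sources.source1.port = 44444
agent.sources.source1.channels = channel1

# memory channel: buffers events in RAM
agent.channels.channel1.type = memory
agent.channels.channel1.capacity = 1000

# hdfs sink: writes events into HDFS as plain text
agent.sinks.sink1.type = hdfs
agent.sinks.sink1.hdfs.path = hdfs://master:9000/flume/events
agent.sinks.sink1.hdfs.fileType = DataStream
agent.sinks.sink1.channel = channel1
```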
6. Start flume
Start the slave1 (collector) node first, then the master (agent) node.
On slave1:
flume-ng agent -n agent -c /usr/local/apache-flume-1.6.0-bin/conf -f /usr/local/apache-flume-1.6.0-bin/conf/flume-test.conf -Dflume.root.logger=DEBUG,console
On master:
flume-ng agent -n agent -c /usr/local/apache-flume-1.6.0-bin/conf -f /usr/local/apache-flume-1.6.0-bin/conf/flume-test.conf -Dflume.root.logger=DEBUG,console
The startup command is the same on both nodes: -n is the agent name (it must match the name used in the configuration file), -c is the configuration directory, -f is the configuration file, and -Dflume.root.logger sets the log level and output.
Then add a file to the /home/zhanghuan/Documents/flume-test directory on master, and finally check HDFS. If the file's contents appear there, the Flume setup works; otherwise it has failed.
Note that the following error may occur during setup:
Could not configure sink sink1 due to: No channel configured for sink: sink1
org.apache.flume.conf.ConfigurationException: No channel configured for sink: sink1
A source can fan out to multiple channels, so its property is plural: agent.sources.source1.channels = channel1.
But a sink drains exactly one channel, so its property is singular: agent.sinks.sink1.channel = channel1. Watch out for this difference.
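The two properties can be seen side by side in a config fragment (component names such as source1, sink1, and channel1 are placeholders):

```properties
# plural: a source can fan out to several channels
agent.sources.source1.channels = channel1
# singular: a sink drains exactly one channel
agent.sinks.sink1.channel = channel1
```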
Thank you for reading. That covers how to build a Flume cluster; after working through this article you should have a solid grasp of the setup, though the details are best verified through hands-on practice.