This article introduces how a Flume source collects data and outputs it to the console through a memory channel. The walkthrough below uses a concrete configuration and a test session, so you can follow along and try it yourself.
Requirement:
Flume collects data with NetCat and Exec sources and outputs it to the console through a memory channel.
The memory channel can also be swapped for a file channel, which buffers events on disk; its checkpointDir records the read offsets:
# Use a channel which buffers events on disk (file channel)
agent1.channels.channel1.type = file
agent1.channels.channel1.checkpointDir = /var/checkpoint
agent1.channels.channel1.dataDirs = /var/tmp
agent1.channels.channel1.capacity = 1000
agent1.channels.channel1.transactionCapacity = 100
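If you go the file-channel route, the checkpoint and data directories must be writable by the user running the agent; creating them up front avoids permission surprises. A minimal sketch, assuming the agent runs as user hadoop:

sudo mkdir -p /var/checkpoint /var/tmp
sudo chown hadoop:hadoop /var/checkpoint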
Here is the configuration file using the memory channel:
a1.sources = r1 r2
a1.sinks = k1
a1.channels = c1

a1.sources.r1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444

a1.sources.r2.channels = c1
a1.sources.r2.type = exec
a1.sources.r2.command = tail -F /home/hadoop/data/data.log

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sinks.k1.channel = c1
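To run this agent, save the configuration and start flume-ng with the matching agent name. A minimal sketch, assuming the file is saved as $FLUME_HOME/conf/memory-agent.conf (the file name is arbitrary, but --name must match a1):

flume-ng agent \
  --name a1 \
  --conf $FLUME_HOME/conf \
  --conf-file $FLUME_HOME/conf/memory-agent.conf \
  -Dflume.root.logger=INFO,console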
Test results:
[hadoop@hadoop001 ~]$ telnet localhost 44444
Trying ::1...
Connected to localhost.
Escape character is '^]'.
ZOURC123456789
OK
[hadoop@hadoop001 data]$ echo 123 >> data.log
[hadoop@hadoop001 data]$
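If telnet is not installed on the box, nc (netcat) exercises the netcat source just as well; anything you type is sent as one event per line:

nc localhost 44444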
Console output result:
2018-08-10 20:20:12 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 31 32 33    123 }
2018-08-10 20:20:32,439 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 5A 4F 55 52 43 31 32 33 34 35 36 37 38 39 0D ZOURC123456789. }

The logger sink prints each event body as hex bytes followed by the decoded text: 31 32 33 is "123" from the exec source, and 5A 4F 55 52 43 31 32 33 34 35 36 37 38 39 0D is "ZOURC123456789" plus the carriage return that telnet appends.
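To double-check the hex dump yourself, xxd can reverse it (assuming xxd is available; it ships with vim on most distributions):

echo "31 32 33" | xxd -r -p
# prints: 123
echo "5A 4F 55 52 43 31 32 33 34 35 36 37 38 39 0D" | xxd -r -p
# prints: ZOURC123456789 followed by a carriage return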
This is the end of the introduction to "how Flume's Source collects data and outputs it to the console through a memory channel". Thank you for reading!