2025-01-19 Update From: SLTechnology News&Howtos
Preface
Companies of almost any size generate large amounts of data around the clock and collect business log data for use by offline and online analysis systems. Processing these logs requires a dedicated logging system, which in general must provide high availability, high reliability, and scalability.
Flume is a distributed, reliable, and highly available system for collecting, aggregating, and transporting massive volumes of log data. It supports customizing all kinds of data senders in the system for data collection, and it provides the ability to simply process data in flight and write it to a variety of (customizable) data recipients.

The initial releases of Flume, which belonged to Cloudera, are now collectively referred to as Flume OG (original generation). As Flume's functionality expanded, however, the shortcomings of Flume OG were exposed: a bloated code base, poorly designed core components, and non-standard core configuration. In the last Flume OG release, 0.9.4, unstable log transmission was especially serious. To solve these problems, Cloudera completed FLUME-728 on October 22nd and made landmark changes to Flume, refactoring the core components, core configuration, and code architecture; the refactored version is collectively referred to as Flume NG (next generation). Another reason for the change was that Flume was brought into Apache, and Cloudera Flume was renamed Apache Flume.
Building a Highly Available and Scalable Massive Log Collection System with Flume
Chapter 1 gives a basic introduction to Apache Hadoop and Apache HBase, including some of their internal details. Readers already familiar with Hadoop and HBase can skip this chapter.
Chapter 2 introduces Flume's main components and configuration, and explains how to deploy Flume to push data from data-generating servers to storage and indexing systems.
Chapters 3, 4, 5, and 6 explain the different kinds of Sources, Channels, and Sinks built into Flume, and show how to write custom plug-ins to customize the way Flume receives, modifies, formats, and writes data.
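To make the Source/Channel/Sink pipeline concrete, here is a minimal sketch of a Flume agent configuration in Flume's standard properties format. The agent name (a1), component names, port, and output directory are illustrative assumptions, not values from the text:

```properties
# One agent (a1) wiring a single source, channel, and sink together.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: accepts newline-delimited text on TCP port 44444.
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# Channel: in-memory buffer between the source and the sink.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

# Sink: rolls received events into files on the local filesystem.
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /tmp/flume-out
a1.sinks.k1.channel = c1
```

An agent using a file like this would typically be started with something like `flume-ng agent --conf conf --conf-file example.conf --name a1`.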
Chapter 7 discusses different ways to send data from your application to a Flume Agent. This chapter is mainly aimed at developers who write applications that push data to Flume Agents.
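As one hedged sketch of what pushing data from an application might look like: Flume's HTTP Source (with its default JSON handler) accepts a JSON array of events, each with a map of string headers and a string body. The helper names below and the host/port values are illustrative assumptions:

```python
import json
from urllib.request import Request, urlopen


def build_flume_events(lines, headers=None):
    """Format log lines as the JSON array Flume's HTTP Source JSON handler expects."""
    return json.dumps(
        [{"headers": headers or {}, "body": line} for line in lines]
    )


def post_events(payload, host="localhost", port=4141):
    """POST a JSON payload to an HTTP Source assumed to listen on host:port."""
    req = Request(
        f"http://{host}:{port}",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urlopen(req)  # raises URLError if no agent is listening


# Build a two-event payload; post_events would then ship it to the agent.
payload = build_flume_events(["user login", "user logout"], {"app": "demo"})
```

Flume also ships an RPC client SDK for JVM applications, which the chapter's subject matter covers; the HTTP approach above is simply the easiest to sketch without extra dependencies.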
Chapter 8 discusses how to design, deploy, and monitor Flume deployments.