Fluentd is a log collection system whose components are pluggable: with a little configuration you can collect logs and deliver them to different destinations.
This article introduces the MongoDB support that is already built into the latest version of Fluentd, illustrated with an example of collecting Apache/nginx access logs:
(Mechanism diagram)
Install Fluentd
Set up the yum source
vi /etc/yum.repos.d/td.repo
[treasuredata]
name=TreasureData
baseurl=http://packages.treasure-data.com/redhat/x86_64/
gpgcheck=1
gpgkey=http://packages.treasure-data.com/redhat/RPM-GPG-KEY-td-agent
yum clean all
yum makecache
yum -y install td-agent
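As an optional sanity check on a CentOS/RHEL system, you can confirm the package installed and that its init script is registered before moving on:
# confirm the td-agent package is installed
rpm -q td-agent
# the rpm registers a SysV init script named td-agent
chkconfig --list td-agent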
The MongoDB plug-in is already included in the latest Fluentd (td-agent) installation package, so there is no need to install the mongo plug-in separately.
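If you want to verify that the mongo plug-in really ships with your build, you can list the gems bundled with td-agent. The gem command's name and location vary between td-agent releases, so treat the paths below as assumptions to adapt to your installation:
# older td-agent releases bundle fluent-gem with their embedded Ruby (path is an assumption)
/usr/lib64/fluent/ruby/bin/fluent-gem list | grep mongo
# newer releases provide a td-agent-gem wrapper instead
td-agent-gem list | grep mongo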
Configuration
If you installed Fluentd from the rpm/deb package as above, the configuration file is /etc/td-agent/td-agent.conf; otherwise it should be /etc/fluentd/fluentd.conf.
First, edit the source section of the configuration file to define the log source:
<source>
  type tail
  format apache
  pos_file /var/log/td-agent/nginx-access.log.pos
  path /usr/local/nginx/logs/www.access.log
  tag mongo.apache
</source>
Where:
① type tail: tail is one of Fluentd's built-in input plugins; it works by continuously reading new lines appended to the source file.
② format apache: specifies Fluentd's built-in Apache access-log parser.
③ path: specifies the location of the log file to tail (here the nginx access log).
④ tag mongo.apache: sets the tag for these events; tags are used to classify different logs and route them to outputs.
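Before starting the agent, it is worth a quick check that the file in path exists and is readable by the user td-agent runs as, and that the pos_file directory is writable; otherwise the tail input has nothing to collect. A minimal check using the paths from the configuration above:
# the access log must exist and be readable by the td-agent user
ls -l /usr/local/nginx/logs/www.access.log
# the position-file directory must be writable so the tail input can record its offset
ls -ld /var/log/td-agent
# peek at the last line to confirm it looks like an Apache-style access log
tail -n 1 /usr/local/nginx/logs/www.access.log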
Next, edit the output configuration so that the collected logs are stored in MongoDB.
The match directive is followed by a pattern that is matched against the tag we set above; only events whose tag matches the pattern use the configuration inside the block:
<match mongo.*>
  # plugin type
  type mongo
  # mongodb db + collection
  database apache
  collection access
  # mongodb host + port
  host 192.168.30.113
  port 3306
  # interval
  flush_interval 10s
</match>
The comments explain the remaining options; flush_interval controls how often buffered logs are written to MongoDB.
Note: if you are collecting nginx logs, the log format should remain the default.
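Since the output points at a remote MongoDB instance (192.168.30.113:3306 in this example, a non-default port), it is worth confirming the Fluentd host can reach it before starting the agent. A quick check, assuming the mongo client is installed at the path used later in this article:
# ping the MongoDB server from the Fluentd host
/usr/local/mongodb/bin/mongo 192.168.30.113:3306 --eval 'printjson(db.runCommand({ping: 1}))'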
Start td-agent
service td-agent start
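If anything goes wrong, td-agent writes its own log under /var/log/td-agent by default, which is the first place to look for parse errors or MongoDB connection failures:
# confirm the agent is running
service td-agent status
# follow the agent's own log
tail -f /var/log/td-agent/td-agent.log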
Then we can see the collected logs in MongoDB.
/usr/local/mongodb/bin/mongo 192.168.30.113:3306
MongoDB shell version: 2.0.4
connecting to: 192.168.30.113:3306/test
> use apache
switched to db apache
> db.access.find()
{"_ id": ObjectId ("530fee3753357d2437000001"), "host": "192.168.30.1", "user": "-", "method": "GET", "path": "/ api?callback=jQuery172014558692439459264_1393552941396&do=show_workspace&MEMBER_ID=80&os=w&webtoken=1b43342c1f&_=1393552941501", "code": "301"," size ":" 178", "referer": "http://www.weiduoa.com/"," "agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36", "time": ISODate ("2014-02-28T02:02:29Z")}
{"_ id": ObjectId ("530fee3753357d2437000002"), "host": "192.168.30.1", "user": "-", "method": "GET", "path": "/ api?callback=jQuery172014558692439459264_1393552941397&do=inboxmemberlist&MEMBER_ID=80&os=w&webtoken=1b43342c1f&_=1393552941505", "code": "301"," size ":" 178", "referer": "http://www.weiduoa.com/"," "agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36", "time": ISODate ("2014-02-28T02:02:29Z")}