This article analyzes, with a worked example, how to use ELK 5.0.1 + Filebeat 5.0.1 to monitor MongoDB logs in real time and parse the mongodb log format with grok regular expressions, in the hope of giving readers facing the same problem a simple, workable approach.
For the installation and deployment of ELK 5.0.1, see the earlier post (ELK5.0.1 + Filebeat5.0.1 for LINUX RHEL6.6 monitoring MongoDB log).
This post focuses on how to use filebeat to monitor mongodb database logs in real time and how to parse those logs with regular expressions in logstash.
Once ELK 5.0.1 is deployed, install filebeat on the database server whose mongodb logs need monitoring, so it can ship the log entries.
First, you need to modify the filebeat configuration file:
[root@se122 filebeat-5.0.1]# pwd
/opt/filebeat-5.0.1
[root@se122 filebeat-5.0.1]#
[root@se122 filebeat-5.0.1]# ls
data  filebeat  filebeat.full.yml  filebeat.template-es2x.json  filebeat.template.json  filebeat.yml  scripts
[root@se122 filebeat-5.0.1]# cat filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /root/rs0-0.log        # the mongodb log file that filebeat monitors in real time
      document_type: mongodblog  # document type filebeat attaches to the mongodb log events sent to logstash; must be set (logstash matches on it when parsing)
      input_type: log
  registry_file:
    /opt/filebeat-5.0.1/data/registry
output.logstash:
  hosts: ["10.117.194.228:5044"]  # IP address of the host running the logstash service and the port it listens on
[root@se122 filebeat-5.0.1]#
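Before starting the shipper, the modified YAML can be validated first; a minimal sanity check, assuming the -configtest flag of filebeat 5.x:
/opt/filebeat-5.0.1/filebeat -configtest -c /opt/filebeat-5.0.1/filebeat.yml   # parses and validates filebeat.yml without shipping anything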
Secondly, modify the logstash configuration file:
[root@rhel6 config]# pwd
/opt/logstash-5.0.1/config
[root@rhel6 config]# cat logstash_mongodb.conf
#input {
#  stdin {}
#}
input {
  beats {
    host => "0.0.0.0"
    port => 5044
    type => "mongodblog"    # the log type shipped by filebeat is mongodblog
  }
}
filter {
  if [type] == "mongodblog" {    # only process the mongodblog data sent by filebeat
    grok {    # first layer: parse the raw mongodblog line
      match => ["message", "%{TIMESTAMP_ISO8601:timestamp}\s+%{MONGO3_SEVERITY:severity}\s%{MONGO3_COMPONENT:component}\s+(?:\[%{DATA:context}\])?\s%{GREEDYDATA:body}"]
    }
    if [component] =~ "WRITE" {
      grok {    # second layer: parse the body and extract the command_type, db_name, command and spend_time fields
        match => ["body", "%{WORD:command_type}\s%{DATA:db_name}\s+\w+\:\s%{GREEDYDATA:command} %{INT:spend_time}ms$"]
      }
    } else {
      grok {
        match => ["body", "\w+\s%{DATA:db_name}\s+\w+\:\s%{WORD:command_type}\s%{GREEDYDATA:command} protocol.*%{INT:spend_time}ms$"]
      }
    }
    date {
      match => ["timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss", "ISO8601"]
      remove_field => ["timestamp"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.144.230:9200"]
    index => "mongod_log-%{+YYYY.MM}"
  }
  stdout {
    codec => rubydebug
  }
}
[root@rhel6 config]#
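Before wiring filebeat in, the pipeline definition can be syntax-checked as well (logstash 5.x supports --config.test_and_exit, which parses the config, prints Configuration OK on success, and exits):
/opt/logstash-5.0.1/bin/logstash -f /opt/logstash-5.0.1/config/logstash_mongodb.conf --config.test_and_exit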
Then, make sure all the service processes on the ELK server are running; the startup commands are:
[elasticsearch@rhel6 ~]$ /home/elasticsearch/elasticsearch-5.0.1/bin/elasticsearch
[root@rhel6 ~]# /opt/logstash-5.0.1/bin/logstash -f /opt/logstash-5.0.1/config/logstash_mongodb.conf
[root@rhel6 ~]# /opt/kibana-5.0.1/bin/kibana
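Before starting filebeat it is worth confirming that the two listeners are actually up; a quick check against the addresses used in the configs above:
curl http://192.168.144.230:9200    # the elasticsearch REST port should answer with cluster info
netstat -tlnp | grep 5044           # the logstash beats input should be listening on port 5044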
Start filebeat on the remote database server to begin monitoring the mongodb log:
[root@se122 filebeat-5.0.1]# /opt/filebeat-5.0.1/filebeat -e -c /opt/filebeat-5.0.1/filebeat.yml -d "publish"
2017-02-16 05:50:40.931969 beat.go:264: INFO Home path: [/opt/filebeat-5.0.1] Config path: [/opt/filebeat-5.0.1] Data path: [/opt/filebeat-5.0.1/data] Logs path: [/opt/filebeat-5.0.1/logs]
2017-02-16 05:50:40.932036 beat.go:174: INFO Setup Beat: filebeat; Version: 5.0.1
2017-02-16 05:50:40.932167 logp.go:219: INFO Metrics logging every 30s
2017-02-16 05:50:40.932227 logstash.go:90: INFO Max Retries set to: 3
2017-02-16 05:50:40.932444 outputs.go:106: INFO Activated logstash as output plugin.
2017-02-16 05:50:40.932594 publish.go:291: INFO Publisher name: se122
2017-02-16 05:50:40.935437 async.go:63: INFO Flush Interval set to: 1s
2017-02-16 05:50:40.935473 async.go:64: INFO Max Bulk Size set to: 2048
2017-02-16 05:50:40.935745 beat.go:204: INFO filebeat start running.
2017-02-16 05:50:40.935836 registrar.go:66: INFO Registry file set to: /opt/filebeat-5.0.1/data/registry
2017-02-16 05:50:40.935905 registrar.go:99: INFO Loading registrar data from /opt/filebeat-5.0.1/data/registry
2017-02-16 05:50:40.936717 registrar.go:122: INFO States Loaded from registrar: 1
2017-02-16 05:50:40.936771 crawler.go:34: INFO Loading Prospectors: 1
2017-02-16 05:50:40.936860 prospector_log.go:40: INFO Load previous states from registry into memory
2017-02-16 05:50:40.936923 registrar.go:211: INFO Starting Registrar
2017-02-16 05:50:40.936939 sync.go:41: INFO Start sending events to output
2017-02-16 05:50:40.937148 spooler.go:64: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-02-16 05:50:40.937286 prospector_log.go:67: INFO Previous states loaded: 1
2017-02-16 05:50:40.937404 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 1
2017-02-16 05:50:40.937440 crawler.go:61: INFO All prospectors are initialised and running with 1 states to persist
2017-02-16 05:50:40.937478 prospector.go:106: INFO Starting prospector of type: log
2017-02-16 05:50:40.937745 log.go:84: INFO Harvester started for file: /root/rs0-0.log
We can see that filebeat is now monitoring /root/rs0-0.log in real time. Switching to the foreground window where logstash is running, we see output like the following:
{
        "severity" => "I",
          "offset" => 243843239,
      "spend_time" => "0",
      "input_type" => "log",
          "source" => "/root/rs0-0.log",
         "message" => "2017-02-04T14:03:30.025+0800 I COMMAND [conn272] command admin.$cmd command: replSetGetStatus {replSetGetStatus: 1} keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks: {} protocol:op_query 0ms",
            "type" => "mongodblog",
            "body" => "command admin.$cmd command: replSetGetStatus {replSetGetStatus: 1} keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks: {} protocol:op_query 0ms",
         "command" => "{replSetGetStatus: 1} keyUpdates:0 writeConflicts:0 numYields:0 reslen:364 locks: {}",
            "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
       "component" => "COMMAND",
      "@timestamp" => 2017-02-04T06:03:30.025Z,
         "db_name" => "admin.$cmd",
    "command_type" => "replSetGetStatus",
        "@version" => "1",
            "beat" => {
        "hostname" => "se122",
            "name" => "se122",
         "version" => "5.0.1"
    },
            "host" => "se122",
         "context" => "conn272"
}
This shows that logstash filters the events as configured and parses the mongodblog lines by the specified grok rules; the parsed documents are then indexed into elasticsearch for kibana to use.
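To confirm the index exists, elasticsearch can be queried directly; a quick check against the output settings above (the index name carries the current year and month):
curl 'http://192.168.144.230:9200/_cat/indices?v'                       # list indices; look for mongod_log-YYYY.MM
curl 'http://192.168.144.230:9200/mongod_log-*/_search?size=1&pretty'   # fetch one document to verify the parsed fields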
Then, after creating an index pattern for mongod_log-* in kibana, you can see the monitored mongodb log in a custom kibana view.
This concludes the worked example of monitoring MongoDB logs in real time with ELK 5.0.1 + Filebeat 5.0.1 and parsing the mongodb log format with regular expressions. I hope the above content offers a simple, workable approach for anyone facing the same problem.