
Use Logstash to collect MongoDB logs and alert through Zabbix


1. Application scenario description

In some cases, monitoring MongoDB's ports and various status metrics through Zabbix is not enough; monitoring MongoDB's logs matters too. For example, a mongos may report a SocketException error when connecting to a backend shard.

2. Use Logstash to analyze MongoDB logs

To record slow queries, the slow query profiling feature must be turned on first:

> use jd05
> db.setProfilingLevel(1, 50)
{ "was" : 1, "slowms" : 50, "ok" : 1 }

A level of 1 records only slow operations; with the threshold set to 50, any operation slower than 50 milliseconds is logged.

A level of 2 records all operations. That is not recommended in a production environment, but can be useful in development:

> db.setProfilingLevel(2)
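To double-check what is currently in effect, the mongo shell helper db.getProfilingStatus() can be used; with the level-1 / 50 ms setting above it should print something like:

> db.getProfilingStatus()
{ "was" : 1, "slowms" : 50 }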

An entry like the following is then recorded in MongoDB's log file:

Mon Apr 27 16:45:01.853 [conn282854698] command jd01.$cmd command: { count: "player", query: { request_time: { $gte: 1430123701 } } } ntoreturn:1 keyUpdates:0 numYields: 7 locks(micros) r:640822 reslen:48 340ms

The Logstash configuration file shipper_mongodb.conf looks like this:

input {
    file {
        path => "/data/app_data/mongodb/log/*.log"
        type => "mongodb"
        sincedb_path => "/dev/null"
    }
}

filter {
    if [type] == "mongodb" {
        grok {
            match => ["message", "(?m)%{GREEDYDATA}\[conn%{NUMBER:mongoConnection}\] %{WORD:mongoCommand} %{WORD:mongoDatabase}\.%{NOTSPACE:mongoCollection} %{WORD}: \{%{GREEDYDATA:mongoStatement}\} %{GREEDYDATA} %{NUMBER:mongoElapsedTime:int}ms"]
            add_tag => "mongodb"
        }
        grok {
            match => ["message", "cursorid:%{NUMBER:mongoCursorId}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "ntoreturn:%{NUMBER:mongoNumberToReturn:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "ntoskip:%{NUMBER:mongoNumberToSkip:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "nscanned:%{NUMBER:mongoNumberScanned:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "scanAndOrder:%{NUMBER:mongoScanAndOrder:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "idhack:%{NUMBER:mongoIdHack:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "nmoved:%{NUMBER:mongoNumberMoved:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "nupdated:%{NUMBER:mongoNumberUpdated:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "keyUpdates:%{NUMBER:mongoKeyUpdates:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "numYields: %{NUMBER:mongoNumYields:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "locks\(micros\) r:%{NUMBER:mongoReadLocks:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "locks\(micros\) w:%{NUMBER:mongoWriteLocks:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "nreturned:%{NUMBER:mongoNumberReturned:int}"]
            add_tag => "mongo_profiling_data"
        }
        grok {
            match => ["message", "reslen:%{NUMBER:mongoResultLength:int}"]
            add_tag => "mongo_profiling_data"
        }
        if "mongo_profiling_data" in [tags] {
            mutate {
                remove_tag => "_grokparsefailure"
            }
        }
        if "_grokparsefailure" in [tags] {
            grep {
                match => ["message", "(Failed|error|SOCKET)"]
                add_tag => ["zabbix-sender"]
                add_field => [
                    "zabbix_host", "%{host}",
                    "zabbix_item", "mongo.error"
                    # "zabbix_field", "%{message}"
                ]
            }
            mutate {
                remove_tag => "_grokparsefailure"
            }
        }
    }
}

output {
    stdout {
        codec => "rubydebug"
    }
    zabbix {
        tags => "zabbix-sender"
        host => "zabbixserver"
        port => "10051"
        zabbix_sender => "/usr/local/zabbix/bin/zabbix_sender"
    }
    redis {
        host => "10.4.29.162"
        data_type => "list"
        key => "logstash"
    }
}

The configuration file works in a few steps:

Logstash's file input plugin reads the MongoDB log files from the /data/app_data/mongodb/log/ directory, and the filter section then parses the log contents.

If a log line contains profiling keywords such as cursorid or nreturned, the corresponding values are extracted and the event is tagged mongo_profiling_data for later statistics.

All other log lines are checked for error keywords; if one is found, an alert is sent through Zabbix.

Note that when using the zabbix output plugin to send an alert, the events must first be filtered down by keyword, and each event needs three fields: zabbix_host, zabbix_item, and zabbix_field. The value of zabbix_item has to match the item key configured in the Zabbix frontend. If zabbix_field is not specified, the message field is sent by default.
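Before wiring this into production, the configuration can be syntax-checked and then run in the foreground to watch the rubydebug output. The paths below assume a Logstash 1.4-era install unpacked under /opt/logstash; adjust to your own layout:

# verify the config parses cleanly, then run the shipper in the foreground
/opt/logstash/bin/logstash agent -f shipper_mongodb.conf --configtest
/opt/logstash/bin/logstash agent -f shipper_mongodb.conf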

Then add a matching template and item in Zabbix.
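On the Zabbix side, the mongo.error item must be a "Zabbix trapper" item, because the zabbix output pushes values with zabbix_sender rather than being polled. The item can be tested by hand before involving Logstash at all; the hostname here is a placeholder for whatever name Zabbix knows the monitored host by:

/usr/local/zabbix/bin/zabbix_sender -z zabbixserver -p 10051 -s "mongodb-host" -k mongo.error -o "test error message"

If the sender reports "processed: 1", the trapper item (and any trigger built on it) is working.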

In the same way, you can collect logs from PHP-FPM, Nginx, Redis, MySQL, and other services, and alert through Zabbix; a sketch follows.
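As an illustration of how the pattern generalizes (the "nginx" type tag and nginx.error item key here are made-up examples, and the service's log would need its own file input), the same keyword-filter-plus-fields approach carries over directly:

filter {
    if [type] == "nginx" {
        grep {
            match => ["message", "(error|crit|alert|emerg)"]
            add_tag => ["zabbix-sender"]
            add_field => ["zabbix_host", "%{host}", "zabbix_item", "nginx.error"]
        }
    }
}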

After that, all that remains is to define different charts for the different extracted fields.

Reference documentation:

http://techblog.holidaycheck.com/profiling-mongodb-with-logstash-and-kibana/

http://tech.rhealitycheck.com/visualizing-mongodb-profiling-data-using-logstash-and-kibana/

http://www.logstash.net/docs/1.4.2/outputs/zabbix
