
Detailed steps for collecting mysql slow query logs by using filebeat

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article walks through the detailed steps of collecting MySQL slow query logs with Filebeat and shipping them through Logstash into Elasticsearch. I hope the contents are of practical use to you; let's get straight into it.

Environment introduction:

Operating system version: CentOS Linux release 7.3.1611 (Core) 64-bit

MySQL version: 5.6.28

Logstash version: 5.3.0

Elasticsearch version: 5.3.0

Kibana version: 5.3.0

Java version: 1.8.0_121

MySQL slow log sample:

[bash]
# Time: 170420 1:41:04
# User@Host: root[root] @ [192.168.1.178]  Id: 223889575
# Query_time: 3.887598  Lock_time: 0.000099  Rows_sent: 19  Rows_examined: 19
SET timestamp=1492623664;
select * from users_test;
# User@Host: root[root] @ [192.168.1.178]  Id: 2238895828
# Query_time: 0.000150  Rows_sent: 28  Rows_examined: 28
SET timestamp=1492623664;
select * from users_test;
# Time: 170420 1:41:12
[/bash]
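A log like the one above is only written when the slow query log is enabled on the MySQL server. A minimal my.cnf sketch — the one-second threshold is an assumption for illustration, not taken from this article; the file path reuses the one collected later by Filebeat:

```ini
[mysqld]
# Turn on the slow query log and point it at the file Filebeat will tail
slow_query_log = 1
slow_query_log_file = /data/mysql/xxx-slow.log
# Log statements slower than 1 second (example threshold)
long_query_time = 1
```

The same settings can also be applied at runtime with `SET GLOBAL …` if a restart is undesirable.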

Filebeat and logstash configuration

filebeat.yml configuration file

[bash]
filebeat:
  prospectors:
    - paths:
        - /data/mysql/xxx-slow.log
      document_type: mysqlslow
      multiline:
        pattern: "^# User@Host:"
        negate: true
        match: after
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["192.168.1.63:5044"]
[/bash]
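The multiline settings above merge every line that does not start with `# User@Host:` (negate: true) into the preceding event (match: after), so each multi-line slow-log entry arrives in Logstash as one message. A minimal Python sketch of that grouping logic, using a shortened hypothetical sample:

```python
import re

# Multiline rule from filebeat.yml: lines NOT matching the pattern
# (negate: true) are appended to the previous event (match: after).
PATTERN = re.compile(r"^# User@Host:")

def group_events(lines):
    events = []
    for line in lines:
        if PATTERN.search(line) or not events:
            events.append([line])       # a pattern line starts a new event
        else:
            events[-1].append(line)     # a continuation line joins the previous event
    return ["\n".join(e) for e in events]

sample = [
    "# User@Host: root[root] @ [192.168.1.178]",
    "# Query_time: 3.887598  Lock_time: 0.000099",
    "SET timestamp=1492623664;",
    "select * from users_test;",
    "# User@Host: root[root] @ [192.168.1.178]",
    "SET timestamp=1492623664;",
]
events = group_events(sample)
print(len(events))  # → 2, one event per slow-log entry
```

This is why the grok pattern in logstash.conf can match user, query time, and row counts in a single pass: the whole entry is one event.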

logstash.conf

[bash]
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => [ "message", "(?m)^# User@Host: %{USER:query_user}\[[^\]]+\] @ (?:(?<query_host>\S*) )?\[(?:%{IP:query_ip})?\]\s*Id:\s*%{NUMBER:id:int}\s+# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};" ]
  }
  grok {
    match => { "message" => "# Time: " }
    add_tag => [ "drop" ]
    tag_on_failure => []
  }
  if "drop" in [tags] {
    drop {}
  }
  date {
    match => [ "timestamp", "UNIX", "YYYY-MM-dd HH:mm:ss" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => "192.168.1.63:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[type]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
[/bash]
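To sanity-check the grok extraction before deploying, the same captures can be sketched as a plain Python regex. The field names mirror the grok captures above; the pattern is a simplified equivalent for testing, not the exact grok-compiled regex:

```python
import re

# Simplified Python equivalent of the grok captures in logstash.conf
SLOW_RE = re.compile(
    r"^# User@Host: (?P<query_user>\w+)\[[^\]]+\] @ \[(?P<query_ip>[\d.]+)\]\s+"
    r"Id:\s+(?P<id>\d+)\s+"
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)\s+"
    r"SET timestamp=(?P<timestamp>\d+);",
    re.MULTILINE,
)

# One merged multiline event, as Filebeat would ship it
entry = """# User@Host: root[root] @ [192.168.1.178]  Id: 223889575
# Query_time: 3.887598  Lock_time: 0.000099  Rows_sent: 19  Rows_examined: 19
SET timestamp=1492623664;
select * from users_test;"""

fields = SLOW_RE.search(entry).groupdict()
print(fields["query_user"], fields["query_time"], fields["rows_sent"])
# → root 3.887598 19
```

The extracted `timestamp` field is what the date filter then parses as a UNIX epoch and promotes to the event's `@timestamp`.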

Those are the detailed steps for collecting MySQL slow query logs with Filebeat. I hope you found them helpful; if you want to learn more, please continue to follow our industry information.


