2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article explains how to use pt-query-digest to analyze logs. The editor finds it very practical and shares it here for reference; read on for the details.
1. Introduction to tools
pt-query-digest is a tool for analyzing MySQL slow queries. It can analyze the binlog, general log, slow log, SHOW PROCESSLIST output, or MySQL protocol data captured by tcpdump, and the results can be written to a file. During analysis, the conditions of each query statement are first parameterized (fingerprinted), then the parameterized queries are grouped so that execution time, count, proportion, and so on can be computed per query class; from these results you can find problem queries and optimize them. pt-query-digest is a Perl script, so you only need to download it and make it executable:
[root@test1]# wget percona.com/get/pt-query-digest
[root@test1]# chmod u+x pt-query-digest
2. Tool usage syntax
pt-query-digest [OPTIONS] [FILES] [DSN]
--create-review-table: when using the --review parameter to write analysis results to a table, create the table automatically if it does not exist.
--create-history-table: when using the --history parameter to write analysis results to a table, create the table automatically if it does not exist.
--filter: match and filter the input slow-query events against the given expression before analyzing them.
--limit: limit the output to a percentage or number of results. The default is 20, i.e. the 20 slowest statements are output. With 50%, results are sorted by share of total response time, descending, and output stops once the cumulative total reaches 50%.
--host: MySQL server address.
--user: MySQL user name.
--password: MySQL user password.
--history: save analysis results to a table; these results are more detailed. On the next run with --history, if the same statement exists but its query time range differs from what is in the history table, a new row is recorded, so you can compare the history of a query class by querying on the same CHECKSUM.
--review: save analysis results to a table. This analysis only parameterizes the query conditions and is relatively simple: one record per query class. On the next run with --review, a statement whose parameterized form already exists is not recorded again.
--output: output format of the analysis results. Values can be report (standard analysis report), slowlog (MySQL slow log), json, or json-anon; report is generally used for readability.
--since: when the analysis starts. The value is a string: either a point in time in "yyyy-mm-dd [hh:mm:ss]" format, or a simple interval with units s (seconds), m (minutes), h (hours), d (days); for example, 12h means statistics start from 12 hours ago.
--until: end time. Combined with --since, this lets you analyze slow queries over a given period.
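To make the relative form of --since concrete, the sketch below uses GNU date (assumed available) to print the absolute cutoff that a relative value like 12h resolves to; the file name slow.log is illustrative:

```shell
# Resolve "12 hours ago" to an absolute timestamp (GNU coreutils date assumed).
cutoff=$(date -d '12 hours ago' '+%Y-%m-%d %H:%M:%S')
# The two invocations below would cover the same window.
echo "pt-query-digest --since='$cutoff' slow.log   # same as --since=12h"
```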
3. Interpreting the analysis report
Part I: overall statistics. The fields are:
Overall: total number of queries; 266 in this example.
Time range: the time span over which the queries executed.
Unique: the number of unique queries, i.e. the number of distinct queries after the query conditions are parameterized; 55 in this example.
Total: total; min: minimum; max: maximum; avg: average.
95%: the value at the 95% position when all values are sorted ascending; usually the most useful reference.
Median: the middle value when all values are sorted ascending.
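As a minimal sketch of how these summary statistics are derived, the pipeline below computes min, max, average, the 95% value, and the median over a made-up list of per-query execution times in seconds (the values are illustrative only):

```shell
# Sort sample query times ascending, then compute summary statistics with awk.
stats=$(printf '%s\n' 0.3 1.2 0.8 5.4 2.1 0.5 9.7 0.9 1.1 0.4 | sort -n | awk '
{ v[NR] = $1; sum += $1 }
END {
  n = NR
  idx = int(n * 0.95); if (idx < n * 0.95) idx++   # value at the 95% position
  med = (n % 2) ? v[(n + 1) / 2] : (v[n / 2] + v[n / 2 + 1]) / 2
  printf "min=%s max=%s avg=%.2f 95%%=%s median=%.2f", v[1], v[n], sum / n, v[idx], med
}')
echo "$stats"   # min=0.3 max=9.7 avg=2.24 95%=9.7 median=1.00
```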
Part II: query grouping statistics.
This part parameterizes and groups the queries, then analyzes the execution of each query class; the results are sorted by total execution time, descending. The fields are:
Response: total response time.
time: this query class's share of the total time in the analysis, as a percentage.
Calls: number of executions, i.e. the total count of this type of query statement in the analysis.
R/Call: average response time per execution.
Item: the query object.
Part III: detailed statistics for each query.
This part gives detailed statistics for query #1; the table at the top lists the execution count, maximum, minimum, average, 95% value, and other statistics. The remaining fields are:
Databases: database name.
Users: number of executions per user (and percentage).
Query_time distribution: query time distribution; the bar length reflects each interval's share. In this example, queries in the 1s-10s range are several times more numerous than those over 10s.
Tables: tables involved in the query.
Explain: an example statement.
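The Query_time distribution histogram groups queries into buckets by order of magnitude. As a rough, self-contained sketch of that bucketing, the awk script below counts a made-up set of Query_time values (seconds) per bucket; the values and bucket labels are illustrative:

```shell
# Bucket sample query times by magnitude, the way the report's histogram does.
dist=$(printf '%s\n' 0.002 0.03 0.5 1.2 3.8 7.0 15.0 | awk '
{
  if      ($1 >= 10)   b["10s+"]++
  else if ($1 >= 1)    b["1s"]++
  else if ($1 >= 0.1)  b["100ms"]++
  else if ($1 >= 0.01) b["10ms"]++
  else                 b["1ms"]++
}
END { printf "1ms=%d 10ms=%d 100ms=%d 1s=%d 10s+=%d", b["1ms"], b["10ms"], b["100ms"], b["1s"], b["10s+"] }')
echo "$dist"   # three queries fall in the 1s-10s bucket, one beyond 10s
```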
4. Usage examples
(1) Analyze a slow query file directly:
pt-query-digest slow.log > slow_report.log
(2) Analyze queries from the last 12 hours:
pt-query-digest --since=12h slow.log > slow_report2.log
(3) Analyze queries within a specified time range:
pt-query-digest slow.log --since '2014-04-17 09:30:00' --until '2014-04-17 10:00:00' > slow_report3.log
(4) Analyze slow queries containing only SELECT statements:
pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log
(5) Analyze slow queries for a given user:
pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log
(6) Find all slow queries that perform a full table scan or a full join:
pt-query-digest --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")' slow.log > slow_report6.log
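The --filter expressions above are Perl snippets evaluated against each parsed event. As a self-contained analog of the SELECT-only filter in example (4), the sketch below greps a synthetic slow-log fragment for statements beginning with SELECT, case-insensitively; the file path and its contents are made up for illustration:

```shell
# Build a tiny synthetic "slow log" of three statements.
cat > /tmp/sample-slow.log <<'EOF'
SELECT * FROM t1 WHERE id = 5;
UPDATE t2 SET a = 1 WHERE id = 7;
select name FROM users WHERE id = 9;
EOF
# Count and print statements whose text starts with SELECT (any case),
# roughly what the fingerprint regex m/^select/i matches per event.
matches=$(grep -ci '^select' /tmp/sample-slow.log)
grep -i '^select' /tmp/sample-slow.log
echo "matched $matches of 3 statements"
```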
(7) Save queries to the query_review table:
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log
(8) Save queries to the query_history table:
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_history --create-review-table slow.log_20140401
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_history --create-review-table slow.log_20140402
(9) Capture MySQL TCP protocol data with tcpdump, then analyze it:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log
(10) Analyze the binlog:
mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log
(11) Analyze the general log:
pt-query-digest --type=genlog localhost.log > slow_report11.log
Thank you for reading! That concludes this article on how to use pt-query-digest to analyze logs. I hope it has been helpful; if you found it useful, please share it so more people can see it.