This article explains how to install and use pt-query-digest with MySQL. It goes into a fair amount of detail and should be a useful reference for anyone interested in slow query analysis.
pt-query-digest installation and use
I. Brief introduction
pt-query-digest is a tool for analyzing MySQL slow queries. It can analyze the binlog, the general log, the slow log, SHOW PROCESSLIST output, or MySQL protocol data captured with tcpdump, and the results can be written to a file. The analysis works by first parameterizing the conditions of each query statement (fingerprinting), then grouping the parameterized queries and computing statistics such as total execution time, execution count, and share of overall time. From these results it is easy to spot problem queries and optimize them.
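Note that pt-query-digest can only report what the server actually logged, so make sure the slow query log is enabled before collecting data. A minimal sketch follows; the 1-second threshold is an illustrative assumption, and the log path simply matches the one used in the examples later in this article (put the same settings in my.cnf if they should survive a restart):

# enable the slow query log at runtime; threshold and path are example values
mysql -uroot -p -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 1; SET GLOBAL slow_query_log_file = '/usr/local/mysql/data/slow.log';"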
II. Install pt-query-digest
1. Download page:
https://www.percona.com/doc/percona-toolkit/3.0/installation.html
2. Configure the yum repository
Configuring the Percona repository (see "Enabling Testing and Experimental Repositories" in the installation docs):
yum install http://www.percona.com/downloads/percona-release/redhat/0.1-6/percona-release-0.1-6.noarch.rpm
3. Install with yum
yum install percona-toolkit
The tools are installed to /usr/bin by default:
[root@tqsrv122 yum]# ls /usr/bin/pt-*
/usr/bin/pt-align                 /usr/bin/pt-ioprofile              /usr/bin/pt-slave-delay
/usr/bin/pt-archiver              /usr/bin/pt-kill                   /usr/bin/pt-slave-find
/usr/bin/pt-config-diff           /usr/bin/pt-mext                   /usr/bin/pt-slave-restart
/usr/bin/pt-deadlock-logger       /usr/bin/pt-mongodb-query-digest   /usr/bin/pt-stalk
/usr/bin/pt-diskstats             /usr/bin/pt-mongodb-summary        /usr/bin/pt-summary
/usr/bin/pt-duplicate-key-checker /usr/bin/pt-mysql-summary          /usr/bin/pt-table-checksum
/usr/bin/pt-fifo-split            /usr/bin/pt-online-schema-change   /usr/bin/pt-table-sync
/usr/bin/pt-find                  /usr/bin/pt-pmp                    /usr/bin/pt-table-usage
/usr/bin/pt-fingerprint           /usr/bin/pt-query-digest           /usr/bin/pt-upgrade
/usr/bin/pt-fk-error-logger       /usr/bin/pt-secure-collect         /usr/bin/pt-variable-advisor
/usr/bin/pt-heartbeat             /usr/bin/pt-show-grants            /usr/bin/pt-visual-explain
/usr/bin/pt-index-usage           /usr/bin/pt-sift
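Once installed, a quick way to confirm the toolkit is on the PATH is to ask for the version (the exact number printed depends on the release you installed):

pt-query-digest --version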
4. Brief notes on a few of the tools (details: https://www.percona.com/doc/percona-toolkit/3.0/index.html)
(1) Analyze and summarize a slow query log
pt-query-digest /usr/local/mysql/data/slow.log
(2) Server summary
pt-summary
(3) Server disk monitoring
pt-diskstats
(4) MySQL service status summary
pt-mysql-summary --user=root --password=root
III. pt-query-digest syntax and important options
pt-query-digest [OPTIONS] [FILES] [DSN]
--create-review-table: when --review is used to write results to a table, create that table automatically if it does not exist.
--create-history-table: when --history is used to write results to a table, create that table automatically if it does not exist.
--filter: match and filter the input events against the given Perl expression before analyzing them.
--limit: limit the output to a percentage or a count. The default is 95%:20, i.e. print the slowest statements until they account for 95% of total response time, up to at most 20 statements. A value such as 50% sorts queries by their share of total response time, largest first, and stops once the cumulative share reaches 50%.
--host: MySQL server address.
--user: MySQL user name.
--password: MySQL user password.
--history: save the analysis results to a table, in more detail than --review. On the next run with --history, if the same statement appears but its time interval differs from what is already in the history table, a new row is recorded, so you can trace how a given class of query changes over time by querying on the same CHECKSUM.
--review: save the analysis results to a table. This analysis only parameterizes the query conditions and is comparatively simple: one row per class of query. On the next run with --review, a statement whose fingerprint already exists is not recorded again.
--output: the output format. Valid values include report (the standard analysis report), slowlog (MySQL slow log format), json, and json-anon; report is the usual choice because it is the easiest to read.
--since: when to start the analysis. The value is a string: either a point in time in "yyyy-mm-dd [hh:mm:ss]" format, or a duration with a unit suffix: s (seconds), h (hours), m (minutes), d (days). For example, 12h means statistics start from 12 hours ago.
--until: the end time; combined with --since it restricts the analysis to slow queries within a given window.
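To see how these options combine, here is a small sketch (the file name, window, and limit are example values, not from the original article):

# the 10 query classes with the largest total response time over the past day
pt-query-digest --since=24h --limit=10 --output=report slow.log > slow_report_24h.log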
IV. Interpreting pt-query-digest output
Part 1: overall statistics
Overall: how many queries there are in total
Time range: the time range over which the queries were executed
Unique: the number of unique queries, i.e. how many distinct queries remain after the query conditions are parameterized
total: sum of all values; min: minimum; max: maximum; avg: average
95%: the value at the 95% position when all values are sorted from smallest to largest; this number is usually the most useful reference
Median: the middle value when all values are sorted from smallest to largest
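For example, if 100 executions are sorted by duration, the 95% value is the 95th one: only five executions were slower, so unlike the average it is not skewed by a single extreme outlier.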
# user time, system time, resident memory and virtual memory consumed by the tool while analyzing the log
# 340ms user time, 140ms system time, 23.99M rss, 203.11M vsz
# date the tool was run
# Current date: Fri Nov 25 02:37:18 2016
# hostname of the machine running the analysis
# Hostname: localhost.localdomain
# name of the file analyzed
# Files: slow.log
# total number of statements, number of unique statements, QPS, concurrency
# Overall: 2 total, 2 unique, 0.01 QPS, 0.01x concurrency
# time range covered by the log
# Time range: 2016-11-22 06:06:18 to 06:11:40
# attribute, total, minimum, maximum, average, 95th percentile, standard deviation, median
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# statement execution time
# Exec time             3s   640ms      2s      1s      2s   999ms      1s
# lock time
# Lock time            1ms       0     1ms   723us     1ms     1ms   723us
# number of rows sent to the client
# Rows sent              5       1       4    2.50       4    2.12    2.50
# number of rows examined (scanned) by the server
# Rows examine     186.17k       0 186.17k  93.09k 186.17k 131.64k  93.09k
# size of the query text
# Query size           455      15     440  227.50     440  300.52  227.50
Part 2: query group statistics
Rank: the rank of this class of statement among all statements, sorted by total query time in descending order by default; the sort key can be changed with --order-by
Query ID: the ID of the statement class (a hash computed after collapsing extra whitespace and literal text)
Response: total response time
Time: this query's percentage of the total time in this analysis
Calls: the number of executions, i.e. how many queries of this class appear in this analysis
R/Call: average response time per execution
V/M: the variance-to-mean ratio of the response time
Item: the query
# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ==================
#    1 0xF9A57DD5A41825CA  2.0529 76.2%     1 2.0529  0.00 SELECT
#    2 0x4194D8F83F4F9365  0.6401 23.8%     1 0.6401  0.00 SELECT wx_member_base
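The profile is ranked by Query_time:sum by default. To rank by another attribute, pass --order-by in the attribute:aggregate form documented by Percona; for example, to surface scan-heavy queries:

# rank query classes by total rows examined instead of total execution time
pt-query-digest --order-by=Rows_examined:sum slow.log > slow_report_rows.log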
Part 3: detailed statistics for each query
In the detailed results below, the table at the top lists statistics such as the execution count, maximum, minimum, average, and 95th percentile.
ID: the ID of the query, corresponding to the Query ID in Part 2
Databases: the database name
Users: the number of executions per user (with percentage)
Query_time distribution: the distribution of query times, where bar length reflects each interval's share; in this example there are twice as many queries in the 1s-10s range as queries over 10s
Tables: the tables involved in the query
Explain: the SQL statement
# Query 1: 0 QPS, 0x concurrency, ID 0xF9A57DD5A41825CA at byte 802
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.00
# Time range: all events occurred at 2016-11-22 06:11:40
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count 50 1
# Exec time 76 2s 2s 2s 0 2s
# Lock time 0 0 0
# Rows sent 20 1 1 1 0 1
# Rows examine 0 0 0
# Query size 3 15 15 15 0 15
# String:
# Databases test
# Hosts 192.168.8.1
# Users mysql
# Query_time distribution
# 1us
# 10us
# 100us
# 1ms
# 10ms
# 100ms
# 1s #
# 10s+
# EXPLAIN /*!50100 PARTITIONS*/
select sleep(2)\G
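The EXPLAIN header marks where the sample statement begins, so you can paste that statement into the mysql client behind an EXPLAIN to inspect its execution plan (more informative for statements that actually touch tables than for this sleep() sample):

mysql> EXPLAIN select sleep(2)\G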
V. Usage examples
1. Analyze a slow query file directly:
pt-query-digest slow.log > slow_report.log
2. Analyze queries from the last 12 hours:
pt-query-digest --since=12h slow.log > slow_report2.log
3. Analyze queries within a specified time range:
pt-query-digest slow.log --since '2017-01-07 09:30:00' --until '2017-01-07 10:00:00' > slow_report3.log
4. Analyze only slow queries whose fingerprint begins with select:
pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log
5. Slow queries issued by a given user:
pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log
6. All slow queries that performed a full table scan or a full join:
pt-query-digest --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")' slow.log > slow_report6.log
7. Save the queries to the query_review table:
pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log
8. Save the queries to the query_history table:
pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_0001
pt-query-digest --user=root --password=abc123 --history h=localhost,D=test,t=query_history --create-history-table slow.log_0002
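Once populated, the review table can be queried directly. A minimal sketch, assuming the table was created by --create-review-table as test.query_review with its default columns (checksum, fingerprint, sample, first_seen, last_seen):

mysql> SELECT checksum, first_seen, last_seen, LEFT(sample, 60) AS sample FROM test.query_review ORDER BY last_seen DESC;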
9. Capture MySQL TCP traffic with tcpdump, then analyze it:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log
10. Analyze a binlog:
mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log
11. Analyze the general log:
pt-query-digest --type=genlog localhost.log > slow_report11.log
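Examples like 1 and 2 are easy to automate. A minimal cron sketch follows; the schedule, log path, and report path are assumptions, not from the original article (note that % must be escaped as \% in crontab entries):

# /etc/crontab entry: a daily report on the last 24 hours of slow queries, written at 05:00
0 5 * * * root pt-query-digest --since=24h /usr/local/mysql/data/slow.log > /var/log/slow_report_$(date +\%F).log 2>&1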
That covers how to install and use pt-query-digest with MySQL. Thanks for reading, and I hope the material proves useful.