Detailed introduction of mysql slow query function

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article gives a detailed introduction to MySQL's slow query feature. The explanations are kept simple and clear to make them easy to learn, so just follow along step by step.

Enabling the MySQL slow query log

1. View the current slow query settings

# Check the slow query threshold. The default is 10s; reducing it to 1s or less is recommended.
mysql> show variables like "long_query_time";
+-----------------+----------+
| Variable_name   | Value    |
+-----------------+----------+
| long_query_time | 1.000000 |
+-----------------+----------+
1 row in set (0.00 sec)

# View the slow query configuration
mysql> show variables like "slow%";
+-----------------------------------+----------------------+
| Variable_name                     | Value                |
+-----------------------------------+----------------------+
| log_slow_admin_statements         | OFF                  |
| log_slow_filter                   |                      |
| log_slow_rate_limit               | 1                    |
| log_slow_rate_type                | session              |
| log_slow_slave_statements         | OFF                  |
| log_slow_sp_statements            | ON                   |
| log_slow_verbosity                |                      |
| max_slowlog_files                 | 0                    |
| max_slowlog_size                  | 0                    |
| slow_launch_time                  | 2                    |
| slow_query_log                    | ON                   |
| slow_query_log_always_write_time  | 10.000000            |
| slow_query_log_file               | /tmp/slow_querys.log |
| slow_query_log_use_global_control |                      |
+-----------------------------------+----------------------+
14 rows in set (0.01 sec)

The value of slow_query_log is ON, which means the feature is already enabled.

2. How to enable the slow query feature

Method 1: find MySQL's configuration file my.cnf on the server and append the settings to the [mysqld] section. The advantage is that the settings persist permanently; the downside is that you need to restart the mysql process.

vim my.cnf

[mysqld]
# enable the slow query log
slow_query_log = ON
# path of the slow query log file
slow_query_log_file = /tmp/slow_querys.log
# threshold in seconds beyond which a query counts as slow; 1 second here
# (values below 1 second, e.g. 0.1, are allowed since 5.6)
long_query_time = 1
# whether to log queries that do not use an index; disabled by default
# because it can generate a lot of log volume; can also be changed dynamically
# log-queries-not-using-indexes
# whether to also log slow administrative statements such as OPTIMIZE TABLE
# and ALTER TABLE; disabled by default for the same reason; can also be changed dynamically
# log-slow-admin-statements

Then restart the MySQL server. You can watch the slow log with the following command:

tail -f /tmp/slow_querys.log

Method 2: modify MySQL's global variables. The advantage is that you do not have to restart the MySQL server, only log in to mysql and run a few SQL statements; the downside is that the settings are lost after a restart.

# enable the slow query log: 1 = on, 0 = off
mysql> set global slow_query_log=1;
# threshold in seconds beyond which a query counts as slow; 1 second here
# (values below 1 second, e.g. 0.1, are allowed since 5.6)
mysql> set global long_query_time=1;
# path of the slow query log file
mysql> set global slow_query_log_file='/tmp/slow_querys.log';
# to disable the feature: set global slow_query_log=0;

Then check the settings with:

mysql> show variables like 'long%';
mysql> show variables like 'slow%';

# to write slow query records to a table instead of a file:
# set global log_output='TABLE';

Of course, you can also combine the two methods: the global variables take effect without restarting the mysql process, and the my.cnf entries mean you need not worry about the settings disappearing after a restart. The effect is the same either way.

Pay special attention to the value of long_query_time. Since 5.6 it can be set to sub-second values (even below 0.1s), so how fine-grained the recording is depends on your own needs. For a fairly large, busy database, a very low threshold such as 0.1s becomes a bit unreasonable, because far too many queries would be logged.

Format of slow query log records

You can inspect MySQL's slow log directly. For example, view its contents with tail -f slow_query.log:

tail -f slow_query.log
# Time: 110107 16:22:11
# User@Host: root[root] @ localhost []
# Query_time: 9.869362  Lock_time: 0.000035  Rows_sent: 1  Rows_examined: 6261774
SET timestamp=1294388531;
select count(*) from ep_friends;

Field meanings:

The first line: the time at which the SQL query was executed.

The second line: connection information, i.e. the user and client IP that executed the query.

The third line: several useful statistics, parsed as follows:

Query_time: the execution time of this SQL statement; the larger the value, the slower the query.

Lock_time: time spent waiting for table locks at the MySQL server level (not in the storage engine).

Rows_sent: the number of rows returned by the query.

Rows_examined: the number of rows the server examined; naturally, the more rows examined, the more time is spent.

The fourth line: sets the timestamp. It has no practical meaning beyond corresponding to the execution time on the first line.

The fifth line and all subsequent lines (up to the next "# Time:"): the SQL statement that was executed, which may span many lines.
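To make the field descriptions above concrete, here is a minimal Python sketch that pulls those fields out of one slow-log entry. This is not MySQL's or Percona's parser; the regex and field handling are my own simplification, shown only to illustrate the record layout.

```python
import re

# The sample entry from the text above.
ENTRY = """\
# Time: 110107 16:22:11
# User@Host: root[root] @ localhost []
# Query_time: 9.869362  Lock_time: 0.000035  Rows_sent: 1  Rows_examined: 6261774
SET timestamp=1294388531;
select count(*) from ep_friends;
"""

def parse_entry(text):
    """Return a dict with the stats-line fields plus the SQL text."""
    stats = {}
    sql_lines = []
    for line in text.splitlines():
        if line.startswith("#"):
            # e.g. "# Query_time: 9.869362  Lock_time: 0.000035 ..."
            for key, value in re.findall(r"(\w+):\s+([\d.]+)", line):
                if key in ("Query_time", "Lock_time"):
                    stats[key] = float(value)
                elif key in ("Rows_sent", "Rows_examined"):
                    stats[key] = int(value)
        elif not line.startswith("SET timestamp"):
            # everything else (possibly many lines) is the SQL statement
            sql_lines.append(line)
    stats["sql"] = " ".join(sql_lines).strip()
    return stats

if __name__ == "__main__":
    e = parse_entry(ENTRY)
    print(e["Query_time"], e["Rows_examined"])
    print(e["sql"])
```

Running this on the sample record yields the Query_time, Lock_time, Rows_sent, and Rows_examined values discussed above, plus the reassembled SQL text.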

Software for analyzing slow query

Although each slow query record is clear enough, the log usually contains not just one SQL statement but a great many of them. Without aggregation it would take forever to read through, so at this point we need statistical analysis.

Method 1: use the mysqldumpslow command that ships with MySQL. For example:

mysqldumpslow -s c -t 10 /tmp/slow-log

This outputs the 10 statements that appear most often. The output format is not much different from the raw slow query records shown above, so we will not analyze it in detail here.

Parameter resolution:

-s: how to sort the output. Sub-options:

c, t, l, r: sort by record count, query time, lock time, and rows returned respectively.

at, al, ar: sort by the average of the corresponding value (average query time, lock time, rows returned). Use -r to reverse the sort order.

-t: return only the first N entries; -t 10 returns the top 10.

-g: only consider statements matching the given pattern (like grep); case-insensitive. For example:

/path/mysqldumpslow -s r -t 10 /tmp/slow-log
(gets the 10 queries that return the largest record sets)

/path/mysqldumpslow -s t -t 10 -g "left join" /tmp/slow-log
(gets the first 10 queries sorted by time that contain a left join)
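The grouping mysqldumpslow performs, abstracting numeric literals to N and string literals to 'S' so that queries of the same shape are counted together, can be sketched as follows. This is a hypothetical mini version written for illustration, not the tool itself; the regexes are my own.

```python
import re
from collections import Counter

def fingerprint(sql):
    """Abstract literals so identical query shapes compare equal."""
    sql = re.sub(r"'[^']*'", "'S'", sql)   # string literals -> 'S'
    sql = re.sub(r"\b\d+\b", "N", sql)     # numeric literals -> N
    return re.sub(r"\s+", " ", sql).strip().lower()

queries = [
    "SELECT * FROM t WHERE id = 1",
    "SELECT * FROM t WHERE id = 42",
    "SELECT * FROM t WHERE name = 'bob'",
]

counts = Counter(fingerprint(q) for q in queries)
# like: mysqldumpslow -s c -t 10  (sort by count, show the top 10)
for shape, n in counts.most_common(10):
    print(n, shape)
```

The two id lookups collapse into one shape with a count of 2, which is exactly why mysqldumpslow's output reads "N" and "'S'" where the original queries had concrete values.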

Method 2: use pt-query-digest from Percona Toolkit (the pt tools) for the statistical analysis. This is a Perl script produced by Percona, and it only exists once the pt toolkit is installed, so install the toolkit first if you are interested. It analyzes the slow query file directly; run it as follows:

pt-query-digest slow_querys.log > t.txt

Because the log may still contain a lot of SQL, the output can be a strain to read, so it is recommended to redirect it to a file. The output is divided into three parts.

Part I: overall statistical results

# 580ms user time, 0 system time, 35.46M rss, 100.01M vsz
# Current date: Wed Jul 19 14:32:40 2017
# Hostname: yztserver1
# Files: slow_querys.log
# Overall: 2.63k total, 36 unique, 0.03 QPS, 0.03x concurrency
# Time range: 2017-07-18T03:17:17 to 2017-07-19T06:30:18
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time          3145s      1s      5s      1s      2s   258ms      1s
# Lock time          677ms       0    64ms   257us   260us     2ms   144us
# Rows sent          8.44k       0   5.50k    3.29    0.99  108.92    0.99
# Rows examine       1.06G       0   2.12M 421.02k 619.64k 155.33k 419.40k
# Rows affecte           0       0       0       0       0       0       0
# Bytes sent         9.00M      11   6.24M   3.51k  13.78k 119.76k   65.89
# Query size       735.85k       6   2.19k  286.72  463.90  128.05  246.02

This records the overall statistics for all slow queries in this log file; usually a cursory glance is enough:

Overall: how many queries this file contains. In the example above there are 2.63k queries in total, that is, 2.63k slow queries.

Time range: the time span over which the queries were executed.

Unique: the number of unique queries, i.e. the number of distinct queries after parameterizing the query conditions. Here it is 36; in other words, the 2.63k slow queries boil down to 36 kinds.

Attribute: the statistics columns, explained as follows:

total: sum; min: minimum; max: maximum; avg: average.

95%: the value at the 95% position when all values are sorted ascending. This number is generally the most useful for reference.

median: the median, i.e. the value in the middle when all values are sorted ascending.

The rest are self-explanatory.
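The avg / 95% / median columns above can be reproduced from a list of per-query values. Here is a small Python sketch; the index arithmetic for the 95% position is my own approximation of the idea, not pt-query-digest's exact algorithm.

```python
import math
import statistics

def attribute_stats(values):
    """Compute the columns of the Attribute table for one metric."""
    ordered = sorted(values)
    n = len(ordered)
    # value at the 95% position when sorted ascending
    idx95 = min(n - 1, math.ceil(0.95 * n) - 1)
    return {
        "total": sum(ordered),
        "min": ordered[0],
        "max": ordered[-1],
        "avg": sum(ordered) / n,
        "95%": ordered[idx95],
        "stddev": statistics.pstdev(ordered),
        "median": statistics.median(ordered),
    }

# hypothetical execution times in seconds
times = [1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0, 3.0, 3.0, 5.0]
s = attribute_stats(times)
print(s["95%"], s["median"])
```

Note how one 5-second outlier drags the max and 95% up while the median stays at 2 seconds, which is why the 95% column is usually the most useful reference.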

Part II: query grouping statistical results

# Profile
# Rank Query ID           Response time    Calls R/Call  V/M  Item
# ==== ================== ================ ===== ====== ===== ======
#    1 0x8965CC929FB1C7B2 2080.0546 66.1%   1816 1.1454  0.03 SELECT
#    2 0x9C57D04CEA1970B4  228.4754  7.3%    131 1.7441  0.10 SELECT
#    3 0x94B4D7AA44982464  138.5964  4.4%    112 1.2375       SELECT
#    4 0x6BD09392D1D0B5D7   84.1681  2.7%     70 1.2024  0.03 SELECT
#    5 0x1E9926677DBA3657   81.2260  2.6%     68 1.1945  0.03 SELECT
#    6 0xBBCE31227D8C6A93   69.6594  2.2%     56 1.2439       SELECT
#    7 0x9A691CB1A14640F4   60.4517  1.9%     51 1.1853  0.04 SELECT
#    8 0xDA99A20C8BE81B9C   55.7751  1.8%     46 1.2125       SELECT
#    9 0x1F53AC684A365326   55.6378  1.8%     45 1.2364  0.03 SELECT
#   10 0x664E0C18531443A5   38.6894  1.2%     31 1.2480       SELECT way_bill_main
#   11 0xF591153EC390D8CA   32.5370  1.0%     28 1.1620  0.01 SELECT way_bill_main
#   12 0xA3A82D5207F1BC2E   24.6582  0.8%     20 1.2329  0.06 SELECT way_bill_main
#   13 0xFCED276145371CE4   22.2543  0.7%     18 1.2363       SELECT way_bill_main
#   14 0x4895DF4252D5A600   21.3184  0.7%     17 1.2540  0.07 SELECT way_bill_main
#   16 0xA4DD833DF8C96D04   14.6107  0.5%     13 1.1239  0.01 SELECT way_bill_main
#   17 0x0426A3C3538CBBA8   13.9799  0.4%     13 1.0754  0.00 SELECT way_bill_main
#   18 0x2C52F334EF3D8D2D   12.5960  0.4%     10 1.2596  0.03 SELECT way_bill_main
# MISC 0xMISC             110.2030  3.5%     83 1.3277

In this part, the queries are parameterized and grouped, and the execution of each kind of query is analyzed. The results are sorted by total execution time in descending order. (Some names in the output have been altered for display here.)

Response time: the total response time of this kind of query, and its percentage of the overall total time in this analysis.

Calls: the number of executions, i.e. the total number of queries of this kind in this analysis.

R/Call: the average response time per execution.

Item: the query object.

Part III: detailed statistical results of each query, this is one of them

# Query 1: 0.02 QPS, 0.02x concurrency, ID 0x8965CC929FB1C7B2 at byte 86860
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.09
# Time range: 2017-07-18T03:17:56 to 2017-07-19T06:30:18
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         69    1816
# Exec time     66   2080s      1s      4s      1s      1s   189ms      1s
# Lock time     51   349ms    67us    19ms   192us   194us   760us   144us
# Rows sent     21   1.77k       1       1       1       1       0       1
# Rows examine  71 771.37M 262.54k 440.03k 434.96k 419.40k  24.34k 419.40k
# Rows affecte   0       0       0       0       0       0       0       0
# Bytes sent     1 120.49k      65      68   67.94   65.89    0.35   65.89
# Query size    60 443.31k     248     250  249.97  246.02    0.00  246.02
# String:
# Databases    ytttt
# Hosts        10.25.28.2
# Last errno   0
# Users        gztttttt
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#    1s  #
#  10s+
# Tables
#    SHOW TABLE STATUS FROM `ytttt` LIKE 'way_bill_main'\G
#    SHOW CREATE TABLE `ytttt`.`way_bill_main`\G
#    SHOW TABLE STATUS FROM `ytt` LIKE 'scheduler_task'\G
#    SHOW CREATE TABLE `ytttt`.`uploer_task`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT COUNT(1) FROM 1 as w inner join ... (the rest of the statement is omitted here)

The upper part of this section is similar in form to Part I, but the statistics cover only this particular query. The lower part parses as follows:

Databases: library name

Users: the users that executed this query and their execution counts (as percentages). Here there is only one user, because I authorize one dedicated user per database.

Query_time distribution: the distribution of query times; the bar length reflects the percentage of queries in each interval. In this case, almost all fall in the 1s bucket.

Tables: the tables involved in the query

Explain: a sample statement, i.e. the text of the SQL itself.

The rest of the information is generally similar to this, except that it displays different sql information, all of which belong to this third part.
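The Query_time distribution shown above groups times into logarithmic base-10 buckets from 1us up to 10s+. A Python sketch of that bucketing follows; the exact bucket edges and the extra "<1us" label are my assumption about the layout, made for illustration.

```python
import bisect

# upper edges of the buckets, in seconds
EDGES  = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]
LABELS = ["<1us", "1us", "10us", "100us", "1ms", "10ms", "100ms", "1s", "10s+"]

def bucket(seconds):
    """Return the label of the logarithmic bucket a query time falls into."""
    return LABELS[bisect.bisect_right(EDGES, seconds)]

def distribution(times):
    """Count queries per bucket, like the Query_time distribution section."""
    counts = {label: 0 for label in LABELS}
    for t in times:
        counts[bucket(t)] += 1
    return counts

# hypothetical query times in seconds
times = [1.2, 1.1, 0.9, 2.5, 0.0005]
d = distribution(times)
for label in LABELS:
    if d[label]:
        print(f"{label:>6} {'#' * d[label]}")
```

With these sample times, three queries land in the 1s bucket and one each in 100ms and 100us, mirroring the kind of concentration seen in the report above.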

-

pt-query-digest parameter description:

--create-review-table: when using --review to write the analysis results to a table, create the table automatically if it does not exist.

--create-history-table: when using --history to write the analysis results to a table, create the table automatically if it does not exist.

--filter: match and filter the input slow log by the specified string before analyzing it.

--limit: limit the output to a percentage or a number of results. The default is 20, i.e. output the 20 slowest statements. If set to 50%, sort by total response time in descending order and output until the cumulative total reaches the 50% mark.

--host: MySQL server address.

--user: MySQL username.

--password: MySQL user password.

--history: save the analysis results to a table, in fairly fine detail. The next time --history is used, if the same statement exists and the query's time interval differs from the one in the history table, it is recorded as a new row, so you can compare the historical evolution of a given kind of query by querying the same CHECKSUM.

--review: save the analysis results to a table. This analysis only parameterizes the query conditions; it is relatively simple, one row per kind of query. The next time --review is used, if the same statement analysis already exists, it is not recorded again.

--output: the output format of the analysis results. Possible values are report (standard analysis report), slowlog (MySQL slow log format), json, and json-anon. report is generally used for readability.

--since: the point from which to start analyzing. The value is a string: either a specific point in time in "yyyy-mm-dd [hh:mm:ss]" format, or a simple duration suffixed with s (seconds), h (hours), m (minutes), or d (days). For example, 12h means statistics start from 12 hours ago.

--until: the end time. Combined with --since, you can analyze slow queries over a given period.

Examples of other commands:

1. Analysis of queries in the last 12 hours:

pt-query-digest --since=12h slow.log > slow_report2.log

2. Analyze queries within a specified time range:

pt-query-digest slow.log --since '2014-04-17 09:30:00' --until '2014-04-17 10:30:00' > slow_report3.log

3. Analyze only slow queries whose statement is a SELECT:

pt-query-digest --filter '$event->{fingerprint} =~ m/^select/i' slow.log > slow_report4.log

4. Slow queries for a certain user:

pt-query-digest --filter '($event->{user} || "") =~ m/^root/i' slow.log > slow_report5.log

5. Find all slow queries that do full table scans or full joins:

pt-query-digest --filter '(($event->{Full_scan} || "") eq "yes") || (($event->{Full_join} || "") eq "yes")' slow.log > slow_report6.log

6. Save the queries to the query_review table:

pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_review --create-review-table slow.log

7. Save the queries to the query_history table:

pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_history --create-review-table slow.log_20140401

pt-query-digest --user=root --password=abc123 --review h=localhost,D=test,t=query_history --create-review-table slow.log_20140402

8. Grab the tcp protocol data of mysql through tcpdump, and then analyze

tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt

pt-query-digest --type tcpdump mysql.tcp.txt > slow_report9.log

9. Analyze binlog

mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql

pt-query-digest --type=binlog mysql-bin000093.sql > slow_report10.log

10. Analyze general log

pt-query-digest --type=genlog localhost.log > slow_report11.log

-

Personal opinion:

My personal take: although pt-query-digest produces a lot of information, the genuinely useful part of its output is not necessarily much better than mysqldumpslow's; it can be dazzling, and it requires installing extra tools. Still, it is handy for handing the problem to developers, since it shows clearly how inefficient a statement is and how many times a given SQL ran slowly. Which tool to use in practice is a matter of opinion; it is up to you.

Additional instructions:

1. In some cases, MySQL's slow query records may fail to serve their purpose.

This is due to MySQL's recording mechanism: MySQL only records queries that are slow in the engine phase. Network delays (e.g. across data centers), I/O transfer delays (e.g. heavy reads and writes), client-side delays (e.g. an overloaded JDBC pool), and certain internal resource lock waits can all increase query latency, yet may not necessarily be recorded in the slow log.

So a strange situation can easily arise: the query is genuinely slow, yet MySQL's slow log has no record of it. At that point you need to think more broadly.

2. If the slow log still cannot pin down the problem, it is recommended to enable the general query log (general_log) to trace the SQL.

The operation is roughly similar to the above. First check the current status:

show variables like 'general%';

You can add the following in my.cnf:

general_log = 1                           # 1 = on, 0 = off
general_log_file = /log/mysql_query.log   # log file path

You can also change it by setting a global variable:

set global general_log=1;   -- 1 = on, 0 = off

Thank you for reading. That concludes this detailed introduction to MySQL's slow query feature. After studying this article you should have a deeper understanding of it; the specifics still need to be verified in practice.
