
Example Analysis of the MySQL Slow Query Log


This article introduces MySQL slow query log analysis through examples. Many people have questions about working with the slow query log in day-to-day operation, so simple, practical methods have been collected and organized here. Hopefully it will help answer those doubts; please read on.

Slow query log

Open the slow query log

The slow query log, as the name implies, records queries that execute slowly.

Check whether the slow query log is enabled:

show variables like '%slow%';

To open the slow query log, modify MySQL's configuration file my.cnf (usually under the /etc directory), add the following three lines, and restart MySQL.

slow_query_log = ON
long_query_time = 2
slow_query_log_file = /usr/local/mysql/data/slow_query.log

The long_query_time threshold is specified in seconds. If you need more precise analysis, you can use a third-party tool such as pt-query-digest, described later.

Note: the MySQL version used here is 5.6. Different MySQL versions use different settings to enable the slow query log; for example, some versions before 5.6 use long_query_time and log-slow-queries. You can first run show variables like '%slow%'; in a client session to see how the current version is configured.

You can also open the slow query log at the terminal by setting the global variable:

set @@global.slow_query_log = ON;

Save slow query logs to a table

MySQL also supports saving slow query logs to the mysql.slow_log table. The destination is controlled by @@global.log_output; the default is FILE, and FILE and TABLE can be used at the same time.

set @@global.log_output = 'TABLE';
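For example, a minimal sketch of the claim above: listing both destinations writes the slow log to the file and to the mysql.slow_log table at the same time.

set @@global.log_output = 'FILE,TABLE';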

We can use the following statement to simulate a slow query:

select sleep(10);
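After the sleep query finishes, you can check that it was recorded. A minimal sketch, assuming log_output includes TABLE, is to read the most recent entries from mysql.slow_log:

select start_time, query_time, rows_sent, rows_examined, sql_text
from mysql.slow_log
order by start_time desc
limit 5;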

Slow query log analysis

1. You can use the mysqldumpslow tool that ships with MySQL. It is easy to use; pass --help to see the full usage.

# -s: sort order. c, t, l, r sort by count, query time, lock time, and rows returned;
#     at, al, ar sort by the corresponding average values
# -t: how many entries to show
# -g: only match statements containing this pattern (case-insensitive)
mysqldumpslow -s r -t 10 /slowquery.log # top 10 statements by rows returned
mysqldumpslow -s t -t 10 -g "left join" /slowquery.log # top 10 statements containing "left join", sorted by query time

2. You can also import the log into the mysql.slow_log table and analyze it with SQL statements; see the SQL sketch after this list.

3. Use third-party tools, which will be described below.
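As mentioned in item 2 above, here is a minimal SQL sketch for analyzing entries that have landed in mysql.slow_log (whether written there via log_output = 'TABLE' or imported from the file); it ranks recorded statements by total time spent:

select sql_text,
       count(*) as exec_count,
       sum(time_to_sec(query_time)) as total_seconds,
       max(rows_examined) as max_rows_examined
from mysql.slow_log
group by sql_text
order by total_seconds desc
limit 10;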

Percona Toolkit introduction

percona-toolkit is a collection of advanced command-line tools for performing MySQL and system tasks that are too complex or tedious to do by hand. These tasks include:

Check the consistency of master and slave data

Archive records efficiently

Find duplicate indexes

Summarize the server information

Analyze queries from logs and tcpdump

Collect important system information when something goes wrong with the system

Installation

It is very easy to install percona-toolkit. Download the .tar.gz package from the official website:

wget percona.com/get/percona-toolkit.tar.gz
tar -zxvf percona-toolkit-2.2.5.tar.gz

Then execute the following commands in turn:

perl Makefile.PL
make
make test
make install

By default the tools are installed into the /usr/local/bin directory. Run man percona-toolkit to see which tools have been installed.

Running a tool may fail with an error complaining about a missing .pm file. A .pm file is a Perl module; install the corresponding package to fix it:

yum install -y perl-Time-HiRes

If a "Error Downloading Packages" error occurs during installation, try yum clean all before installing. You may encounter similar problems using other tools in its Percona Toolkit. Just follow the prompts to install the appropriate perl package.

Percona Toolkit provides many useful tools; for complete usage details, see the official documentation. A few of the more useful ones are described below.

pt-query-digest

pt-query-digest can analyze queries from normal MySQL logs, slow query logs, and binary logs, and even from SHOW PROCESSLIST output and tcpdump captures of the MySQL protocol. If no file is specified, it reads from standard input (STDIN).

The simplest usage is as follows:

pt-query-digest slow.log

The output is divided into three parts:

1. Overall summary (Overall)

This section is a rough summary (similar to the summary LoadRunner produces) from which you can make a preliminary assessment of the current MySQL server's query performance. For each metric it shows the maximum (max), minimum (min), average (avg), 95th percentile (95%), median (median), and standard deviation (stddev). The metrics include query execution time (Exec time), time spent holding locks (Lock time), the number of rows the MySQL executor had to examine (Rows examine), the number of rows returned to the client (Rows sent), and the query size.

2. Summary information of the query (Profile)

This section provides a list of all the "important" queries (usually slow ones):

Each query has a Query ID, which is a hash of the query's fingerprint; pt-query-digest groups queries by this fingerprint. For example, the two queries below share the fingerprint select * from table1 where column1 = ?. The toolkit also includes a related tool, pt-fingerprint.

select * from table1 where column1 = 2;
select * from table1 where column1 = 3;
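To see a fingerprint directly, the related pt-fingerprint tool mentioned above can be run against a single query; a quick sketch (the table and column names are simply the ones from the example above):

pt-fingerprint --query "select * from table1 where column1 = 2"
# expected to print: select * from table1 where column1 = ?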

Rank: the rank of the statement within the whole analysis (by default ordered by total response time).

Response time: the total response time of the statement and its percentage of the overall response time.

Calls: the number of times the statement was executed.

R/Call: the average response time per execution.

V/M: the variance-to-mean ratio of the response time.

A final line aggregates the statistics of the remaining queries (two, in this example) whose share is too low to be worth listing separately.

3. Details

This section lists the details of each query in the Profile table:

It includes the same metrics as the Overall section, the distribution of the query's response time, and the reason the query made the list.

pt-query-digest also supports more advanced usage, which will not be covered in full here. For example, to find the slowest queries on a MySQL instance by polling its PROCESSLIST:

pt-query-digest --processlist h=host1

Analyze from tcpdump:

tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt

Save the queries from a slow log into a review table on another server, to be analyzed in detail later:

pt-query-digest --review h=host2 --no-report slow.log

You can also apply filter conditions; for details, see the official documentation: http://www.percona.com/doc/percona-toolkit/2.2/pt-query-digest.html
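For example, a couple of sketches of such conditions (the dates here are placeholders): --since and --until restrict the time range, and --filter applies a Perl expression to each event:

pt-query-digest --since '2014-06-01 00:00:00' --until '2014-06-02 00:00:00' slow.log
pt-query-digest --filter '$event->{fingerprint} =~ m/^select/' slow.log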

In addition, it can be combined with some third-party tools to generate reports; see: http://biancheng.dnbcw.info/mysql/433514.html

Suggestion: when the slow log is very large, it is best to move the log files to other machines for analysis.

pt-index-usage

This tool is mainly used to analyze the index usage of the query.

pt-index-usage slow_query.log -h localhost --password 123456

For detailed usage, check --help and compare with the official documentation; it will not be repeated here.

Note that MySQL must have a password set to use this tool, and at runtime it may report that it cannot find /var/lib/mysql/mysql.sock; simply symlink it from /tmp/mysql.sock.

The important thing to note is that pt-index-usage can only analyze slow query logs, so if you want to analyze the index usage of all queries you must set long_query_time to 0. Use this tool carefully: if you must run it against production logs, it is best to do the analysis in the early hours, and analyzing a large volume of logs is very CPU-intensive.

Overall, this tool is not especially recommended; other third-party tools such as mysqlidxchx, userstat and check-unused-keys can be considered for similar analysis. userstat, a patch contributed by Google, is the one most often recommended online.

Oracle can save execution plans to performance views, which allows more flexible analysis, but I have not yet found a similar mechanism in MySQL.

pt-upgrade

This tool checks whether SQL run on a new MySQL version returns the same results as on the old version; the best use case is during a database upgrade or migration.

pt-upgrade h=host1 h=host2 slow.log

pt-query-advisor

A static query analysis tool. It can parse a query log, analyze query patterns, report every query with potential problems, and give reasonably detailed suggestions. The tool appears to have been removed in version 2.2 of the toolkit, possibly because of its performance impact.

Summary: it is best not to run the tools above directly against production. Use them as an online aid or for offline analysis after a failure; they can also be used in performance testing.

SHOW PROFILE

SHOW PROFILE was contributed to the MySQL community by Jeremy Cole, a senior architect at Google; it shows the resources MySQL uses to execute statements. It is off by default and is enabled by executing the following statement:

set profiling = 1; # takes effect only in the current session

By executing a simple SHOW PROFILES, you can see the execution time of every query run after profiling was enabled.

Execute SHOW PROFILE [TYPE] FOR QUERY Query_ID to see detailed performance metrics for each step MySQL took to execute a given query:
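A minimal end-to-end sketch (assuming the sleep query is the first statement profiled in the session, so its Query_ID is 1):

set profiling = 1;
select sleep(1);
show profiles; -- lists Query_ID, Duration and the query text
show profile cpu, block io for query 1; -- per-step CPU and block IO details for query 1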

If no FOR QUERY is specified, the details of the most recent query are displayed. TYPE is optional and has the following options:

ALL displays all performance information

BLOCK IO displays the number of block IO operations

CONTEXT SWITCHES displays the number of context switches, voluntary and involuntary

CPU displays user CPU time and system CPU time

IPC shows the number of messages sent and received

MEMORY [not implemented yet]

PAGE FAULTS displays the number of page faults

SOURCE displays the function name and location in the source code

SWAPS displays the number of swaps

MySQL goes through many steps when executing a statement; they will not be listed here, and you can look them up when needed. The Sending data step deserves special attention: it gives the impression that MySQL is spending time sending data to the client, but that is not the whole story. This step covers copying data between the various storage layers inside MySQL, such as reading it from disk.

This concludes the study of MySQL slow query log analysis by example. Hopefully it has answered your questions; combining theory with practice is the best way to learn, so go and try it yourself. To learn more, please keep following the site for more practical articles.
