What are the advantages of these super powerful slow SQL troubleshooting tools?


This article introduces the advantages of several super powerful slow SQL troubleshooting tools: how to enable and configure the slow query log, how to analyze it with mysqldumpslow and pt-query-digest, and how to generate test data in batches with stored procedures. I hope it helps clear up any doubts about troubleshooting slow SQL. Let's get started!

Enable the slow query log

We often run into slow queries in projects. When that happens, we usually enable the slow query log, analyze it to find the slow SQL, and then use EXPLAIN to examine those statements.
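For reference, once a slow statement has been located, it can be inspected like this (a hypothetical query against the kf_user_info table created later in this article):

EXPLAIN SELECT * FROM kf_user_info WHERE gid = 1000;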

System variables

The MySQL system variables related to slow queries are mainly slow_query_log (whether the slow log is enabled), slow_query_log_file (the log file path), long_query_time (the threshold in seconds above which a statement is recorded), and log_output (whether the log is written to a file, a table, or both).

Execute the following statement to see whether the slow log is enabled: ON means it is on, OFF means it is off.

show variables like 'slow_query_log%';
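On a fresh installation the result typically looks something like this (illustrative output; the file path depends on your environment):

+---------------------+-----------------------------------+
| Variable_name       | Value                             |
+---------------------+-----------------------------------+
| slow_query_log      | OFF                               |
| slow_query_log_file | /var/lib/mysql/localhost-slow.log |
+---------------------+-----------------------------------+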

You can see that mine is not enabled. The slow query log can be turned on in either of the following two ways.

Modify the configuration file

Modify the configuration file my.ini by adding the following parameters to the [mysqld] section:

[mysqld]
log_output='FILE,TABLE'
slow_query_log='ON'
long_query_time=0.001

"MySQL needs to be restarted to take effect. The command is service mysqld restart"

Set global variables

Execute the following statements on the command line to enable the slow query log, set the threshold to 0.001s, and record the log both to the file and to the mysql.slow_log table:

set global slow_query_log = on;
set global log_output = 'FILE,TABLE';
set global long_query_time = 0.001;

"if you want to get the configuration in the configuration file permanently, otherwise these configurations will become invalid after the database is restarted"

Analyze the slow query log

Because the MySQL slow query log is just a flat, chronological record with no aggregation or statistics, we need tools to analyze it.

mysqldumpslow

MySQL ships with mysqldumpslow to help us analyze the slow query log.

Common usage

# show the 10 most frequent slow queries
mysqldumpslow -s c -t 10 /var/run/mysqld/mysqld-slow.log
# show the 3 slowest queries by query time
mysqldumpslow -s t -t 3 /var/run/mysqld/mysqld-slow.log
# show the first 10 statements containing "left join", sorted by time
mysqldumpslow -s t -t 10 -g "left join" /database/mysql/slow-log
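For reference, mysqldumpslow groups similar statements together and abstracts literal values (numbers become N, strings become 'S'); a summarized entry looks roughly like this (illustrative values):

Count: 12  Time=0.35s (4s)  Lock=0.00s (0s)  Rows=1000.0 (12000), root[root]@localhost
  SELECT * FROM kf_user_info WHERE gid = N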

pt-query-digest

pt-query-digest is one of the tools I use most, and it is very powerful. It can analyze binlogs, general logs, slow logs, SHOW PROCESSLIST output, and MySQL protocol data captured with tcpdump. pt-query-digest is a Perl script; just download it and grant execute permission to run it.

Download and grant execute permission

wget www.percona.com/get/pt-query-digest
chmod u+x pt-query-digest
ln -s /opt/soft/pt-query-digest /usr/bin/pt-query-digest

Introduction to usage

// view usage help
pt-query-digest --help
// usage format
pt-query-digest [OPTIONS] [FILES] [DSN]

"commonly used OPTIONS"

--create-review-table  When the --review parameter is used to output analysis results to a table, create the table automatically if it does not exist.
--create-history-table  When the --history parameter is used to output analysis results to a table, create the table automatically if it does not exist.
--filter  Match and filter the input slow queries against the specified string before analyzing them.
--limit  Limit the output to a number or percentage of results. The default is 20, meaning the 20 slowest statements are output. If set to 50%, statements are sorted by total response time from largest to smallest and output until the cumulative share reaches 50%.
--host  MySQL server address.
--user  MySQL user name.
--password  MySQL user password.
--history  Save the analysis results to a table; the results are more detailed. The next time --history is used, the same statement is recorded again if its query time interval differs from what is already in the history table, so you can compare the historical changes of a given type of query by querying on the same CHECKSUM.
--review  Save the analysis results to a table. This analysis only parameterizes the query conditions and is relatively simple: one record per type of query. The next time --review is used, a statement that has already been parsed is not recorded again.
--output  The output type of the analysis results. Values can be report (standard analysis report), slowlog (MySQL slow log), json, or json-anon. report is generally used because it is the easiest to read.
--since  When to start the analysis from. The value is a string: either a time point in "yyyy-mm-dd [hh:mm:ss]" format or a simple duration suffixed with s (seconds), h (hours), m (minutes), or d (days). For example, 12h means statistics start from 12 hours ago.
--until  End time. Combined with --since, it lets you analyze slow queries over a period of time.

"commonly used DSN"

A  Specifies the character set
D  Specifies the database to connect to
P  Port of the database connection
S  Socket file to connect through
h  Hostname of the database
p  Password of the database connection
t  Which table to store data in when using --review or --history
u  User name of the database connection

DSN options are written in key=value form; multiple options are separated by commas.
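For example, following the DSN keys above, a --review DSN that writes results to a specific table might look like this (hypothetical host, database, and table names; details can vary between pt-query-digest versions):

pt-query-digest --user=root --password=root123 --review h=localhost,D=test,t=query_review --create-review-table slow.log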

"use example"

# show the slowest queries in slow.log
pt-query-digest slow.log
# analyze queries from the last 12 hours
pt-query-digest --since=12h slow.log
# analyze queries within a specified time range
pt-query-digest slow.log --since '2020-06-20 00:00:00' --until '2020-06-25 00:00:00'
# save the queries in slow.log to the review table
pt-query-digest --user=root --password=root123 --review h=localhost --create-review-table slow.log
# connect to localhost, read the processlist, and output in slowlog format
pt-query-digest --processlist h=localhost --user=root --password=root123 --interval=0.01 --output slowlog
# use tcpdump to capture MySQL protocol data, then generate a report of the slowest queries
# tcpdump reference: https://blog.csdn.net/chinaltx/article/details/87469933
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt
# analyze a binlog
mysqlbinlog mysql-bin.000093 > mysql-bin000093.sql
pt-query-digest --type=binlog mysql-bin000093.sql
# analyze a general log
pt-query-digest --type=genlog localhost.log

"write stored procedures to create data in batches"

Whether for performance testing or other needs in day-to-day work, we often have to create large amounts of data. Inserting it manually is not realistic, so we turn to stored procedures.

CREATE TABLE `kf_user_info` (
  `id` int(11) NOT NULL COMMENT 'user id',
  `gid` int(11) NOT NULL COMMENT 'customer service group id',
  `name` varchar(25) NOT NULL COMMENT 'customer service name'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='customer information table';

How do you define a stored procedure?

CREATE PROCEDURE procedure_name([parameter list])
BEGIN
    statements to be executed
END

For example, insert 100,000 rows with ids from 1 to 100000.

Execute with Navicat

-- delete the previously defined procedure
DROP PROCEDURE IF EXISTS create_kf;
-- start the definition
CREATE PROCEDURE create_kf(IN loop_times INT)
BEGIN
    DECLARE var INT;
    SET var = 1;
    -- loop from 1 to loop_times inclusive, so loop_times rows are inserted
    WHILE var <= loop_times DO
        INSERT INTO kf_user_info (`id`, `gid`, `name`) VALUES (var, 1000, var);
        SET var = var + 1;
    END WHILE;
END;
-- call it
CALL create_kf(100000);

Three parameter types of stored procedures
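The three types are IN (the caller passes a value in), OUT (the procedure hands a value back to the caller), and INOUT (both). A small illustration, using a hypothetical procedure that is not part of the example above:

DELIMITER //
CREATE PROCEDURE demo_params(IN in_val INT, OUT out_val INT, INOUT both_val INT)
BEGIN
    SET out_val = in_val * 2;          -- OUT: result handed back to the caller
    SET both_val = both_val + in_val;  -- INOUT: read and then updated
END //
DELIMITER ;

-- usage
SET @x = 10;
CALL demo_params(5, @out, @x);
SELECT @out, @x;  -- returns 10 and 15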

Execute with MySQL

You have to define a new terminator with DELIMITER. By default SQL uses the semicolon (;) as the terminator, so ending each SQL statement inside the stored procedure with (;) would tell MySQL that the statement can be executed right away. But a stored procedure is a whole: we don't want its statements executed one by one, we want the procedure created as a single unit. So we define a new terminator with DELIMITER; the new terminator can be (//) or ($$).

So the above code should be changed to the following:

DELIMITER //
CREATE PROCEDURE create_kf_kfGroup(IN loop_times INT)
BEGIN
    DECLARE var INT;
    SET var = 1;
    WHILE var <= loop_times DO
        INSERT INTO kf_user_info (`id`, `gid`, `name`) VALUES (var, 1000, var);
        SET var = var + 1;
    END WHILE;
END //
DELIMITER ;
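Once the delimiter is restored, the procedure is called the same way as in the Navicat example, using the name defined above:

CALL create_kf_kfGroup(100000);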
