2025-01-15 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 05/31 Report
This article explains in detail how to handle a slow query alarm in MySQL. The editor finds it quite practical and shares it here as a reference; I hope you get something out of it.
While doing a routine check after the holiday, I noticed an inconspicuous alarm. Its content was roughly:
MySQL has more than the configured number of slow queries per second on xxxx.
The Zabbix monitoring data showed about 10 slow queries per second. At that rate, this database would seem to have a serious problem.
So my first guess was that the impact came from SQL statements doing full table scans.
This database serves a core business. Its load has never been high, it has never triggered any IO, CPU, memory, or swap alarms, and it has been running steadily, so this was the first point of doubt.
Because the instance is so stable, routine work on it is limited to basic backups, space management, and daily data maintenance, and I had not been responsible for it for long, so I rarely looked any deeper. When I went to the corresponding data directory, I found a problem: the slow log file was nearly 60GB.
-rw-r--r-- 1 root root 102M Feb 4 17:14 query.log
-rw-rw---- 1 mysql mysql 58G Feb 17 14:58 slow.log
The second point of doubt: a slow log this large implies an equally large performance problem.
Before anything else, let's see how long such a huge log file took to accumulate.
# head -10 slow.log
# Time: 131114 13:41:59
# User@Host: app_new_bill[app_new_bill] @ xxxx_2.107 [xxxx]
# Thread_id: 131044 Schema: mobile_billing Last_errno: 0 Killed: 0
# Query_time: 0.000648 Lock_time: 0.000129 Rows_sent: 28 Rows_examined: 58 Rows_affected: 0 Rows_read: 28
# Bytes_sent: 4235 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# InnoDB_trx_id: 1718382
SET timestamp=1384407719
select app0_.id as id1_, app0_.appname as appname1_, app0_.appkey as appkey1_, app0_.appsecret as appsecret1_, app0_.iplist as iplist1_, app0_.isshow as isshow1_, app0_.flag as flag1_, app0_.test_version as test8_1_, app0_.create_date as create9_1_, app0_.game_type as game10_1_, app0_.callback_url as callback11_1_, app0_.iappay_id as iappay12_1_, app0_.isactivate as isactivate1_ from test_app app0_ where app0_.isshow=1 order by app0_.create_date desc
# Time: 131114 13:42:01
# User@Host: app_new_bill[app_new_bill] @ xxxx_2.107 [xxxx]
So this log had been accumulating since 2013, growing to its current size over the past few years.
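To confirm how long a log like this has been accumulating without paging through tens of gigabytes, a quick sketch can read just the head and tail for the `# Time:` headers shown above (the function name and file path here are illustrative, not from the original article):

```shell
# Summarize a MySQL slow log: first/last "# Time:" stamp plus an entry count.
# head/tail keep the timestamp lookups cheap even on a multi-GB file.
slowlog_summary() {
  log="$1"
  first=$(head -n 1000 "$log" | grep -m1 '^# Time:')
  last=$(tail -n 1000 "$log" | grep '^# Time:' | tail -n 1)
  count=$(grep -c '^# Time:' "$log")   # full scan: slow on 60GB, but exact
  printf 'first: %s\nlast:  %s\nentries: %s\n' "$first" "$last" "$count"
}
```

Usage would be something like `slowlog_summary /var/lib/mysql/slow.log` (path is an example).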
Of course, we need to switch to a new log file right away and keep the old one as a backup. Rename the log with mv, then run mysqladmin flush-logs to refresh the logs so that new entries are written to a fresh slow.log.
The situation after switching is as follows:
-rw-rw---- 1 mysql mysql 16195105 Feb 17 15:54 slow.log
-rw-rw---- 1 mysql mysql 61757873052 Feb 17 15:02 slow.log.bak
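The rotation step above can be sketched as a small function (a sketch only, assuming shell access on the database host and mysqladmin credentials already configured, e.g. in ~/.my.cnf; the function name is made up for illustration):

```shell
# Rotate a huge slow log: rename it, then ask mysqld to reopen its log
# files so new entries go to a fresh slow.log at the original path.
rotate_slow_log() {
  log="$1"
  mv "$log" "$log.bak" || return 1
  # Without a flush, mysqld keeps writing to the renamed file via its
  # still-open file descriptor.
  if command -v mysqladmin >/dev/null 2>&1; then
    mysqladmin flush-logs || echo "flush-logs failed; run FLUSH LOGS from a mysql client" >&2
  else
    echo "mysqladmin not found: run FLUSH LOGS from a mysql client instead" >&2
  fi
}
```

Note that `mysqladmin flush-logs` flushes all server logs, not just the slow log, which is usually acceptable for this kind of one-off maintenance.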
The slow query threshold (long_query_time) is currently configured at 2 seconds.
Let's take a look at the SQL in the new slow log:
# InnoDB_trx_id: 1B5249A5
SET timestamp=1455695652
select * from tb_version_update where appkey='1400140930701' and media_channel_id='2014142002' and take_effect_date < '2016-02-17 15:54:12' and is_take_effect=1 and isshow=1
# User@Host: app_sdk_hero[app_sdk_hero] @ xxxx_128.100 [xxxx]
# Thread_id: 4537955 Schema: mobile_billing Last_errno: 0 Killed: 0
# Query_time: 0.000570 Lock_time: 0.000072 Rows_sent: 0 Rows_examined: 158 Rows_affected: 0 Rows_read: 0
# Bytes_sent: 1753 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# InnoDB_trx_id: 1B5249A6
SET timestamp=1455695652
select * from tb_version_update where appkey='1400140930701' and media_channel_id='2010321003' and take_effect_date < '2016-02-17 15:54:12' and is_take_effect=1 and isshow=1
As this log shows, the query actually executes very quickly and certainly does not meet the slow query threshold, but according to the execution plan it does not use an index.
The key point is that the table holds only about 150 rows, so there is really no need for an index, and a genuine performance problem is very unlikely.
This is where a parameter comes into play, one I learned about from a colleague: log_queries_not_using_indexes.
# mysqladmin var | grep index
| expand_fast_index_creation | OFF |
| fast_index_creation | ON |
| innodb_adaptive_hash_index | ON |
| innodb_adaptive_hash_index_partitions | 8 |
| log_queries_not_using_indexes | ON |
With this parameter ON, queries that do not use an index are written to the slow query log even when they run fast, so it needs to be turned off: set global log_queries_not_using_indexes = OFF;
After that, no more such alarms were recorded.
The default for this parameter is OFF, which is why this problem is not usually triggered.
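Since SET GLOBAL is lost on a server restart, the equivalent my.cnf fragment might look like the following (a sketch: the file path is an example, and the 2-second value comes from the threshold mentioned above):

```ini
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/lib/mysql/slow.log   # example path
long_query_time               = 2    # the 2-second threshold from above
log_queries_not_using_indexes = 0    # do not log fast, index-less queries
```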
The official explanation for this parameter is as follows:
--log-queries-not-using-indexes
Command-Line Format: --log-queries-not-using-indexes
System Variable: log_queries_not_using_indexes
Scope: Global
Dynamic: Yes
Type: boolean
Default: OFF
If you are using this option with the slow query log enabled, queries that are expected to retrieve all rows are logged. See Section 5.2.5, "The Slow Query Log". This option does not necessarily mean that no index is used. For example, a query that uses a full index scan uses an index but would be logged because the index would not limit the number of rows.
This concludes the article on how to deal with slow query alarms in MySQL. I hope the content above was helpful; if you found the article useful, please share it so more people can see it.