2025-01-16 Update From: SLTechnology News&Howtos
Shulou (Shulou.com), 05/31 report
In this issue, the editor brings you a case study of MySQL slow log optimization. The article walks through the analysis from a practitioner's point of view; I hope you get something out of it.
Recently, while analyzing a problem, we tried many approaches; it was a process of the picture gradually becoming clear.
First, the symptom: a large number of slow log alarms from an internal operations database environment involving two independent databases, which we will call devopsdb (the operations management database) and taskopsdb (the task management database) for short.
It is taskopsdb that is raising the alarms.
Before diving in, a word on the environment and architecture. The database version is MySQL 5.7.25, deployed with MySQL Group Replication (MGR) in multi-primary (dual-active) mode. From a business point of view, devopsdb and taskopsdb write to disjoint data sets, so direct data conflicts are essentially impossible.
devopsdb takes real-time writes from many kinds of business traffic, while taskopsdb writes far less. Intuitively, the write pressure between them is at least 9:1.
A slow log means there is something to optimize, but this slow log report left me puzzled: 94% of the total response time was spent processing COMMIT requests.
From the overall slow log statistics, the queries came from two clients.
Let's look at the COMMIT entries directly. Opening the slow log file, the output is basically the standard slow-log format. Since these are slow-log entries, you would expect the affected row counts to stand out, but here both Rows_examined and Rows_sent are 0, while the execution time of COMMIT is still very long.
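This pattern can be checked mechanically rather than by eye. Below is a minimal sketch that scans slow-log text for long COMMITs with zero rows examined and sent, the exact symptom described above. The log format assumed is the standard MySQL slow-log header; the sample entry and the 1-second threshold are hypothetical.

```python
import re

# Match the standard MySQL slow-log header fields we care about.
HEADER = re.compile(
    r"# Query_time: (?P<qt>[\d.]+).*?"
    r"Rows_sent: (?P<sent>\d+)\s+Rows_examined: (?P<exam>\d+)",
    re.S,
)

def suspicious_commits(log_text, min_query_time=1.0):
    """Return (query_time, statement) pairs for slow zero-row COMMITs."""
    results = []
    # Naive split on the Query_time header line; one chunk per entry.
    chunks = log_text.split("# Query_time:")
    for chunk in chunks[1:]:
        m = HEADER.match("# Query_time:" + chunk)
        if not m:
            continue
        qt = float(m.group("qt"))
        # The statement is the last non-empty line of the entry.
        stmt = chunk.strip().splitlines()[-1].strip()
        if (stmt.upper().startswith("COMMIT")
                and qt >= min_query_time
                and int(m.group("sent")) == 0
                and int(m.group("exam")) == 0):
            results.append((qt, stmt))
    return results

# Hypothetical slow-log entry matching the symptom in this case.
sample = """# Query_time: 2.31  Lock_time: 0.00 Rows_sent: 0  Rows_examined: 0
SET timestamp=1548654000;
COMMIT;
"""
print(suspicious_commits(sample))
```

In practice a tool such as pt-query-digest gives the same kind of breakdown, but a small script like this is handy when you only want to isolate one suspicious pattern.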
At this point the problem seemed stuck: I wanted to optimize but lacked direct, actionable information. After combing through the whole slow log, I turned my attention to the remaining 5% or so of entries and found that several tables had excessive scan costs, which was a concrete optimization point.
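Finding those high-scan-cost tables is a matter of aggregating Rows_examined by query shape. A minimal sketch, assuming a parser has already produced (statement, rows_examined) pairs; the table name task_log and the row counts are hypothetical:

```python
import re
from collections import defaultdict

def scan_cost_by_query(entries):
    """Aggregate total Rows_examined per normalized statement.

    Literals are replaced with '?' so queries differing only in
    constants are grouped under one fingerprint.
    """
    totals = defaultdict(int)
    for stmt, examined in entries:
        fingerprint = re.sub(r"'[^']*'", "?", stmt)        # string literals -> ?
        fingerprint = re.sub(r"\b\d+\b", "?", fingerprint)  # numbers -> ?
        totals[fingerprint] += examined
    # Highest scan cost first: these are the index candidates.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

entries = [
    ("SELECT * FROM task_log WHERE task_id = 101", 500000),
    ("SELECT * FROM task_log WHERE task_id = 202", 480000),
    ("COMMIT", 0),
]
print(scan_cost_by_query(entries)[0])
```

The statements at the top of this ranking are the ones worth running through EXPLAIN and covering with an index.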
After that optimization, alarms were indeed much less frequent, but the problem remained, and every such alarm message is unsettling to receive.
So I began to wonder if there were any other ideas and methods.
We started from the alarm itself. The alarm threshold fires when the slow-log count exceeds 300, so one explicit metric to work from is those 300 slow queries. Looking at the recent alarm history, the alarm times were fairly fixed, for example 22:00 in the evening or 9:00 in the morning, so the problem was occurring periodically.
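Periodicity like this is easy to confirm by bucketing slow-log timestamps by hour and comparing against the alarm threshold. A minimal sketch; the threshold of 300 comes from the alarm rule above, while the synthetic timestamps are hypothetical:

```python
from collections import Counter
from datetime import datetime

ALARM_THRESHOLD = 300  # slow-log count that triggers the alarm

def spike_windows(timestamps, threshold=ALARM_THRESHOLD):
    """Bucket slow-log timestamps by hour; return buckets over threshold."""
    buckets = Counter(ts.strftime("%Y-%m-%d %H:00") for ts in timestamps)
    return {hour: n for hour, n in buckets.items() if n > threshold}

# Synthetic data: a 22:00 spike of 350 entries, plus background noise.
ts = ([datetime(2019, 5, 30, 22, 5)] * 350
      + [datetime(2019, 5, 30, 15, 1)] * 10)
print(spike_windows(ts))
```

A recurring spike at the same clock hour points at scheduled work (batch jobs, backups, reports) rather than organic load.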
An MGR-based setup has its own characteristics, so let's set those aside for a moment and suppose we know nothing about MGR.
The scenario around data synchronization between the two nodes is basically this: a large number of slow-log entries are generated in taskopsdb within a short period, and they all show up as COMMIT. The essence is not that taskopsdb is issuing a flood of COMMIT operations, but that otherwise harmless COMMITs are piling up or being blocked for other reasons. The COMMIT statements are just the visible facade.
When data is synchronized between the two nodes, the DML and DDL on devopsdb directly affect the load on taskopsdb; in other words, slow operations on devopsdb can stall the related operations on taskopsdb.
Following this line of thinking, I made a bold attempt in the afternoon: I downgraded from MGR to asynchronous dual-master replication. The result was like the tide going out, leaving the real slow queries exposed on the surface.
This showed that the remaining slow logs were essentially the two kinds of taskopsdb queries identified above. After the missing indexes were added, the problem has not recurred, although the retrospective on this incident is still in progress.
The above is the analysis process for this MySQL slow log optimization case. If you happen to face similar doubts, the analysis above may be a useful reference.