2025-04-03 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report
This write-up has been dragging on for a long time; finally taking the time to catch up on it _(:з」∠)_
--- Text ---
Background: after a memory alarm was received, monitoring showed that memory usage on the MySQL server was growing continuously.
[Screenshot: monitoring graph of memory usage climbing steadily]
Although the server had already been restarted, memory usage kept growing afterwards, and memory was exhausted again in roughly half a month.
Scenario: self-built (self-hosted) environment, database version 5.7.12
Analysis:
PS: this dragged on for quite a while, so the screenshots are for analysis/illustration only; they are not necessarily the data from when the problem actually occurred.
Try direction 1:
The first thought was whether a buffer-related parameter was misconfigured; after all, a similar problem had caused a crash before (http://blog.itpub.net/29510932/viewspace-2123096/).
Result: no obvious problem with the parameter settings.
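The parameter check above can be done with a single statement. The variable list below is my own pick of the settings that most often explain memory growth when oversized, not an exhaustive one:

```sql
-- Buffer/cache settings that commonly explain memory growth when oversized.
-- Remember that per-session buffers are multiplied by max_connections.
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size', 'key_buffer_size', 'sort_buffer_size',
   'join_buffer_size', 'read_buffer_size', 'read_rnd_buffer_size',
   'tmp_table_size', 'max_heap_table_size', 'max_connections');
```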
Try Direction 2:
Since the settings look fine, let's look at the actual memory usage.
Use pmap -d to inspect the process's memory; a partial screenshot follows:
anon denotes memory actively requested by the process. On the problem machine, actively requested memory accounted for 95% of the process's memory (naturally, since the buffer pool lives here).
Considering that innodb_buffer_pool_size is only 50% of total memory, the extra "requested" memory is rather a lot. Is some thread allocating a large amount of memory and never releasing it?
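The anon total can also be computed without eyeballing pmap output. A minimal sketch (assumes Linux /proc; the function name is mine, not from the article):

```python
import os

def anon_kb(pid: int) -> int:
    """Sum the 'Anonymous:' fields in /proc/<pid>/smaps (values in kB).
    This approximates the 'anon' total that pmap -d reports: memory the
    process actively requested, as opposed to file-backed mappings."""
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Anonymous:"):
                total += int(line.split()[1])  # field is in kB
    return total

# Example: inspect the current process instead of mysqld
print(f"anon: {anon_kb(os.getpid())} kB")
```

To check mysqld, pass its pid (e.g. from `pidof mysqld`) and compare the result against innodb_buffer_pool_size.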
Try direction 2-1 — check per-thread memory usage:
MySQL 5.7 extends ps (performance_schema) to collect more data, including memory-related information.
Since this is off by default, temporarily enable the statistics for now.
```sql
UPDATE performance_schema.setup_instruments
SET enabled = 'YES'
WHERE name LIKE 'memory%';
```
After executing the statement above, the memory-related tables in ps start showing statistics, as below:
Among them, CURRENT_NUMBER_OF_BYTES_USED can be taken as an approximation of the total memory currently in use.
PS: since these statistics do not distinguish shared memory, an item's memory footprint can come out negative, and the sum of items can exceed the actual total memory footprint.
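A global overview query, sketched with the 5.7 performance_schema table and column names (must be run against a live server):

```sql
-- Top memory consumers by event, approximated by current bytes used
SELECT EVENT_NAME,
       CURRENT_NUMBER_OF_BYTES_USED
FROM performance_schema.memory_summary_global_by_event_name
ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
LIMIT 10;
```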
Since a thread was suspected, query the thread-related tables ordered by CURRENT_NUMBER_OF_BYTES_USED in descending order; the result looks like the figure below:
On the problem instances at the time, combining the query results with the ps.threads table showed that the memory footprints of thread/sql/slave_sql and thread/sql/one_connection (the monitor user) were very high.
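Sketched as a query, joining the per-thread memory summary with ps.threads so each thread can be attributed to a user (table/column names per the 5.7 performance_schema):

```sql
-- Per-thread memory, joined with threads to identify the owner
SELECT t.THREAD_ID,
       t.NAME,
       t.PROCESSLIST_USER,
       SUM(m.CURRENT_NUMBER_OF_BYTES_USED) AS bytes_used
FROM performance_schema.memory_summary_by_thread_by_event_name m
JOIN performance_schema.threads t USING (THREAD_ID)
GROUP BY t.THREAD_ID, t.NAME, t.PROCESSLIST_USER
ORDER BY bytes_used DESC
LIMIT 10;
```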
Try direction 2-2 — analyze the threads:
thread/sql/slave_sql is the replication SQL thread, responsible for replaying transactions from the master's binlog. This thread holds a lot of memory without releasing it; the first reaction is that this is not a bug of our own.
Searching the MySQL bug tracker turned up a similar bug submitted earlier (https://bugs.mysql.com/bug.php?id=71197, status closed).
The official workaround is to turn off parallel replication and store replication metadata in FILE instead of TABLE.
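As a my.cnf fragment, the workaround would look roughly like this (option names are the real 5.7 ones; treat it as a sketch, not a recommendation):

```ini
[mysqld]
# Disable parallel replication (the multi-threaded slave)
slave_parallel_workers = 0
# Keep replication metadata in files rather than system tables
master_info_repository = FILE
relay_log_info_repository = FILE
```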
PS: nice! So on 5.7, turning on parallel replication means shooting yourself in the foot _(:з」∠)_
thread/sql/one_connection (monitor user) is a user-created connection; it turned out to be a connection held by the monitoring user, mainly used by a self-maintained monitoring plug-in to collect information.
This one, at least, can be fixed on our side, so look at the details of the monitor thread:
The check showed that memory/sql/String::value accounted for most of the memory.
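The drill-down into a single thread can be sketched like this (the THREAD_ID value 12345 is a placeholder for the monitor thread's id from the previous query):

```sql
-- Which memory events dominate inside one suspect thread
SELECT EVENT_NAME,
       CURRENT_NUMBER_OF_BYTES_USED
FROM performance_schema.memory_summary_by_thread_by_event_name
WHERE THREAD_ID = 12345
ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
LIMIT 10;
```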
Taken literally, it looks like something is wrong with how the SQL is issued: a large number of results being held on to and never released?
I contacted the plug-in's author, got hold of its code, and after a careful read found that it never called close() on the cursor after executing SQL.
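The article does not show the plug-in's code; purely as an illustration, here is the pattern in Python DB-API style (sqlite3 is used only so the sketch is self-contained; a MySQL connector's cursor works the same way):

```python
import sqlite3

def fetch_status(conn) -> int:
    """Run a query and ALWAYS close the cursor afterwards.
    The plug-in's bug was skipping close(), so each polling round
    left result buffers alive (seen as memory/sql/String::value)."""
    cur = conn.cursor()
    try:
        cur.execute("SELECT 1")
        return cur.fetchone()[0]
    finally:
        cur.close()  # the fix: release the cursor's resources every time

conn = sqlite3.connect(":memory:")
value = fetch_status(conn)
conn.close()
```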
After the code was fixed and pushed, the growth rate of memory usage dropped sharply.
Processing result:
File this classic missing-close problem in the internal documentation as a cautionary example.
As for the SQL thread's failure to release memory, for various reasons it cannot be fixed at the moment; but it grows slowly and stays within an acceptable range, so we plan to do periodic maintenance (restarts) for the time being.
PS: so far, none of the affected instances has grown to a very high level; by eyeball estimate, a maintenance restart is needed roughly once every two months or more.