This article explains how to troubleshoot and resolve a production failure caused by a MySQL UPDATE statement. It walks through the symptoms, the business background, the root cause, the short-term mitigation, and the long-term fix.
Fault symptoms
On one hand, on the cluster management page of the Alibaba Cloud (Aliyun) console for the PolarDB database, the session management view in the one-click diagnosis module ("Diagnosis and Optimization") showed that a particular UPDATE statement was being executed very frequently and was taking a very long time to complete.
On the other hand, the business monitoring system kept raising alerts about business execution time, the volume of affected business data kept growing, and some operations were already impacting customers.
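For readers without access to the PolarDB console, here is a minimal sketch of how the same blocking picture can be confirmed from any SQL client, assuming MySQL 5.7 or later with the sys schema available; the queries are illustrative and not taken from the original incident:

-- Which statement is blocked by which (sys schema, MySQL 5.7+)
SELECT waiting_pid, waiting_query, blocking_pid, blocking_query, wait_age
FROM sys.innodb_lock_waits;

-- Transactions that have been open for a long time and the rows they lock
SELECT trx_id, trx_mysql_thread_id, trx_started, trx_rows_locked, trx_query
FROM information_schema.innodb_trx
ORDER BY trx_started;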
Business background
The business flows involved are fairly complex, and this article focuses on the technical side rather than the business details. To make the cause of the problem easier to understand, the business can be described by analogy with three tables: t_grandfather (the grandfather table), t_father (the father table), and t_grandson (the grandson table). The DDL is as follows:
CREATE TABLE `t_grandfather` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `count` int(11) NOT NULL DEFAULT 0 COMMENT 'number of descendants',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='grandfather table';

CREATE TABLE `t_father` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `grandfather_id` int(11) NOT NULL COMMENT 'grandfather table id',
  PRIMARY KEY (`id`),
  KEY `idx_grandfather_id` (`grandfather_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='father table';

CREATE TABLE `t_grandson` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `grandfather_id` int(11) NOT NULL COMMENT 'grandfather table id',
  PRIMARY KEY (`id`),
  KEY `idx_grandfather_id` (`grandfather_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='grandson table';
The relationship between the three tables is as follows. First a grandfather record is created (one row in t_grandfather). As the business runs, the grandfather keeps gaining children: every new child inserts a row into t_father and, at the same time, updates the grandfather's row with count = count + 1 to record that a new descendant has been added. As the business continues, each child in turn has children of its own, so the grandfather gains grandchildren (new rows in t_grandson), and each of those inserts also updates count = count + 1 on the same t_grandfather row. This goes on indefinitely as business data keeps being generated.
The logic of the legacy code is simple: whenever a row is added to t_father or t_grandson, update t_grandfather. The design itself is not unreasonable, but because the grandson table grows very large, it hides a serious performance problem. The following pseudo code is extracted from the business:
/** Handle the father business */
public void doFatherBusiness() {
    // ... father business logic omitted ...
    // insert into t_father
    if (fatherMapper.insert(father) > 0) {
        // update t_grandfather set count = count + 1 where id = #{grandfatherId}
        grandfatherMapper.updateCount(father.getGrandfatherId());
    }
}

/** Handle the grandson business */
public void doGrandsonBusiness() {
    // ... grandson business logic omitted ...
    // insert into t_grandson
    if (grandsonMapper.insert(grandson) > 0) {
        // update t_grandfather set count = count + 1 where id = #{grandfatherId}
        grandfatherMapper.updateCount(grandson.getGrandfatherId());
    }
}
When many business threads call these methods concurrently, the UPDATE on t_grandfather comes under heavy pressure. When they all update the same id, the contention for the row lock inside the MySQL server becomes fierce, and the result is exactly the behaviour described in the fault symptoms above.
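The hotspot is easy to reproduce by hand. Below is a hedged two-session illustration (the id and values are made up): the first session holds the row lock on the grandfather row until it commits, so the second session's identical update has to wait, and under enough concurrency these waits pile up into the long-running statements seen in the console.

-- session 1
BEGIN;
UPDATE t_grandfather SET count = count + 1 WHERE id = 1;
-- the row lock on id = 1 is held until COMMIT

-- session 2 (issued before session 1 commits)
UPDATE t_grandfather SET count = count + 1 WHERE id = 1;
-- blocks; fails with "Lock wait timeout exceeded" if session 1
-- does not commit within innodb_lock_wait_timeout seconds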
Solution
1. Short-term mitigation
On one hand, on the Aliyun console, the problematic SQL was throttled and the blocked sessions were force-killed so that the database threads stopped piling up and resources were released (a sketch of this follows below). On the other hand, the number of service nodes accepting requests was reduced in order to cut down the amount of business data coming in.
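A hedged sketch of what killing the blocked sessions can look like from a plain SQL client (the PolarDB console exposes the same action as a button); the 60-second threshold and the process id are placeholders, not values from the incident:

-- find long-running updates against the hot table
SELECT id, time, info
FROM information_schema.processlist
WHERE info LIKE 'update t_grandfather%' AND time > 60;

-- terminate one of them by its processlist id
KILL 12345;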
2. Long-term fix
On one hand, change the business logic: when inserting into t_father and t_grandson, stop updating the count field of t_grandfather. On the other hand, wherever the count statistic is needed, obtain it in some other way.
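A hedged sketch of one such "other way", assuming the indexes from the DDL above: compute the statistic on demand from the child tables instead of maintaining a hot counter row. If a materialized count is still required, it can be refreshed asynchronously in batches rather than inside every insert transaction.

-- descendant count computed on demand; both child tables are
-- indexed on grandfather_id, so these are narrow index range scans
SELECT
    (SELECT COUNT(*) FROM t_father   WHERE grandfather_id = 1) +
    (SELECT COUNT(*) FROM t_grandson WHERE grandfather_id = 1) AS descendant_count;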
This concludes the analysis of the production failure caused by the MySQL UPDATE statement: a hot counter row updated by every insert turned into a lock hotspot, and removing the synchronous counter update resolved it.