
Finding the real culprit behind MySQL disk consumption

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article tracks down the real culprit behind a MySQL instance's rapid disk consumption.

Part1: Preface

What would you do when faced with a single table holding one billion rows?

Part2: Background

To improve database resource utilization, we consolidate the smaller businesses under one large business line into a single instance, provided they do not interfere with one another and business efficiency is guaranteed. Our disk is 2 TB, and an SMS alarm fires when free disk space drops to 10%. After receiving such an alarm for one business's primary database, the author found that a few days earlier the data volume of one table had started growing very fast, whereas before that, free space had been declining only slowly. The table contains a large text column that is updated in bulk, which consumes disk rapidly.
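To locate which table is eating the space, a query against `information_schema` can rank tables by size without touching the tables themselves. This is a minimal sketch, not the author's exact diagnostic query:

```sql
-- List the ten largest tables in the instance, biggest first.
-- data_free shows space already allocated to the tablespace but unused.
SELECT table_schema,
       table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb,
       ROUND(data_free / 1024 / 1024 / 1024, 2)                    AS free_gb
FROM information_schema.tables
ORDER BY data_length + index_length DESC
LIMIT 10;
```

Note that these figures are estimates maintained by InnoDB, not exact byte counts, but they are more than accurate enough to spot a runaway table.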

Let's first take a look at the table's structure:

mysql> CREATE TABLE `tablename_v2` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `No` varchar(64) NOT NULL DEFAULT '',
  `Code` varchar(64) NOT NULL DEFAULT '',
  `log` varchar(64) DEFAULT '',
  `log1` varchar(64) DEFAULT '',
  ...
  `Phone` varchar(20) DEFAULT '',
  `createTime` bigint(20) unsigned NOT NULL DEFAULT '0',
  `updateTime` bigint(20) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_mailNo` (`No`, `Code`),
  KEY `Phone` (`Phone`)
) ENGINE=InnoDB AUTO_INCREMENT=9794134664 DEFAULT CHARSET=utf8;

We confirmed with the business that the table sees almost no delete operations. Because the data volume is so large, we use the AUTO_INCREMENT value as a rough row-count estimate, to avoid the online impact of a count(*) scan.
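That estimate can be read straight from the data dictionary. A sketch, assuming the database name `helei` from the pt-archiver DSN later in this article:

```sql
-- AUTO_INCREMENT is a cheap upper bound on the row count when there are
-- almost no deletes; a count(*) on a billion-row InnoDB table would have
-- to scan an entire index end to end.
SELECT AUTO_INCREMENT
FROM information_schema.tables
WHERE table_schema = 'helei'
  AND table_name   = 'tablename_v2';
```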

Part3: case study

After communicating with the business, we learned that data older than 4 months can be cleaned up, so the old rows can be purged with delete. However, the table structure shows that no index was created on the updateTime column when the table was designed, so deleting by a time range would clearly be unreasonable: every batch would have to scan the table.

After consultation with the business, we determined that id can be used as the delete condition instead, deleting in batches:

delete from tablename_v2 where id > xxx and id < xxx limit 10000;

If you have followed the author's earlier articles, you should know that a `delete from xxx where xxx limit` statement is non-deterministic (the rows matched by LIMIT have no guaranteed order), so with binlog_format set to mixed, MySQL falls back to row-based logging for that statement. The binlog then balloons with one row event per deleted row, which is not in line with our expectations.
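The behavior can be sketched as follows; the id range and batch size are illustrative, not the values used in the incident:

```sql
-- Even with binlog_format = MIXED, a DELETE ... LIMIT has no deterministic
-- row order, so MySQL logs this statement row by row. A billion-row cleanup
-- done this way writes one binlog event per deleted row.
SHOW VARIABLES LIKE 'binlog_format';
DELETE FROM tablename_v2 WHERE id > 1000 AND id < 2000 LIMIT 100;
```

Adding an `ORDER BY id` to the LIMIT delete makes it deterministic again, but pt-archiver sidesteps the problem more cleanly, as shown next.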

So the final command is:

pt-archiver --source h=c3-helei-db01.bj,D=helei,t=tablename_v2,u=sys_admin,p=MANAGER --where 'id
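The original command is truncated after the `--where` clause. For reference, a typical purge-only invocation looks like the sketch below; the `--limit`, `--txn-size`, and `--sleep` values are illustrative assumptions, not the author's settings, and the WHERE condition is a placeholder:

```
pt-archiver \
  --source h=c3-helei-db01.bj,D=helei,t=tablename_v2,u=sys_admin,p=MANAGER \
  --where 'id < <boundary_id>' \
  --purge \          # delete rows instead of archiving them elsewhere
  --limit 1000 \     # rows fetched per SELECT
  --txn-size 1000 \  # rows deleted per transaction
  --sleep 1          # pause between batches to throttle load
```

pt-archiver deletes by primary key in small transactions, which keeps replication lag and binlog growth under control compared with a single large DELETE.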
