2025-01-20 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report--
Today I would like to talk about why it is not recommended to use delete to remove data in MySQL. Many people may not know much about this, so I have summarized the following; I hope you get something out of this article.
InnoDB storage architecture
As the diagram shows, the InnoDB storage structure consists of two parts: the logical storage structure and the physical storage structure.
Logically, it is organized as tablespace -> segment (inode) -> extent -> data page. InnoDB's logical management unit is the segment, and the smallest unit of space allocation is the extent. Each segment first allocates 32 pages from the tablespace's FREE_PAGE list. When those 32 pages are not enough, the segment grows according to the following rules: if the current size is less than one extent, grow to one extent; while the tablespace is smaller than 32MB, extend by one extent at a time; once the tablespace exceeds 32MB, extend by four extents at a time.
Physically, it consists mainly of system and user data files and log files. The data files store the MySQL dictionary data and user data; the log files record changes to data pages and are used for MySQL crash recovery.
InnoDB tablespaces
InnoDB includes three types of tablespaces: the system tablespace, user tablespaces, and undo tablespaces.
System tablespace: mainly stores MySQL's internal data dictionary data, such as the data under information_schema.
User tablespace: when innodb_file_per_table=1 is enabled, each table is stored outside the system tablespace in its own table_name.ibd data file, with the structure information stored in the table_name.frm file.
Undo tablespace: stores undo information, which is used, for example, by consistent snapshot reads and flashback.
Starting with MySQL 8.0, users are allowed to create their own tablespaces, with the following syntax:
CREATE TABLESPACE tablespace_name
    ADD DATAFILE 'file_name'               # data file name
    USE LOGFILE GROUP logfile_group        # custom log file group, usually 2 log files per group
    [EXTENT_SIZE [=] extent_size]          # extent size
    [INITIAL_SIZE [=] initial_size]        # initial size
    [AUTOEXTEND_SIZE [=] autoextend_size]  # auto-extend increment
    [MAX_SIZE [=] max_size]                # maximum size of a single file, up to 32G
    [NODEGROUP [=] nodegroup_id]           # node group
    [WAIT]
    [COMMENT [=] comment_text]
    ENGINE [=] engine_name
The advantage is that hot and cold data can be separated and stored on SSD and HDD respectively, which achieves efficient data access while saving costs. For example, you can add two 500G disks, create a volume group vg, carve out a logical volume lv for each, create the data directories and mount the corresponding lv on each; assume the two directories are /hot_data and /cold_data.
In this way, core business tables such as the user and order tables can be stored on high-performance SSDs, while log and journal tables go on ordinary HDDs. The main steps are as follows:
# create the hot data tablespace
create tablespace tbs_data_hot add datafile '/hot_data/tbs_data_hot01.dbf' max_size 20G;
# create core business tables in the hot data tablespace
create table booking (id bigint not null primary key auto_increment, ...) tablespace tbs_data_hot;
# create the cold data tablespace
create tablespace tbs_data_cold add datafile '/cold_data/tbs_data_cold01.dbf' max_size 20G;
# create log, journal, and backup tables in the cold data tablespace
create table payment_log (id bigint not null primary key auto_increment, ...) tablespace tbs_data_cold;
# a table can be moved to another tablespace
alter table payment_log tablespace tbs_data_hot;
InnoDB storage distribution
Create an empty table and observe the space changes
mysql> create table user (
    -> id bigint not null primary key auto_increment,
    -> name varchar(20) not null default '' comment 'name',
    -> age tinyint not null default 0 comment 'age',
    -> gender char(1) not null default 'M' comment 'gender',
    -> phone varchar(16) not null default '' comment 'mobile number',
    -> create_time datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
    -> update_time datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modification time'
    -> ) engine = InnoDB DEFAULT CHARSET=utf8mb4 COMMENT 'user information table';
Query OK, 0 rows affected (0.26 sec)

# ls -lh user.ibd
-rw-r----- 1 mysql mysql 96K Nov 6 12:48 user.ibd
With innodb_file_per_table=1 set, a segment is created automatically when the table is created, and a fragment extent of 32 data pages is allocated to store data; a newly created empty table is 96KB by default. After the 32 pages are used up, 64 consecutive pages are requested at a time. This way small tables, or undo segments, can start with a small allocation and save disk space.
# python2 py_innodb_page_info.py -v /data2/mysql/test/user.ibd
page offset 00000000, page type <File Space Header>
page offset 00000001, page type <Insert Buffer Bitmap>
page offset 00000002, page type <File Segment inode>
page offset 00000003, page type <B-tree Node>, page level <0000>
page offset 00000000, page type <Freshly Allocated Page>
page offset 00000000, page type <Freshly Allocated Page>
Total number of page: 6:     # total pages allocated
Freshly Allocated Page: 2    # available data pages
Insert Buffer Bitmap: 1      # insert buffer bitmap page
File Space Header: 1         # file space header
B-tree Node: 1               # data page
File Segment inode: 1        # file segment inode; on ibdata1 there would be multiple inode pages
Space changes after inserting data
mysql> DELIMITER $$
mysql> CREATE PROCEDURE insert_user_data(num INTEGER)
    -> BEGIN
    -> DECLARE v_i int unsigned DEFAULT 0;
    -> SET autocommit = 0;
    -> WHILE v_i < num DO
    ->   insert into user (`name`, age, gender, phone)
    ->     values (CONCAT('lyn', v_i), mod(v_i, 120), 'M', CONCAT('152', CEIL(RAND(1) * 100000000)));
    ->   SET v_i = v_i + 1;
    -> END WHILE;
    -> commit;
    -> END $$
Query OK, 0 rows affected
mysql> DELIMITER ;

# insert 100,000 rows
mysql> call insert_user_data(100000);
Query OK, 0 rows affected

# ls -lh user.ibd
-rw-r----- 1 mysql mysql 14M Nov 6 10:58 /data2/mysql/test/user.ibd

# python2 py_innodb_page_info.py -v /data2/mysql/test/user.ibd
page offset 00000000, page type <File Space Header>
page offset 00000001, page type <Insert Buffer Bitmap>
page offset 00000002, page type <File Segment inode>
page offset 00000003, page type <B-tree Node>, page level <0001>   # a non-leaf node was added; the tree height changed from 1 to 2
...
Total number of page: 896:
Freshly Allocated Page: 493
Insert Buffer Bitmap: 1
File Space Header: 1
B-tree Node: 400
File Segment inode: 1
Space changes after deleting data
mysql> select min(id), max(id), count(*) from user;
+---------+---------+----------+
| min(id) | max(id) | count(*) |
+---------+---------+----------+
|       1 |  100000 |   100000 |
+---------+---------+----------+
1 row in set (0.05 sec)

# delete 50,000 rows; in theory the file should shrink from 14MB to about 7MB
mysql> delete from user limit 50000;
Query OK, 50000 rows affected (0.25 sec)

# the data file is still 14MB; it did not shrink
# ls -lh /data2/mysql/test/user.ibd
-rw-r----- 1 mysql mysql 14M Nov 6 13:22 /data2/mysql/test/user.ibd

# the data pages were not reclaimed
# python2 py_innodb_page_info.py -v /data2/mysql/test/user.ibd
page offset 00000000, page type <File Space Header>
page offset 00000001, page type <Insert Buffer Bitmap>
page offset 00000002, page type <File Segment inode>
page offset 00000003, page type <B-tree Node>, page level <0001>
...
Total number of page: 896:
Freshly Allocated Page: 493
Insert Buffer Bitmap: 1
File Space Header: 1
B-tree Node: 400
File Segment inode: 1

# inside MySQL the delete is only a mark deletion
mysql> use information_schema;
Database changed
mysql> SELECT A.SPACE AS TBL_SPACEID, A.TABLE_ID, A.NAME AS TABLE_NAME, FILE_FORMAT, ROW_FORMAT,
    ->        SPACE_TYPE, B.INDEX_ID, B.NAME AS INDEX_NAME, PAGE_NO, B.TYPE AS INDEX_TYPE
    -> FROM INNODB_SYS_TABLES A
    -> LEFT JOIN INNODB_SYS_INDEXES B ON A.TABLE_ID = B.TABLE_ID
    -> WHERE A.NAME = 'test/user';
+-------------+----------+------------+-------------+------------+------------+----------+------------+---------+------------+
| TBL_SPACEID | TABLE_ID | TABLE_NAME | FILE_FORMAT | ROW_FORMAT | SPACE_TYPE | INDEX_ID | INDEX_NAME | PAGE_NO | INDEX_TYPE |
+-------------+----------+------------+-------------+------------+------------+----------+------------+---------+------------+
|        1283 |     1207 | test/user  | Barracuda   | Dynamic    | Single     |     2236 | PRIMARY    |       3 |          3 |
+-------------+----------+------------+-------------+------------+------------+----------+------------+---------+------------+
1 row in set (0.01 sec)

PAGE_NO = 3 identifies page 3 as the root page of the B-tree; INDEX_TYPE = 3 means a clustered index. The INDEX_TYPE values are:
0 = nonunique secondary index; 1 = automatically generated clustered index (GEN_CLUST_INDEX); 2 = unique nonclustered index; 3 = clustered index; 32 = full-text index.

# shrink the space and observe again
MySQL does not really free the space internally; it performs a mark deletion, changing delflag:N to delflag:Y. After the transaction commits, the records are purged onto the delete linked list. If a later insert writes a record larger than the deleted one, the freed space is not reused; if the inserted record is smaller than or equal to the deleted one, the space can be reused. This behavior can be analyzed with the innblock tool.
Fragments in Innodb
The generation of fragments
We know that when data is stored on a file system, the physical space allocated to it can never be 100% used. Deleting data leaves "holes" in pages, and random writes (inserts that do not increase monotonically in clustered-index order) cause page splits; a split page may be less than 50% full. In addition, inserts, deletes, and updates on the table cause random changes to the corresponding secondary index values, likewise leaving holes in the index pages. These holes may be reused, but some physical space inevitably ends up unused: that is fragmentation.
At the same time, even with a fill factor of 100% (innodb_fill_factor=100), InnoDB reserves 1/16 of each page as headroom to prevent row overflow caused by updates.
mysql> select table_schema, table_name, ENGINE,
    ->        round(DATA_LENGTH/1024/1024 + INDEX_LENGTH/1024/1024) total_mb, TABLE_ROWS,
    ->        round(DATA_LENGTH/1024/1024) data_mb, round(INDEX_LENGTH/1024/1024) index_mb,
    ->        round(DATA_FREE/1024/1024) free_mb, round(DATA_FREE/DATA_LENGTH*100,2) free_ratio
    -> from information_schema.TABLES
    -> where TABLE_SCHEMA = 'test' and TABLE_NAME = 'user';
+--------------+------------+--------+----------+------------+---------+----------+---------+------------+
| table_schema | table_name | ENGINE | total_mb | TABLE_ROWS | data_mb | index_mb | free_mb | free_ratio |
+--------------+------------+--------+----------+------------+---------+----------+---------+------------+
| test         | user       | InnoDB |        4 |      50000 |       4 |        0 |       6 |     149.42 |
+--------------+------------+--------+----------+------------+---------+----------+---------+------------+
1 row in set (0.00 sec)
Here DATA_FREE is the number of allocated but unused bytes; it does not mean that all of it is fragmentation.
Recovery of fragments
For InnoDB tables, fragments can be reclaimed and space released with the following command. This is a random-read I/O operation: it is time-consuming, blocks normal DML on the table, and temporarily requires extra disk space. On RDS it can make disk usage spike instantly, lock up the instance, and leave the application unable to run DML, so it is prohibited in online environments.
# defragment and reclaim space
mysql> alter table user engine=InnoDB;
Query OK, 0 rows affected (9.00 sec)
Records: 0 Duplicates: 0 Warnings: 0

# after execution the data file shrank from 14MB to 10MB
# ls -lh /data2/mysql/test/user.ibd
-rw-r----- 1 mysql mysql 10M Nov 6 16:18 /data2/mysql/test/user.ibd

mysql> select table_schema, table_name, ENGINE,
    ->        round(DATA_LENGTH/1024/1024 + INDEX_LENGTH/1024/1024) total_mb, TABLE_ROWS,
    ->        round(DATA_LENGTH/1024/1024) data_mb, round(INDEX_LENGTH/1024/1024) index_mb,
    ->        round(DATA_FREE/1024/1024) free_mb, round(DATA_FREE/DATA_LENGTH*100,2) free_ratio
    -> from information_schema.TABLES
    -> where TABLE_SCHEMA = 'test' and TABLE_NAME = 'user';
+--------------+------------+--------+----------+------------+---------+----------+---------+------------+
| table_schema | table_name | ENGINE | total_mb | TABLE_ROWS | data_mb | index_mb | free_mb | free_ratio |
+--------------+------------+--------+----------+------------+---------+----------+---------+------------+
| test         | user       | InnoDB |        5 |      50000 |       5 |        0 |       2 |      44.29 |
+--------------+------------+--------+----------+------------+---------+----------+---------+------------+
1 row in set (0.00 sec)
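An equivalent way to trigger the same rebuild, not shown above, is OPTIMIZE TABLE; for InnoDB it is implemented as a table rebuild plus a statistics refresh, so it carries the same caveats (extra disk space, long runtime, blocking risk on large online tables):

```sql
-- For InnoDB, OPTIMIZE TABLE maps to a table rebuild
-- (equivalent to ALTER TABLE ... FORCE) followed by ANALYZE TABLE,
-- so it reclaims fragmented space with the same cost profile.
OPTIMIZE TABLE user;
```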
The impact of delete on SQL execution
SQL execution before deletion
# insert 1,000,000 rows
mysql> call insert_user_data(1000000);
Query OK, 0 rows affected (35.99 sec)

# add the related indexes
mysql> alter table user add index idx_name (name), add index idx_phone (phone);
Query OK, 0 rows affected (6.00 sec)
Records: 0 Duplicates: 0 Warnings: 0

# index statistics
mysql> show index from user;
+-------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
| Table | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type |
+-------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
| user  |          0 | PRIMARY   |            1 | id          | A         |      996757 |     NULL | NULL   |      | BTREE      |
| user  |          1 | idx_name  |            1 | name        | A         |      996757 |     NULL | NULL   |      | BTREE      |
| user  |          1 | idx_phone |            1 | phone       | A         |           2 |     NULL | NULL   |      | BTREE      |
+-------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+
3 rows in set (0.00 sec)

# reset the status counters
mysql> flush status;
Query OK, 0 rows affected (0.00 sec)

# execute the SQL statement
mysql> select id, age, phone from user where name like 'lyn12%';
+--------+-----+-------------+
| id     | age | phone       |
+--------+-----+-------------+
|    124 |   3 | 15240540354 |
|   1231 |  30 | 15240540354 |
|  12301 |  60 | 15240540354 |
...
| 129998 |  37 | 15240540354 |
| 129999 |  38 | 15240540354 |
| 130000 |  39 | 15240540354 |
+--------+-----+-------------+
11111 rows in set (0.03 sec)

mysql> explain select id, age, phone from user where name like 'lyn12%';
+----+-------------+-------+-------+---------------+----------+---------+------+-------+-----------------------+
| id | select_type | table | type  | possible_keys | key      | key_len | ref  | rows  | Extra                 |
+----+-------------+-------+-------+---------------+----------+---------+------+-------+-----------------------+
|  1 | SIMPLE      | user  | range | idx_name      | idx_name | 82      | NULL | 22226 | Using index condition |
+----+-------------+-------+-------+---------------+----------+---------+------+-------+-----------------------+
1 row in set (0.00 sec)

# check the related status variables
mysql> select * from information_schema.session_status
    -> where variable_name in ('Last_query_cost','Handler_read_next','Innodb_pages_read','Innodb_data_reads');
+-------------------+----------------+
| VARIABLE_NAME     | VARIABLE_VALUE |
+-------------------+----------------+
| HANDLER_READ_NEXT | 11111          |   # rows requested to be read
| INNODB_DATA_READS | 7868409        |   # total physical reads
| INNODB_PAGES_READ | 7855239        |   # total logical reads
| LAST_QUERY_COST   | 10.499000      |   # statement cost, mainly IO_COST plus CPU_COST
+-------------------+----------------+
4 rows in set (0.00 sec)
SQL execution after deletion
# delete 500,000 rows
mysql> delete from user limit 500000;
Query OK, 500000 rows affected (3.70 sec)

# re-collect the table statistics
mysql> analyze table user;
+-----------+---------+----------+----------+
| Table     | Op      | Msg_type | Msg_text |
+-----------+---------+----------+----------+
| test.user | analyze | status   | OK       |
+-----------+---------+----------+----------+
1 row in set (0.01 sec)

# reset the status counters
mysql> flush status;
Query OK, 0 rows affected (0.01 sec)

mysql> select id, age, phone from user where name like 'lyn12%';
Empty set (0.05 sec)

mysql> explain select id, age, phone from user where name like 'lyn12%';
+----+-------------+-------+-------+---------------+----------+---------+------+-------+-----------------------+
| id | select_type | table | type  | possible_keys | key      | key_len | ref  | rows  | Extra                 |
+----+-------------+-------+-------+---------------+----------+---------+------+-------+-----------------------+
|  1 | SIMPLE      | user  | range | idx_name      | idx_name | 82      | NULL | 22226 | Using index condition |
+----+-------------+-------+-------+---------------+----------+---------+------+-------+-----------------------+
1 row in set (0.00 sec)

mysql> select * from information_schema.session_status
    -> where variable_name in ('Last_query_cost','Handler_read_next','Innodb_pages_read','Innodb_data_reads');
+-------------------+----------------+
| VARIABLE_NAME     | VARIABLE_VALUE |
+-------------------+----------------+
| HANDLER_READ_NEXT | 0              |
| INNODB_DATA_READS | 7868409        |
| INNODB_PAGES_READ | 7855239        |
| LAST_QUERY_COST   | 10.499000      |
+-------------------+----------------+
4 rows in set (0.00 sec)
Statistical analysis of results
Operation                        COST       Physical reads  Logical reads  Scanned rows  Returned rows  Execution time
initial load of 100W rows        10.499000  7868409         7855239        22226         11111          30ms
randomly delete 50W of 100W      10.499000  7868409         7855239        22226         0              50ms
This also shows that for an ordinary large table, shrinking the table with delete is unrealistic. Never rely on delete to remove data; use a graceful mark deletion instead.
Delete optimization recommendations
Control the permissions of business accounts
A large system should be split into subsystems according to business characteristics, each of which can be treated as a service. Meituan's APP, for example, has many services: the core ones include the user service user-service, the search service search-service, the commodity service product-service, the location service location-service, and the price service price-service. Each service corresponds to its own database, with a separate account created for that database that is granted only DML privileges, without delete, and cross-database access is prohibited.
# create the user database and grant privileges (account name and password below are placeholders)
create database mt_user charset utf8mb4;
grant USAGE, SELECT, INSERT, UPDATE ON mt_user.* to 'user_service'@'%' identified by 'your_password';
flush privileges;
Replace delete with mark deletion
The MySQL database modeling convention includes four common fields that essentially every table should have, and creating an index on the create_time column brings two benefits:
Some query scenarios default to a time window, such as the last 7 days or the last month; they filter on create_time, and scanning by index is faster.
Some core business tables are extracted into the data warehouse on a T+1 basis, for example extracting the previous day's data at 00:30 every night; these extractions also filter on create_time.
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key id',
`is_deleted` tinyint(4) NOT NULL DEFAULT 0 COMMENT 'logical delete flag: 0 = not deleted, 1 = deleted',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'modification time'

# with the delete flag, the delete operation in the business interface becomes an update
update user set is_deleted = 1 where user_id = 1213;

# queries filter on is_deleted
select id, age, phone from user where is_deleted = 0 and name like 'lyn12%';
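As a minimal illustration of the first benefit above (the index name and the exact query are assumptions, not from the original):

```sql
-- hypothetical index on create_time, plus a last-7-days query that can use it
alter table user add index idx_create_time (create_time);

select id, age, phone
from user
where is_deleted = 0
  and create_time >= now() - interval 7 day;
```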
Data archiving mode
General data archiving method
# 1. Create the archive table, usually the original table name suffixed with _bak.
CREATE TABLE `ota_order_bak` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `order_id` varchar(255) DEFAULT NULL COMMENT 'order id',
  `ota_id` varchar(255) DEFAULT NULL COMMENT 'ota',
  `check_in_date` varchar(255) DEFAULT NULL COMMENT 'check-in date',
  `check_out_date` varchar(255) DEFAULT NULL COMMENT 'departure date',
  `hotel_id` varchar(255) DEFAULT NULL COMMENT 'hotel ID',
  `guest_name` varchar(255) DEFAULT NULL COMMENT 'customer name',
  `purcharse_time` datetime DEFAULT NULL COMMENT 'time of purchase',
  `create_time` datetime DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  `create_user` varchar(255) DEFAULT NULL,
  `update_user` varchar(255) DEFAULT NULL,
  `status` int(4) DEFAULT '1' COMMENT 'status: 1 normal, 0 deleted',
  `hotel_name` varchar(255) DEFAULT NULL,
  `price` decimal(10,0) DEFAULT NULL,
  `remark` longtext,
  PRIMARY KEY (`id`),
  KEY `IDX_order_id` (`order_id`) USING BTREE,
  KEY `hotel_name` (`hotel_name`) USING BTREE,
  KEY `ota_id` (`ota_id`) USING BTREE,
  KEY `IDX_purcharse_time` (`purcharse_time`) USING BTREE,
  KEY `IDX_create_time` (`create_time`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8
PARTITION BY RANGE (to_days(create_time)) (
  PARTITION p201808 VALUES LESS THAN (to_days('2018-09-01')),
  PARTITION p201809 VALUES LESS THAN (to_days('2018-10-01')),
  PARTITION p201810 VALUES LESS THAN (to_days('2018-11-01')),
  PARTITION p201811 VALUES LESS THAN (to_days('2018-12-01')),
  PARTITION p201812 VALUES LESS THAN (to_days('2019-01-01')),
  PARTITION p201901 VALUES LESS THAN (to_days('2019-02-01')),
  PARTITION p201902 VALUES LESS THAN (to_days('2019-03-01')),
  PARTITION p201903 VALUES LESS THAN (to_days('2019-04-01')),
  PARTITION p201904 VALUES LESS THAN (to_days('2019-05-01')),
  PARTITION p201905 VALUES LESS THAN (to_days('2019-06-01')),
  PARTITION p201906 VALUES LESS THAN (to_days('2019-07-01')),
  PARTITION p201907 VALUES LESS THAN (to_days('2019-08-01')),
  PARTITION p201908 VALUES LESS THAN (to_days('2019-09-01')),
  PARTITION p201909 VALUES LESS THAN (to_days('2019-10-01')),
  PARTITION p201910 VALUES LESS THAN (to_days('2019-11-01')),
  PARTITION p201911 VALUES LESS THAN (to_days('2019-12-01')),
  PARTITION p201912 VALUES LESS THAN (to_days('2020-01-01'))
);
# 2. Pull the stale data out of the original table (confirm the retention window with the developers first).
create table tbl_p201808 as
  select * from ota_order
  where create_time between '2018-08-01 00:00:00' and '2018-08-31 23:59:59';
# 3. Exchange the staging table with the archive table's partition.
alter table ota_order_bak exchange partition p201808 with table tbl_p201808;
# 4. Delete the archived data from the original table, in batches.
delete from ota_order
  where create_time between '2018-08-01 00:00:00' and '2018-08-31 23:59:59'
  limit 3000;
Optimized archiving method
# 1. Create an intermediate table with the same structure.
CREATE TABLE `ota_order_2020` (...) ENGINE=InnoDB DEFAULT CHARSET=utf8
PARTITION BY RANGE (to_days(create_time)) (
  PARTITION p201808 VALUES LESS THAN (to_days('2018-09-01')),
  PARTITION p201809 VALUES LESS THAN (to_days('2018-10-01')),
  PARTITION p201810 VALUES LESS THAN (to_days('2018-11-01')),
  PARTITION p201811 VALUES LESS THAN (to_days('2018-12-01')),
  PARTITION p201812 VALUES LESS THAN (to_days('2019-01-01')),
  PARTITION p201901 VALUES LESS THAN (to_days('2019-02-01')),
  PARTITION p201902 VALUES LESS THAN (to_days('2019-03-01')),
  PARTITION p201903 VALUES LESS THAN (to_days('2019-04-01')),
  PARTITION p201904 VALUES LESS THAN (to_days('2019-05-01')),
  PARTITION p201905 VALUES LESS THAN (to_days('2019-06-01')),
  PARTITION p201906 VALUES LESS THAN (to_days('2019-07-01')),
  PARTITION p201907 VALUES LESS THAN (to_days('2019-08-01')),
  PARTITION p201908 VALUES LESS THAN (to_days('2019-09-01')),
  PARTITION p201909 VALUES LESS THAN (to_days('2019-10-01')),
  PARTITION p201910 VALUES LESS THAN (to_days('2019-11-01')),
  PARTITION p201911 VALUES LESS THAN (to_days('2019-12-01')),
  PARTITION p201912 VALUES LESS THAN (to_days('2020-01-01'))
);
# 2. Insert the data to keep into the intermediate table. If it is around 1M rows, insert it directly
#    during off-peak hours; for larger volumes, use dataX to control the batch size and frequency
#    (the author wrapped dataX in Go to generate the json job files and set a custom batch size).
insert into ota_order_2020
  select * from ota_order
  where create_time between '2020-08-01 00:00:00' and '2020-08-31 23:59:59';
# 3. Rename the tables.
alter table ota_order rename to ota_order_bak;
alter table ota_order_2020 rename to ota_order;
# 4. Insert the differential data written during the switch.
insert into ota_order
  select * from ota_order_bak a
  where not exists (select 1 from ota_order b where a.id = b.id);
# 5. Convert ota_order_bak into a partitioned table.
#    If the table is large, do not alter it directly; create a partitioned table first and import it with dataX.
# 6. Subsequent archiving:
# create a plain intermediate table (the exchange target must not be partitioned)
create table ota_order_mid like ota_order;
alter table ota_order_mid remove partitioning;
# exchange the original table's stale partition into the plain table
alter table ota_order exchange partition p201808 with table ota_order_mid;
# exchange the plain table's data into the corresponding partition of the archive table
alter table ota_order_bak exchange partition p201808 with table ota_order_mid;
In this way, both the live table and the archive table are partitioned by month. Afterwards you only need to create one plain intermediate table and perform two partition exchanges during off-peak hours: the stale data is removed and its space reclaimed, no fragmentation is produced, and the table's indexes and SQL execution plans are unaffected.
Summary
From InnoDB's storage space layout and the impact of delete on performance, we can see that physical deletes not only fail to release disk space but also produce large amounts of fragmentation, cause frequent index page splits, and destabilize SQL execution plans.
At the same time, reclaiming the fragments consumes a lot of CPU and disk space and blocks normal DML on the table.
At the business code level, use logical mark deletion instead of physical deletion. For data archiving requirements, use MySQL's partition table features: partition operations are DDL and produce no fragmentation.
Another, better solution is ClickHouse: tables whose data has a life cycle can be stored in ClickHouse, using its TTL feature to clean up expired data automatically.
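A minimal sketch of that ClickHouse approach, assuming a hypothetical log table and a 90-day retention policy (the table name, columns, and interval are illustrative):

```sql
-- MergeTree table whose rows expire automatically 90 days after create_time;
-- ClickHouse removes expired rows in background merges, no delete statements needed
CREATE TABLE payment_log_ch
(
    id          UInt64,
    order_id    String,
    create_time DateTime
)
ENGINE = MergeTree
ORDER BY (create_time, id)
TTL create_time + INTERVAL 90 DAY;
```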
After reading the above, do you understand better why delete is not recommended for removing data in MySQL? If you want to learn more, please follow the industry information channel. Thank you for your support.
© 2024 shulou.com SLNews company. All rights reserved.