What is pt-online-schema-change? This article walks through that question in detail, in the hope of helping readers who want to understand the tool find a simple, workable answer.
I. A brief analysis of the principle of pt-online-schema-change
First, note that the pt-online-schema-change tool does not make a schema change completely lock-free. Broadly speaking, the locks involved are InnoDB row locks and MDL (metadata) locks. What the tool does is turn a DDL operation that would otherwise use the COPY algorithm into a stream of DML operations; in other words, it relies on InnoDB row locks instead of holding an MDL lock for the whole operation. MySQL's native COPY-algorithm DDL copies the entire table under the protection of an MDL SNW lock, and no DML is allowed while the copy runs, which is why a COPY-algorithm DDL normally blocks other threads. (Which DDL statements can run online is documented in the online DDL section of the official manual.) Throughout a pt-online-schema-change run, an MDL X lock is only taken during the rename phase, and the rename is normally very fast.
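For reference, the cut-over that the rename phase above performs is a single atomic RENAME. Below is a sketch of what it looks like for the table used in this article; the exact statements, including the final drop of the old table and of the triggers, can be read from the general log:

-- atomic cut-over: swap the copy in, move the original aside
RENAME TABLE test.testpt_osc      TO test._testpt_osc_old,
             test._testpt_osc_new TO test.testpt_osc;

-- afterwards, by default, pt-osc drops the old table and the triggers it created
DROP TABLE IF EXISTS test._testpt_osc_old;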
Let's take a look at how pt-online-schema-change actually works. If you turn on the general log, you can see that the following are the key steps (my table name is testpt_osc):
First, define a new table:

CREATE TABLE test._testpt_osc_new (
  id int(11) NOT NULL,
  name varchar(20) DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

ALTER TABLE test._testpt_osc_new ADD INDEX name_index (name);
Next, define three triggers.

Delete trigger:

CREATE TRIGGER pt_osc_test_testpt_osc_del
AFTER DELETE ON test.testpt_osc
FOR EACH ROW
DELETE IGNORE FROM test._testpt_osc_new
WHERE test._testpt_osc_new.id <=> OLD.id;

Update trigger:

CREATE TRIGGER pt_osc_test_testpt_osc_upd
AFTER UPDATE ON test.testpt_osc
FOR EACH ROW
BEGIN
  DELETE IGNORE FROM test._testpt_osc_new
  WHERE !(OLD.id <=> NEW.id)
    AND test._testpt_osc_new.id <=> OLD.id;
  REPLACE INTO test._testpt_osc_new (id, name) VALUES (NEW.id, NEW.name);
END

Insert trigger:

CREATE TRIGGER pt_osc_test_testpt_osc_ins
AFTER INSERT ON test.testpt_osc
FOR EACH ROW
REPLACE INTO test._testpt_osc_new (id, name) VALUES (NEW.id, NEW.name);
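To make the effect of these triggers concrete, here is a small hypothetical session run against the original table while the copy is in progress; each statement is mirrored into the new table by the corresponding trigger:

-- hypothetical DML issued by the application during the copy
INSERT INTO test.testpt_osc (id, name) VALUES (100, 'alice');
-- insert trigger: REPLACE INTO test._testpt_osc_new (100, 'alice')

UPDATE test.testpt_osc SET name = 'bob' WHERE id = 100;
-- update trigger: id unchanged, so the DELETE IGNORE matches nothing,
-- then REPLACE INTO test._testpt_osc_new (100, 'bob')

DELETE FROM test.testpt_osc WHERE id = 100;
-- delete trigger: DELETE IGNORE FROM test._testpt_osc_new WHERE id <=> 100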
Then the data is copied in chunks.

For each chunk, pt-osc first determines the chunk's upper boundary:

SELECT /*!40001 SQL_NO_CACHE */ id FROM test.testpt_osc FORCE INDEX (PRIMARY)
WHERE ((id >= '1')) ORDER BY id LIMIT 1999, 2 /*next chunk boundary*/;
Then it inserts that chunk into the new table:

INSERT LOW_PRIORITY IGNORE INTO test._testpt_osc_new (id, name)
SELECT id, name FROM test.testpt_osc FORCE INDEX (PRIMARY)
WHERE ((id >= '1') AND ((id <= ...)));

Once all chunks have been copied, the tables are swapped in the rename phase mentioned earlier.

II. A brief analysis of triggers

Tracing an insert into a table (t1) that has triggers writing into two other tables (t2 and t3), the function-call trace shows the order of the row inserts:

 ...  T@3: | >row_ins
 971  T@3: | row_ins: table: test/t2    (inserts data into the t2 table)
1406  T@3: | | >row_ins
1407  T@3: | row_ins: table: test/t1    (inserts data into the t1 table)
1779  T@3: | | | | >row_ins
1780  T@3: | row_ins: table: test/t3    (inserts data into the t3 table)
You can see the order here. Second, note that all of these operations are wrapped in a single transaction, which can be verified either from the function-call trace or from the binlog. Here is the binlog of one such call:
# at 194   (here the GTID event starts the transaction)
#200212 17:23:16 server id 1903313  end_log_pos 259 CRC32 0x4ff6735e  GTID  last_committed=0  sequence_number=1  rbr_only=...
SET @@SESSION.GTID_NEXT= '92008a52-4b7d-11ea-9ec6-000c29c8aca8:...'
# at 259
#200212 17:23:16 server id 1903313  end_log_pos 331 CRC32 0x1ebd3446  Query  thread_id=3  exec_time=0  error_code=0
BEGIN
# at 331
#200212 17:23:16 server id 1903313  end_log_pos 384 CRC32 0xe748dc3a  Rows_query
# INSERT INTO t2 values (new.id)
# at 384
#200212 17:23:16 server id 1903313  end_log_pos 429 CRC32 0x093c5fe3  Table_map: `test`.`t1` mapped to number 10...
# at 429
#200212 17:23:16 server id 1903313  end_log_pos 474 CRC32 0x92691238  Table_map: `test`.`t2` mapped to number 11...
# at 474
#200212 17:23:16 server id 1903313  end_log_pos 519 CRC32 0x5b9a710f  Table_map: `test`.`t3` mapped to number 11...
# at 519
#200212 17:23:16 server id 1903313  end_log_pos 559 CRC32 0xe41b1119  Write_rows: table id 11...
# at 559
#200212 17:23:16 server id 1903313  end_log_pos 599 CRC32 0x36c3511c  Write_rows: table id 10...
# at 599
#200212 17:23:16 server id 1903313  end_log_pos 639 CRC32 0xa68b9ae6  Write_rows: table id 111 flags: STMT_END_F
### INSERT INTO `test`.`t2`
### SET
###   @1=... /* INT meta=0 nullable=0 is_null=0 */
### INSERT INTO `test`.`t1`
### SET
###   @1=... /* INT meta=0 nullable=1 is_null=0 */
### INSERT INTO `test`.`t3`
### SET
###   @1=... /* INT meta=0 nullable=0 is_null=0 */
# at 639   (here the XID event commits the transaction)
#200212 server id 1903313  end_log_pos 670 CRC32 0xbbb6547b  Xid = 19
COMMIT
Here the binlog not only verifies the order of execution but also confirms that all of the operations are contained in one transaction. Because every statement is wrapped in the same transaction, the locking footprint is larger: it covers not only the table touched by the DML statement itself but also the related tables written by the triggers, which deserves extra attention.
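The trace and binlog above come from a setup that is not shown in full here. A hypothetical reconstruction that produces the same ordering (t2, then t1, then t3) would look roughly like this; the table and trigger names are assumptions:

-- hypothetical setup: t1 carries a BEFORE INSERT trigger writing t2
-- and an AFTER INSERT trigger writing t3
CREATE TABLE t1 (id INT PRIMARY KEY);
CREATE TABLE t2 (id INT PRIMARY KEY);
CREATE TABLE t3 (id INT PRIMARY KEY);

CREATE TRIGGER t1_bi BEFORE INSERT ON t1
  FOR EACH ROW INSERT INTO t2 VALUES (new.id);

CREATE TRIGGER t1_ai AFTER INSERT ON t1
  FOR EACH ROW INSERT INTO t3 VALUES (new.id);

-- a single insert then produces row inserts in the order t2, t1, t3,
-- all inside one transaction
INSERT INTO t1 VALUES (1000);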
Second, not only are all of the statements wrapped in one transaction, they also share a single error-reporting interface, which leads to errors like the following:
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

mysql> insert into t1 values (1);
ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'
mysql>
Surprisingly, the t1 table contains no data at all, yet the insert returns a duplicate-key error. The reason is that although t1 has no data, the insert performed by a trigger into t2 or t3 may violate a uniqueness constraint there; the error is then returned to the client through the shared error interface.
Finally, triggers tend to make processing logic confusing; avoid them wherever you can.
III. The insert ignore / replace syntax
Regarding the ignore keyword, take insert ignore as the example. Generally speaking, when insert ignore meets a duplicate row it simply skips it, downgrading the duplicate-key error. Up to that point it detects the duplicate the same way replace does, but replace behaves differently afterwards: instead of skipping the duplicate, it deletes the existing row and then performs the insert (or, as we will see, sometimes an in-place update). In other words, the way the duplicate is triggered is the same; what differs is what is executed after the duplicate is found. Let's take a closer look.
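A minimal sketch of the behavioural difference, on a hypothetical table t:

CREATE TABLE t (id INT PRIMARY KEY, v VARCHAR(10));
INSERT INTO t VALUES (1, 'old');

INSERT IGNORE INTO t VALUES (1, 'new');
-- duplicate is skipped: 0 rows affected, warning 1062, row stays (1, 'old')

REPLACE INTO t VALUES (1, 'new');
-- duplicate is replaced: 2 rows affected, row becomes (1, 'new')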
First, for an insert statement we need to locate the position where the row is to be inserted; that part is skipped here.
1. The duplicate-row check
This step determines whether a duplicate row already exists for the primary key / unique indexes. Roughly, the check locates a position in the index using the value to be inserted, and then compares the record under the positioned cursor with the value being inserted. Here are the stack frames:
Primary key:
#0  row_ins_duplicate_error_in_clust (flags=0, cursor=0x7fffec4347d0, entry=0x7367c00, thr=0x7362980, mtr=0x7fffec433fa0) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:2273
#1  0x0000000001ae47d4 in row_ins_clust_index_entry_low (flags=0, mode=2, index=0x73674f0, n_uniq=1, entry=0x7367c00, n_ext=0, thr=0x7362980, dup_chk_only=false) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:2555
#2  0x0000000001ae697a in row_ins_clust_index_entry (index=0x73674f0, entry=0x7367c00, thr=0x7362980, n_ext=0, dup_chk_only=false) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3293
#3  0x0000000001ae6e88 in row_ins_index_entry (index=0x73674f0, entry=0x7367c00, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3429
#4  0x0000000001ae73e2 in row_ins_index_entry_step (node=0x7362710, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3579
#5  0x0000000001ae7749 in row_ins (node=0x7362710, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3717
#6  0x0000000001ae7bae in row_ins_step (thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3853
#7  0x0000000001b05f73 in row_insert_for_mysql_using_ins_graph (mysql_rec=0x736e7a0 "\375\002", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1738
#8  0x0000000001b06484 in row_insert_for_mysql (mysql_rec=0x736e7a0 "\375\002", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1859
#9  0x00000000019adca9 in ha_innobase::write_row (this=0x736e4b0, record=0x736e7a0 "\375\002") at /mysql/mysql-5.7.26/storage/innobase/handler/ha_innodb.cc:7598
#10 0x0000000000f67dc0 in handler::ha_write_row (this=0x736e4b0, buf=0x736e7a0 "\375\002") at /mysql/mysql-5.7.26/sql/handler.cc:8062
#11 0x00000000017cf173 in write_record (thd=0x6dd5660, table=0x735afa0, info=0x7fffec435b50, update=0x7fffec435ad0) at /mysql/mysql-5.7.26/sql/sql_insert.cc:1873
#12 0x00000000017cc24a in Sql_cmd_insert::mysql_insert (this=0x6debbc8, thd=0x6dd5660, table_list=0x6deb638) at /mysql/mysql-5.7.26/sql/sql_insert.cc:769
#13 0x00000000017d2d7f in Sql_cmd_insert::execute (this=0x6debbc8, thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_insert.cc:3118
#14 0x000000000159a70c in mysql_execute_command (thd=0x6dd5660, first_level=true) at /mysql/mysql-5.7.26/sql/sql_parse.cc:3596
#15 0x00000000015a06c0 in mysql_parse (thd=0x6dd5660, parser_state=0x7fffec437610) at /mysql/mysql-5.7.26/sql/sql_parse.cc:5570
#16 0x0000000001595283 in dispatch_command (thd=0x6dd5660, com_data=0x7fffec437d80, command=COM_QUERY) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1484
#17 0x00000000015940bc in do_command (thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1025
#18 0x00000000016cbf91 in handle_connection (arg=0x6dda360) at /mysql/mysql-5.7.26/sql/conn_handler/connection_handler_per_thread.cc:306
#19 0x0000000001921c64 in pfs_spawn_thread (arg=0x6d7d0f0) at /mysql/mysql-5.7.26/storage/perfschema/pfs.cc:2190
#20 0x0000003823e079d1 in start_thread () from /lib64/libpthread.so.0
#21 0x0000003823ae8b6d in clone () from /lib64/libc.so.6
Secondary unique index:
#0  row_ins_scan_sec_index_for_duplicate (flags=0, index=0x7366e10, entry=0x7367ca8, thr=0x7362980, s_latch=false, mtr=0x7fffec434020, offsets_heap=0x7389038) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:2050
#1  0x0000000001ae5d35 in row_ins_sec_index_entry_low (flags=0, mode=2, index=0x7366e10, offsets_heap=0x7389038, heap=0x7368538, entry=0x7367ca8, trx_id=0, thr=0x7362980, dup_chk_only=false) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3033
#2  0x0000000001ae6cea in row_ins_sec_index_entry (index=0x7366e10, entry=0x7367ca8, thr=0x7362980, dup_chk_only=false) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3382
#3  0x0000000001ae6ea6 in row_ins_index_entry (index=0x7366e10, entry=0x7367ca8, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3431
#4  0x0000000001ae73e2 in row_ins_index_entry_step (node=0x7362710, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3579
#5  0x0000000001ae7749 in row_ins (node=0x7362710, thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3717
#6  0x0000000001ae7bae in row_ins_step (thr=0x7362980) at /mysql/mysql-5.7.26/storage/innobase/row/row0ins.cc:3853
#7  0x0000000001b05f73 in row_insert_for_mysql_using_ins_graph (mysql_rec=0x736e7a0 "\375\003", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1738
#8  0x0000000001b06484 in row_insert_for_mysql (mysql_rec=0x736e7a0 "\375\003", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1859
#9  0x00000000019adca9 in ha_innobase::write_row (this=0x736e4b0, record=0x736e7a0 "\375\003") at /mysql/mysql-5.7.26/storage/innobase/handler/ha_innodb.cc:7598
#10 0x0000000000f67dc0 in handler::ha_write_row (this=0x736e4b0, buf=0x736e7a0 "\375\003") at /mysql/mysql-5.7.26/sql/handler.cc:8062
#11 0x00000000017cf173 in write_record (thd=0x6dd5660, table=0x735afa0, info=0x7fffec435b50, update=0x7fffec435ad0) at /mysql/mysql-5.7.26/sql/sql_insert.cc:1873
#12 0x00000000017cc24a in Sql_cmd_insert::mysql_insert (this=0x6debbc8, thd=0x6dd5660, table_list=0x6deb638) at /mysql/mysql-5.7.26/sql/sql_insert.cc:769
#13 0x00000000017d2d7f in Sql_cmd_insert::execute (this=0x6debbc8, thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_insert.cc:3118
#14 0x000000000159a70c in mysql_execute_command (thd=0x6dd5660, first_level=true) at /mysql/mysql-5.7.26/sql/sql_parse.cc:3596
#15 0x00000000015a06c0 in mysql_parse (thd=0x6dd5660, parser_state=0x7fffec437610) at /mysql/mysql-5.7.26/sql/sql_parse.cc:5570
#16 0x0000000001595283 in dispatch_command (thd=0x6dd5660, com_data=0x7fffec437d80, command=COM_QUERY) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1484
#17 0x00000000015940bc in do_command (thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1025
#18 0x00000000016cbf91 in handle_connection (arg=0x6dda360) at /mysql/mysql-5.7.26/sql/conn_handler/connection_handler_per_thread.cc:306
#19 0x0000000001921c64 in pfs_spawn_thread (arg=0x6d7d0f0) at /mysql/mysql-5.7.26/storage/perfschema/pfs.cc:2190
#20 0x0000003823e079d1 in start_thread () from /lib64/libpthread.so.0
#21 0x0000003823ae8b6d in clone () from /lib64/libc.so.6
If a duplicate row is found, the following two cases need to be considered.
If the duplicate row is still locked by another transaction, an implicit-to-explicit lock conversion is attempted; for the primary key the call path is:
lock_clust_rec_read_check_and_lock -> lock_rec_convert_impl_to_expl -> lock_rec_convert_impl_to_expl_for_trx
This is because, as we know, an insert does not normally create an explicit lock record. If a duplicate row exists and the transaction holding it has neither committed nor rolled back, the current statement must wait for that transaction to commit or roll back before it can either raise the error or continue the insert. Note that when replace / insert ... on duplicate key update performs the uniqueness check it usually requests a LOCK_S lock, whereas other operations usually request LOCK_X.
If the duplicate row is not held by another transaction, a duplicate-row error is raised. Note that this is not yet the error returned to the client but the internal error HA_ERR_FOUND_DUPP_KEY, which in the InnoDB layer is DB_DUPLICATE_KEY (converted in convert_error_code_to_mysql). How this error is then handled depends on the syntax used.
Of course, if there is no duplicate row, the insert simply proceeds, and that is exactly what insert ignore / replace do in that case. What happens when there is a duplicate is what the following analysis covers.
So we can see that insert ignore / replace use the primary key / unique keys to decide whether a row counts as a duplicate; the real difference lies in how the error HA_ERR_FOUND_DUPP_KEY is handled afterwards. If a table has no unique index at all, then even two completely identical rows are not treated as duplicates: they are simply regarded as two different rows. The same logic applies to insert ... on duplicate key update.
During the uniqueness check, the primary key is checked first, and then each unique index is checked in turn.
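A quick sketch of the no-unique-index case just described, on a hypothetical table:

-- with no primary key or unique index, "duplicates" are never detected
CREATE TABLE t_nouk (a INT, b VARCHAR(10));

INSERT IGNORE INTO t_nouk VALUES (1, 'x');
INSERT IGNORE INTO t_nouk VALUES (1, 'x');   -- inserted again, no warning
REPLACE INTO t_nouk VALUES (1, 'x');         -- also just inserts a third identical row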
2. Possible rollback operations
First, for multi-row inserts and insert ... select, the InnoDB layer still inserts one row at a time, so it helps to think of the whole flow on a per-row basis. In the function Sql_cmd_insert::mysql_insert there is a large while loop, and that loop is the per-row processing loop.
We also need to understand that during the uniqueness check the primary key is checked first and, if it passes, the primary-key record is inserted; the secondary unique indexes are then checked and inserted in turn. This creates a problem: if the primary-key record has already been inserted but a secondary unique index then turns out to violate uniqueness, the data inserted for the primary key must be rolled back. Likewise, in an insert ... select, if some earlier rows were inserted without violating uniqueness but a later row violates it, the rows inserted so far must be rolled back as well. This logic is handled in the function row_insert_for_mysql_using_ins_graph.
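A small sketch of that second case (hypothetical table t_uk): the second row of the multi-row insert violates the unique key, so the row already inserted by the same statement is rolled back as well:

CREATE TABLE t_uk (id INT PRIMARY KEY, u INT, UNIQUE KEY uk (u));
INSERT INTO t_uk VALUES (1, 10);

INSERT INTO t_uk VALUES (2, 20), (3, 10);
-- ERROR 1062 (23000): Duplicate entry '10' for key 'uk'
-- the already-inserted row (2, 20) is undone along with the failed one

SELECT * FROM t_uk;   -- only (1, 10) remains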
The rollback stack frame:
#0  row_undo_ins (node=0x73685c0, thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/row/row0uins.cc:466
#1  0x0000000001b5d918 in row_undo (node=0x73685c0, thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/row/row0undo.cc:327
#2  0x0000000001b5dbae in row_undo_step (thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/row/row0undo.cc:411
#3  0x0000000001ab1847 in que_thr_step (thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/que/que0que.cc:1047
#4  0x0000000001ab1a63 in que_run_threads_low (thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/que/que0que.cc:1111
#5  0x0000000001ab1c25 in que_run_threads (thr=0x73671e8) at /mysql/mysql-5.7.26/storage/innobase/que/que0que.cc:1151
#6  0x0000000001bc622b in trx_rollback_to_savepoint_low (trx=0x7fffedc5b8c0, savept=0x7fffec434b60) at /mysql/mysql-5.7.26/storage/innobase/trx/trx0roll.cc:118
#7  0x0000000001bc64d2 in trx_rollback_to_savepoint (trx=0x7fffedc5b8c0, savept=0x7fffec434b60) at /mysql/mysql-5.7.26/storage/innobase/trx/trx0roll.cc:159
#8  0x0000000001b03b92 in row_mysql_handle_errors (new_err=0x7fffec434b5c, trx=0x7fffedc5b8c0, thr=0x7362980, savept=0x7fffec434b60) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:767
#9  0x0000000001b05ff2 in row_insert_for_mysql_using_ins_graph (mysql_rec=0x736e7a0 "\375\n", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1752
#10 0x0000000001b06484 in row_insert_for_mysql (mysql_rec=0x736e7a0 "\375\n", prebuilt=0x7362170) at /mysql/mysql-5.7.26/storage/innobase/row/row0mysql.cc:1859
#11 0x00000000019adca9 in ha_innobase::write_row (this=0x736e4b0, record=0x736e7a0 "\375\n") at /mysql/mysql-5.7.26/storage/innobase/handler/ha_innodb.cc:7598
#12 0x0000000000f67dc0 in handler::ha_write_row (this=0x736e4b0, buf=0x736e7a0 "\375\n") at /mysql/mysql-5.7.26/sql/handler.cc:8062
#13 0x00000000017cf173 in write_record (thd=0x6dd5660, table=0x735afa0, info=0x7fffec435b50, update=0x7fffec435ad0) at /mysql/mysql-5.7.26/sql/sql_insert.cc:1873
#14 0x00000000017cc24a in Sql_cmd_insert::mysql_insert (this=0x6debbc8, thd=0x6dd5660, table_list=0x6deb638) at /mysql/mysql-5.7.26/sql/sql_insert.cc:769
#15 0x00000000017d2d7f in Sql_cmd_insert::execute (this=0x6debbc8, thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_insert.cc:3118
#16 0x000000000159a70c in mysql_execute_command (thd=0x6dd5660, first_level=true) at /mysql/mysql-5.7.26/sql/sql_parse.cc:3596
#17 0x00000000015a06c0 in mysql_parse (thd=0x6dd5660, parser_state=0x7fffec437610) at /mysql/mysql-5.7.26/sql/sql_parse.cc:5570
#18 0x0000000001595283 in dispatch_command (thd=0x6dd5660, com_data=0x7fffec437d80, command=COM_QUERY) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1484
#19 0x00000000015940bc in do_command (thd=0x6dd5660) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1025
#20 0x00000000016cbf91 in handle_connection (arg=0x6dda360) at /mysql/mysql-5.7.26/sql/conn_handler/connection_handler_per_thread.cc:306
#21 0x0000000001921c64 in pfs_spawn_thread (arg=0x6d7d0f0) at /mysql/mysql-5.7.26/storage/perfschema/pfs.cc:2190
#22 0x0000003823e079d1 in start_thread () from /lib64/libpthread.so.0
#23 0x0000003823ae8b6d in clone () from /lib64/libc.so.6
3. Handling of the duplicate-key error HA_ERR_FOUND_DUPP_KEY
If a duplicate row exists and HA_ERR_FOUND_DUPP_KEY is raised, the insert cannot proceed as-is, and what happens next depends on the syntax. The branch logic can be found in the function write_record.
Internally there are three ways of handling a duplicate-row error:
enum enum_duplicates { DUP_ERROR, DUP_REPLACE, DUP_UPDATE };
DUP_ERROR: this represents an ordinary insert / insert ignore statement.
This is the normal case. For a plain insert, the error is thrown back to the client. For insert ignore, no error is reported, only a warning, as follows:
/*
  If IGNORE option is used, handler errors will be downgraded
  to warnings and don't have to stop the iteration.
*/

mysql> insert ignore into tpk2 values (5);
Query OK, 0 rows affected, 1 warning (6 min 3.60 sec)

mysql> show warnings;
+---------+------+---------------------------------------+
| Level   | Code | Message                               |
+---------+------+---------------------------------------+
| Warning | 1062 | Duplicate entry '5' for key 'PRIMARY' |
+---------+------+---------------------------------------+
1 row in set (0.00 sec)
Note, however, that in current versions the auto-increment counter does not fall back after such an error.

DUP_REPLACE: this represents the replace statement, which fully replaces the row that violates the uniqueness constraint with the newly inserted data. We usually think of replace as a delete followed by an insert, but that is not entirely true. Setting aside foreign-key references and delete triggers, if the violated unique index is the last one checked, an update is used instead; in the other cases it really is a delete plus an insert. We can see this in the following code:
if (last_uniq_key(table, key_nr) &&                     // is this the last unique index checked?
    !table->file->referenced_by_foreign_key() &&
    (!table->triggers || !table->triggers->has_delete_triggers()))
{
  if ((error = table->file->ha_update_row(table->record[1],
                                          table->record[0])) &&   // call the update interface
      error != HA_ERR_RECORD_IS_THE_SAME)
    goto err;
  if (error != HA_ERR_RECORD_IS_THE_SAME)
    info->stats.deleted++;                              // number of affected rows + 1
  goto after_trg_n_copied_inc;
}
else
{
  ...
  if ((error = table->file->ha_delete_row(table->record[1])))     // delete via the delete interface
    goto err;
  info->stats.deleted++;                                // number of affected rows + 1
  ...
  /* Let us attempt do write_row() once more */         // the loop then performs a normal insert
}
DUP_UPDATE: this represents the insert ... on duplicate key update syntax, which on a duplicate behaves like an update statement. Note that it is not a complete replacement of the row: only the columns named in the UPDATE clause are changed, and the other columns are left untouched. The relevant code is as follows:

if ((error = table->file->ha_update_row(table->record[1],
                                        table->record[0])) &&     // call the update interface
    error != HA_ERR_RECORD_IS_THE_SAME)
{
  ...
}
if (error != HA_ERR_RECORD_IS_THE_SAME)
  info->stats.updated++;        // number of affected rows + 1
else
  error = 0;

In addition, for an ordinary insert the affected-row count is normally 1. When replace / insert ... on duplicate key update actually hits and changes a duplicate row (as opposed to inserting directly), the reported affected-row count is usually as follows:
mysql> replace testpri2 (id, a, b) values (7, ...);
Query OK, 2 rows affected (2.74 sec)

mysql> insert into testpri2 (a, b) values ('mmmmnb', 'ffhhh') on duplicate key update b = ...;
Query OK, 2 rows affected (3.81 sec)
replace: if the update interface is taken, no insert is actually performed; the affected-row count is incremented once in info->stats.deleted++ and once more at the end of the insert path. If the delete + insert path is taken, the insert really is performed: info->stats.deleted++ adds 1 and the insert path adds 1 again. Either way the result comes out as 2.

insert ... on duplicate key update: this always goes through the update interface and never actually performs the insert; info->stats.updated++ adds 1 and then info->stats.copied++ adds 1, so the count is again 2.

So it should not be surprising that the affected-row count is 2 in both cases.

Other: the interface through which the DML result (affected rows) is returned to the client:

#0  my_ok (thd=0x6ddc080, affected_rows=1, id=0, message=0x7fffec4357d0 "Rows matched: 1  Changed: 1  Warnings: 0") at /mysql/mysql-5.7.26/sql/sql_class.h:4753
#1  0x0000000001652574 in mysql_update (thd=0x6ddc080, fields=..., values=..., limit=18446744073709551614, handle_duplicates=DUP_ERROR, found_return=0x7fffec435d28, updated_return=0x7fffec435d20) at /mysql/mysql-5.7.26/sql/sql_update.cc:1092
#2  0x00000000016580ea in Sql_cmd_update::try_single_table_update (this=0x6df2350, thd=0x6ddc080, switch_to_multitable=0x7fffec435dcf) at /mysql/mysql-5.7.26/sql/sql_update.cc:2891
#3  0x0000000001658637 in Sql_cmd_update::execute (this=0x6df2350, thd=0x6ddc080) at /mysql/mysql-5.7.26/sql/sql_update.cc:3018
#4  0x000000000159a7ba in mysql_execute_command (thd=0x6ddc080, first_level=true) at /mysql/mysql-5.7.26/sql/sql_parse.cc:3606
#5  0x00000000015a06c0 in mysql_parse (thd=0x6ddc080, parser_state=0x7fffec437610) at /mysql/mysql-5.7.26/sql/sql_parse.cc:5570
#6  0x0000000001595283 in dispatch_command (thd=0x6ddc080, com_data=0x7fffec437d80, command=COM_QUERY) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1484
#7  0x00000000015940bc in do_command (thd=0x6ddc080) at /mysql/mysql-5.7.26/sql/sql_parse.cc:1025
#8  0x00000000016cbf91 in handle_connection (arg=0x6dd9480) at /mysql/mysql-5.7.26/sql/conn_handler/connection_handler_per_thread.cc:306
#9  0x0000000001921c64 in pfs_spawn_thread (arg=0x6da4430) at /mysql/mysql-5.7.26/storage/perfschema/pfs.cc:2190
#10 0x0000003823e079d1 in start_thread () from /lib64/libpthread.so.0
#11 0x0000003823ae8b6d in clone () from /lib64/libc.so.6

That is all on the question of what pt-online-schema-change is; I hope the content above is of some help.