Example Analysis of mysqldump process

2025-02-26 Update. From: SLTechnology News&Howtos (Database)

Shulou (Shulou.com) 05/31 report:

This article walks through the mysqldump dump process with examples from the source code. The approach is simple and practical, so let's dig into how mysqldump actually works.

Important parameters

First, let's map each command-line option to its internal variable and look at the official comments:

--single-transaction: opt_single_transaction

Creates a consistent snapshot by dumping all tables in a single transaction. Works ONLY for tables stored in storage engines which support multiversioning (currently only InnoDB does); the dump is NOT guaranteed to be consistent for other storage engines. While a --single-transaction dump is in process, to ensure a valid dump file (correct table contents and binary log position), no other connection should use the following statements: ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, as consistent snapshot is not isolated from them. Option automatically turns off --lock-tables.

--master-data: opt_master_data

This causes the binary log position and filename to be appended to the output. If equal to 1, will print it as a CHANGE MASTER command; if equal to 2, that command will be prefixed with a comment symbol. This option will turn --lock-all-tables on, unless --single-transaction is specified too (in which case a global read lock is only taken a short time at the beginning of the dump; don't forget to read about --single-transaction below). In all cases, any action on logs will happen at the exact moment of the dump. Option automatically turns --lock-tables off.

--lock-all-tables: opt_lock_all_tables

Locks all tables across all databases. This is achieved by taking a global read lock for the duration of the whole dump. Automatically turns --single-transaction and --lock-tables off.

--lock-tables: lock_tables

Lock all tables for read. (Defaults to on; use --skip-lock-tables to disable.)

--flush-logs: flush_logs

Flush logs file in server before starting dump. Note that if you dump many databases at once (using the option --databases= or --all-databases), the logs will be flushed for each database dumped. The exception is when using --lock-all-tables or --master-data: in this case the logs will be flushed only once, corresponding to the moment all tables are locked. So if you want your dump and the log flush to happen at the same exact moment you should use --lock-all-tables or --master-data with --flush-logs.

--delete-master-logs: opt_delete_master_logs

Delete logs on master after backup. This automatically enables --master-data.

--apply-slave-statements: opt_slave_apply (5.5)

Adds 'STOP SLAVE' prior to 'CHANGE MASTER' and 'START SLAVE' to bottom of dump.

Main code flow

Let's look at the code of 5.1 and 5.5 respectively, based on the latest trunk (5.1 at rev. 3909; 5.5 at rev. 4148).

Main process of version 5.1

Let's first take a look at version 5.1.

if ((opt_lock_all_tables || opt_master_data) &&
    do_flush_tables_read_lock(mysql))
  goto err;

If --master-data or --lock-all-tables is set, perform the FLUSH TABLES operations.

Let's take a look at what is done in do_flush_tables_read_lock().

do_flush_tables_read_lock():

return
  ( mysql_query_with_error_report(mysql_con, 0,
        ((opt_master_data != 0) ?                /* if --master-data */
            "FLUSH /*!40101 LOCAL */ TABLES" :   /* then use FLUSH LOCAL TABLES */
            "FLUSH TABLES")) ||                  /* otherwise plain FLUSH TABLES */
    mysql_query_with_error_report(mysql_con, 0,
        "FLUSH TABLES WITH READ LOCK") );        /* if that succeeded, take the global read lock */

FLUSH TABLES is issued first; once it succeeds, FLUSH TABLES WITH READ LOCK takes the global read lock.
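To make the branch explicit, here is a minimal Python sketch of the statement selection. The function name is mine, not mysqldump's; it only mirrors the logic of the C snippet above:

```python
def flush_statements(opt_master_data):
    """Sketch of do_flush_tables_read_lock(): pick the FLUSH variant,
    then take the global read lock with a second statement."""
    # With --master-data, LOCAL (NO_WRITE_TO_BINLOG) keeps the FLUSH
    # itself out of the binary log.
    first = ("FLUSH /*!40101 LOCAL */ TABLES" if opt_master_data
             else "FLUSH TABLES")
    # The C code short-circuits on ||: the second statement runs only
    # if the first one succeeded.
    return [first, "FLUSH TABLES WITH READ LOCK"]
```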

Next, --single-transaction is checked.

if (opt_single_transaction && start_transaction(mysql))
  goto err;

If --single-transaction is set, a transaction is opened to read the data.

Let's look at the implementation of start_transaction():

start_transaction():

return (mysql_query_with_error_report(mysql_con, 0,
                                      "SET SESSION TRANSACTION ISOLATION "
                                      "LEVEL REPEATABLE READ") ||      /* set the session isolation level to RR */
        mysql_query_with_error_report(mysql_con, 0,
                                      "START TRANSACTION "
                                      "/*!40100 WITH CONSISTENT SNAPSHOT */"));  /* start a consistent-snapshot transaction */

The session isolation level is first set to REPEATABLE READ, and then START TRANSACTION is issued with the consistent-snapshot hint.

The next step is to get the master status:

if (opt_master_data && do_show_master_status(mysql))
  goto err;

If --master-data is set, the current master status is printed.

Then, if --single-transaction is enabled, the global read lock can be released, because the transaction has already started.

if (opt_single_transaction && do_unlock_tables(mysql)) /* unlock but no commit! */
  goto err;

An UNLOCK TABLES statement is issued inside do_unlock_tables() to release the global read lock.

do_unlock_tables():

return mysql_query_with_error_report(mysql_con, 0, "UNLOCK TABLES");

Then the dump_* functions are called to export the whole instance, a single database, or a single table as needed.

dump_all_databases() -> dump_all_tables_in_db()

if (lock_tables)
{
  DYNAMIC_STRING query;
  init_dynamic_string_checked(&query, "LOCK TABLES ", 256, 1024);
  for (numrows= 0 ; (table= getTableName(1)) ; )
  {
    char *end= strmov(afterdot, table);
    if (include_table((uchar*) hash_key, end - hash_key))
    {
      numrows++;
      dynstr_append_checked(&query, quote_name(table, table_buff, 1));
      dynstr_append_checked(&query, " READ /*!32311 LOCAL */,");
    }
  }
  if (numrows && mysql_real_query(mysql, query.str, query.length-1))
    DB_error(mysql, "when using LOCK TABLES");
  /* We shall continue here, if --force was given */
  dynstr_free(&query);
}
/* If --lock-tables (the default) is set, issue
   LOCK TABLES table_name READ ... before the export. */
...
while ((table= getTableName(0)))
{
  char *end= strmov(afterdot, table);
  if (include_table((uchar*) hash_key, end - hash_key))
  {
    dump_table(table, database);                     /* export one table */
    my_free(order_by, MYF(MY_ALLOW_ZERO_PTR));
    order_by= 0;
    if (opt_dump_triggers && mysql_get_server_version(mysql) >= 50009)
    {
      if (dump_triggers_for_table(table, database))  /* export its triggers */
      {
        if (path)
          my_fclose(md_result_file, MYF(MY_WME));
        maybe_exit(EX_MYSQLERR);
      }
    }
  }
}
/* dump_table() exports each table first; if --triggers is set,
   dump_triggers_for_table() then exports its triggers. */
...
if (lock_tables)
  VOID(mysql_query_with_error_report(mysql, 0, "UNLOCK TABLES"));
/* release the table locks after the export completes */

So we can see that when exporting with --master-data and --single-transaction, --lock-tables is automatically turned off: during the export only the table currently being dumped is being read, and tables that have already finished, or have not yet started, are not locked.

If the default --lock-tables mode is used instead, read locks on all tables of the database are taken first, then the export runs, and finally all the locks are released at once.
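The two modes can be summarized in a small Python model of the statement sequence. This is a simplified illustration of the 5.1 flow above (function and helper names are mine; error handling, --lock-all-tables, and per-table details are omitted):

```python
def dump_plan(single_transaction, master_data, tables):
    """Ordered statements mysqldump 5.1 issues, per the flow above.
    --single-transaction automatically turns --lock-tables off."""
    lock_tables = not single_transaction          # simplified default-mode toggle
    stmts = []
    if master_data:                               # do_flush_tables_read_lock()
        stmts += ["FLUSH TABLES", "FLUSH TABLES WITH READ LOCK"]
    if single_transaction:                        # start_transaction()
        stmts += ["SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ",
                  "START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */"]
    if master_data:                               # do_show_master_status()
        stmts.append("SHOW MASTER STATUS")
    if single_transaction:                        # unlock but no commit!
        stmts.append("UNLOCK TABLES")
    if lock_tables:                               # default mode: lock everything up front
        stmts.append("LOCK TABLES " +
                     ", ".join(t + " READ /*!32311 LOCAL */" for t in tables))
    stmts += ["-- dump_table(%s)" % t for t in tables]
    if lock_tables:                               # release all locks at the end
        stmts.append("UNLOCK TABLES")
    return stmts
```

Note how, for --single-transaction with --master-data, the global read lock is held only briefly: between FLUSH TABLES WITH READ LOCK and the UNLOCK TABLES issued right after SHOW MASTER STATUS.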

Main process of version 5.5

Next, let's compare how mysqldump changed in 5.5.

if ((opt_lock_all_tables || opt_master_data ||
     (opt_single_transaction && flush_logs)) &&
    do_flush_tables_read_lock(mysql))
  goto err;

The difference here is the added flush_logs check: a plain --single-transaction no longer calls do_flush_tables_read_lock(); --flush-logs must be given as well.
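The changed condition can be written as a tiny predicate, a sketch of just the boolean logic in the 5.5 code above (the function name is mine):

```python
def takes_global_read_lock_55(opt_lock_all_tables, opt_master_data,
                              opt_single_transaction, flush_logs):
    """Does mysqldump 5.5 call do_flush_tables_read_lock()?
    --single-transaction alone no longer triggers the global read
    lock; it does so only when combined with --flush-logs."""
    return bool(opt_lock_all_tables or opt_master_data or
                (opt_single_transaction and flush_logs))
```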

/*
  Flush logs before starting transaction since
  this causes implicit commit starting mysql-5.5.
*/
if (opt_lock_all_tables || opt_master_data ||
    (opt_single_transaction && flush_logs) ||
    opt_delete_master_logs)
{
  if (flush_logs || opt_delete_master_logs)
  {
    if (mysql_refresh(mysql, REFRESH_LOG))
      goto err;
    verbose_msg("-- main : logs flushed successfully!\n");
  }

  /* Not anymore! That would not be sensible. */
  flush_logs= 0;
}

5.5 tries FLUSH LOGS here, before the transaction starts, because FLUSH LOGS causes an implicit commit as of MySQL 5.5.

if (opt_delete_master_logs)
{
  if (get_bin_log_name(mysql, bin_log_name, sizeof(bin_log_name)))
    goto err;
}

This serves --delete-master-logs: the binlog file name is fetched here so the logs on the master can be deleted after the backup.

if (opt_single_transaction && start_transaction(mysql))
  goto err;

This part is unchanged from 5.1.

/* Add 'STOP SLAVE' to beginning of dump */
if (opt_slave_apply && add_stop_slave())
  goto err;
if (opt_master_data && do_show_master_status(mysql))
  goto err;
if (opt_slave_data && do_show_slave_status(mysql))
  goto err;
if (opt_single_transaction && do_unlock_tables(mysql)) /* unlock but no commit! */
  goto err;

New here are the opt_slave_apply and opt_slave_data branches, which add the STOP SLAVE statement and print the result of SHOW SLAVE STATUS.

Then dump_* is also called to export the data.

But because 5.5 has MDL (metadata locks), under --single-transaction every table already touched inside the transaction holds its MDL, so the dump can no longer be broken by concurrent DDL.

For example, once mysqldump has backed up tables a, b, and c, their MDL is not released while the transaction remains open. If another session then runs DDL on any of a, b, or c, it will block in "Waiting for table metadata lock". Tables that have not yet been backed up hold no MDL, so DDL on them can still proceed.

At this point, you should have a deeper understanding of the mysqldump process. Why not try it out in practice!
