By Luo Xiaobo, WOQU Technology
Contents
1. A first look at mysqldump
1.1. Walking through the mysqldump backup process
1.2. Key steps in the mysqldump backup process
1.2.1. The difference between FLUSH TABLES and FLUSH TABLES WITH READ LOCK
1.2.2. The purpose of changing the isolation level
1.2.3. The purpose of the WITH CONSISTENT SNAPSHOT clause
1.2.4. The purpose of setting rollback points with savepoints
1.3. Does mysqldump have any pitfalls?
1.3.1. Pitfall one
1.3.2. Pitfall two
1.3.3. Is there any way to mitigate these problems?
2. Now a look at innobackupex
2.1. Walking through the innobackupex backup process
2.2. Why does innobackupex need to do this?
2.3. Does innobackupex have any pitfalls?
3. Summary
Overview
Anyone who works with databases knows the following:
Advantages of mysqldump: it is a logical backup that saves data in SQL form, which is convenient for scenarios such as single-database or single-table data migration and backup/recovery; SQL-format backup files are portable and easy to migrate between different databases; and InnoDB tables can be backed up online.
Disadvantages of mysqldump: it is single-threaded, so backups take a long time when the data volume is large, and non-transactional tables may even be locked for a long time during the backup, which can affect the business (restoring from SQL-format backups is also slow). In addition, because mysqldump queries all the data during the backup, it may evict hot data from memory.
Advantages of innobackupex: as a physical backup it bypasses the MySQL server layer, and the backup itself is a file-system-level copy, so backup is fast, recovery is fast, backups are taken online, and it supports parallel backup, encrypted transfer, and backup throttling.
Disadvantages of innobackupex: extracting the data of individual databases or tables is troublesome, it cannot recover data to a point in time on its own, it cannot back up to a remote host (backups can only be taken locally), and restoring incremental backups is also troublesome. The point-in-time recovery problem can be solved by combining innobackupex full backups with binlog-based incremental recovery.
As everyone knows, to see what these two backup tools do to the database during a backup, you can enable general_log and inspect it. The question is: what do the statements in the general_log output actually accomplish? What happens if those steps are skipped? Do these two tools have pitfalls that are usually overlooked? Read the analysis below, and you may find that you did not know these two backup tools as well as you thought!
Environment information
Server configuration:
* CPU: 4 vCPUs
* Memory: 4GB
* Disk: 250GB SAS
* NIC: 1000Mb/s
Operating system: CentOS release 6.5 (Final)
Database version: MySQL 5.7.17
Xtrabackup version: 2.4.4
Master/slave IPs (some of the demonstration steps in this article require a master/slave replication architecture):
* Master: 192.168.2.111 (hereinafter referred to as library A)
* Slave: 192.168.2.121 (hereinafter referred to as library B)
Key database configuration parameters:
* Master: double-1 settings (sync_binlog=1, innodb_flush_log_at_trx_commit=1), log_slave_updates, log-bin, binlog_rows_query_log_events=ON, server-id=3306111, gtid_mode=ON, enforce_gtid_consistency=ON, auto_increment_increment=2, auto_increment_offset=1
* Slave: double-1 settings, log_slave_updates, log-bin, binlog_rows_query_log_events=ON, server-id=3306121, gtid_mode=ON, enforce_gtid_consistency=ON, auto_increment_increment=2, auto_increment_offset=2
Create the test library and tables (two tables are created in the same library, one using the InnoDB engine and one using the MyISAM engine), as sketched below.
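The original CREATE statements were shown as a screenshot; a minimal stand-in follows (the library name luoxiaobo appears later in the article, but the table names and columns are assumptions reused in the sketches below):

    CREATE DATABASE IF NOT EXISTS luoxiaobo;
    -- one InnoDB table and one MyISAM table, as described above
    CREATE TABLE luoxiaobo.t_innodb (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(32)
    ) ENGINE = InnoDB;
    CREATE TABLE luoxiaobo.t_myisam (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(32)
    ) ENGINE = MyISAM;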
1. A first look at mysqldump
1.1. Walking through the mysqldump backup process
Usually, to keep lock time in the database as short as possible, a mysqldump backup uses the --single-transaction option to open a consistent-snapshot transaction, and, to obtain a binlog position consistent with the backed-up data, the --master-data option. Now log in to the host of library A and run a backup demonstration with these two options.
First, enable general_log in the database:
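The exact statements were shown as a screenshot; a minimal sketch, assuming the log is written to a file (log_output and the file path depend on your configuration):

    -- turn on the general query log for the demonstration
    SET GLOBAL general_log = ON;
    -- confirm the log destination and file location
    SHOW VARIABLES LIKE 'general_log%';
    SHOW VARIABLES LIKE 'log_output';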
Next, run the mysqldump backup (using strace to capture the system calls during execution), taking a backup of the test library luoxiaobo as the example:
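The command itself was shown as a screenshot; it likely looked roughly like this (credentials, paths, and the strace output file are assumptions; only --single-transaction and --master-data come from the text):

    # capture mysqldump's system calls while backing up the luoxiaobo library
    strace -f -o /tmp/mysqldump.strace \
        mysqldump -uroot -p --single-transaction --master-data=2 \
        luoxiaobo > /tmp/luoxiaobo.sql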
After the backup completes, look at the contents of general_log (with some irrelevant information removed):
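The general_log screenshot is not reproduced here; for a backup run with --single-transaction and --master-data, the key statements recorded typically look like the following (the table name t_innodb is the hypothetical one from above, and surrounding noise is omitted):

    FLUSH /*!40101 LOCAL */ TABLES
    FLUSH TABLES WITH READ LOCK
    SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
    START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */
    SHOW MASTER STATUS
    UNLOCK TABLES
    SAVEPOINT sp
    show create table `t_innodb`
    SELECT /*!40001 SQL_NO_CACHE */ * FROM `t_innodb`
    ROLLBACK TO SAVEPOINT sp
    RELEASE SAVEPOINT sp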
The system-call trace captured by strace is too long to reproduce here in full; for the details, see the WizNote link:
http://5d096a11.wiz03.com/share/s/1t2mEh0a-kl_2c2NZ33kSiac3oxBB40tGQNY2L6Z_M2LtLbG
Doesn't the strace output look a lot like the general_log contents? That is because what general_log records are exactly the SQL statements sent by mysqldump.
Comparing the general_log and strace output above: the strace output shows what requests the mysqldump process sent to the database process, while general_log records all client SQL requests received by the database; together they spell out the well-known key steps of the mysqldump backup process. The question is: why does the mysqldump backup need these steps? What happens if they are skipped? Each step is explained in detail below, one demonstration at a time.
1.2. Key steps in the mysqldump backup process
1.2.1. The difference between FLUSH TABLES and FLUSH TABLES WITH READ LOCK
FLUSH TABLES
Forces all tables in use to be closed and flushes the query cache, removing all cached query results, similar to the behavior of the RESET QUERY CACHE statement.
The MySQL 5.7 official documentation states that while a table is locked with a LOCK TABLES ... READ statement, FLUSH TABLES is not allowed (a FLUSH TABLES executed in another session blocks). If a read lock has already been taken on one table with LOCK TABLES ... READ and you need to flush another table, you can use the FLUSH TABLES tbl_name ... WITH READ LOCK statement in another session (discussed later).
Note:
* If a session has locked a table with a LOCK TABLES statement, a FLUSH TABLES statement in another session blocks until the table lock is released (see the sketch after this list).
* If one session is executing a DDL statement, a FLUSH TABLES statement in another session blocks.
* If one session is executing a large DML transaction (a DML statement is running and modifying data, as opposed to explicit locking with LOCK IN SHARE MODE or FOR UPDATE), a FLUSH TABLES statement in another session blocks.
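A minimal two-session sketch of the first case, using the hypothetical t_innodb table from above:

    -- session 1: take a table read lock
    LOCK TABLES luoxiaobo.t_innodb READ;

    -- session 2: blocks until session 1 releases its lock
    FLUSH TABLES;

    -- session 1: release the lock, letting session 2 proceed
    UNLOCK TABLES;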
FLUSH TABLES WITH READ LOCK
Closes all open tables and locks all tables in the instance with a global read lock. This makes it convenient to take a snapshot backup on a file system that supports snapshots; after the backup completes, release the lock with UNLOCK TABLES.
The FLUSH TABLES WITH READ LOCK statement acquires a global read lock, not table locks, so it does not behave like LOCK TABLES and UNLOCK TABLES. LOCK TABLES and UNLOCK TABLES interact with transactions as follows:
* If tables are locked with LOCK TABLES, starting a transaction releases that session's table locks (note: every table lock the session holds is released, and this only happens within the same session), just as if UNLOCK TABLES had been executed; by contrast, starting a transaction after taking the global read lock with FLUSH TABLES WITH READ LOCK does not release the global read lock (see the sketch after this list).
* If you start a transaction and then, inside it, lock tables with LOCK TABLES or take the global read lock with FLUSH TABLES WITH READ LOCK (this applies to LOCK TABLES on any table), the transaction is implicitly committed.
* Executing UNLOCK TABLES inside an open transaction has no effect.
* The official documentation also states: "if table locks have been acquired with LOCK TABLES, releasing them with UNLOCK TABLES implicitly commits any active transaction." Personally, I consider this a purely theoretical statement (or perhaps my understanding is limited), because I cannot construct the situation: when a session holds LOCK TABLES locks, starting a transaction releases them automatically (as mentioned above), and using LOCK TABLES inside a transaction implicitly commits it (also mentioned above), so a transaction can never coexist with LOCK TABLES locks for UNLOCK TABLES to release. As for FLUSH TABLES WITH READ LOCK: if LOCK TABLES locks exist before it is executed, FLUSH TABLES WITH READ LOCK blocks; and once FLUSH TABLES WITH READ LOCK has succeeded, a subsequent LOCK TABLES blocks, since no further table locks can be acquired (new requests other than SELECT and SHOW are blocked). So I do not see how UNLOCK TABLES could trigger an implicit commit.
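A sketch of the first two interactions, again using the hypothetical t_innodb table:

    -- starting a transaction releases table locks held by this session
    LOCK TABLES luoxiaobo.t_innodb READ;
    START TRANSACTION;  -- the read lock on t_innodb is released here
    COMMIT;

    -- LOCK TABLES inside a transaction implicitly commits it
    START TRANSACTION;
    INSERT INTO luoxiaobo.t_innodb (name) VALUES ('demo');
    LOCK TABLES luoxiaobo.t_innodb READ;  -- implicit commit: the INSERT is now permanent
    UNLOCK TABLES;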
Note:
* The FLUSH TABLES WITH READ LOCK statement does not block writes to the log tables, such as the general query log and slow query log tables.
* The FLUSH TABLES WITH READ LOCK statement is not compatible with XA transactions.
* If a session holds a table lock taken with LOCK TABLES, a FLUSH TABLES WITH READ LOCK statement in another session blocks until the table lock is released; if the database's lock_wait_timeout is set too low, mysqldump's FLUSH TABLES WITH READ LOCK will time out waiting for the global read lock and the backup will fail and exit (see the sketch after this list).
* If one session is executing a DDL statement, a FLUSH TABLES WITH READ LOCK statement in another session blocks; again, too low a lock_wait_timeout makes mysqldump time out on the global read lock and the backup fail.
* If one session is executing a large DML transaction (a DML statement is running and modifying data, as opposed to explicit locking with LOCK IN SHARE MODE or FOR UPDATE), a FLUSH TABLES WITH READ LOCK statement in another session blocks; again, too low a lock_wait_timeout makes mysqldump time out on the global read lock and the backup fail.
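A sketch of the timeout behavior, assuming a deliberately low lock_wait_timeout; the error shown is what MySQL typically reports when the wait times out:

    -- session 1: hold a table lock
    LOCK TABLES luoxiaobo.t_innodb READ;

    -- session 2: give up waiting after 5 seconds
    SET SESSION lock_wait_timeout = 5;
    FLUSH TABLES WITH READ LOCK;
    -- after ~5s: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction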
FLUSH TABLES tbl_name [, tbl_name]... WITH READ LOCK
Flushes the named tables and acquires read locks on them. The statement first acquires an exclusive MDL lock on each table, so it must wait for all transactions using those tables to commit. It then flushes the tables from the table cache, reopens them, acquires table read locks (similar to LOCK TABLES ... READ), and downgrades the MDL locks from exclusive to shared. Once the statement has acquired the read locks and downgraded the MDL locks, other sessions can read the tables but cannot modify their data or structure.
Executing this statement requires the RELOAD and LOCK TABLES privileges.
The statement applies only to base tables (persistent tables); temporary tables are silently ignored, and using it on a view raises an error.
As with LOCK TABLES, after locking a table with this statement, starting a transaction in the same session automatically releases the lock.
The MySQL 5.7 official documentation presents this variant as a way to flush a table and take a read lock on it in one step, addressing the limitation that FLUSH TABLES cannot be used while a table is read-locked with LOCK TABLES ... READ; in that case, FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK can be used instead. The documentation is not entirely precise, however. In practice, in the same session that holds a LOCK TABLES ... READ lock, this statement is not allowed at all (regardless of whether it names the same table) and fails with: ERROR 1192 (HY000): Can't execute the given command because you have active locked tables or an active transaction. In a different session, the statement is allowed when it names a different table; when it names the same table, the FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK statement waits.
When the statement is executed repeatedly in the same session, it fails regardless of which tables it names: ERROR 1192 (HY000): Can't execute the given command because you have active locked tables or an active transaction. From a different session, different tables are allowed, but naming the same table causes the statement to wait.
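A short sketch of this variant, once more with the hypothetical t_innodb table:

    -- session 1: flush one table and take a read lock on it
    FLUSH TABLES luoxiaobo.t_innodb WITH READ LOCK;

    -- session 2: reads succeed, writes block
    SELECT COUNT(*) FROM luoxiaobo.t_innodb;             -- OK
    INSERT INTO luoxiaobo.t_innodb (name) VALUES ('x');  -- blocks

    -- session 1: release the lock
    UNLOCK TABLES;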
1.2.2. The purpose of changing the isolation level
Why execute the SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ statement? Because the START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */ statement is needed to open a consistent-snapshot transaction, and by the rules of transaction-consistent reads, a consistent snapshot is supported only at the REPEATABLE READ isolation level. Executing START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */ at any other isolation level produces the following warning:
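The warning screenshot can be reproduced as follows; the message shown is what MySQL 5.7 typically emits and is included here for illustration:

    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
    START TRANSACTION WITH CONSISTENT SNAPSHOT;
    SHOW WARNINGS;
    -- Warning 138: InnoDB: WITH CONSISTENT SNAPSHOT was ignored because this phrase
    -- can only be used with REPEATABLE READ isolation level.
    ROLLBACK;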
Due to space constraints, this article is published in installments. The next part, "How much do you know about the mysqldump and innobackupex backup processes (2)", will cover "the purpose of the WITH CONSISTENT SNAPSHOT clause" and "the purpose of setting rollback points with savepoints" among the key steps of the mysqldump backup. Stay tuned.