Shulou (Shulou.com), SLTechnology News & Howtos > Database — updated 2025-01-19
Many inexperienced developers are unsure how MySQL row-level locks and table-level locks behave. This article summarizes the common problems and their solutions; hopefully it will help you resolve them.
MySQL Row-Level Locks and Table-Level Locks
How to ensure the consistency and validity of concurrent data access is a problem every database must solve, and lock conflicts are an important factor affecting the performance of concurrent database access. From this point of view, locks are both particularly important and particularly complex for databases.
Overview of MySQL Lock
Compared with other databases, the locking mechanism of MySQL is relatively simple, and its most prominent feature is that different storage engines support different locking mechanisms.
For example:
The MyISAM and MEMORY storage engines use table-level locks (table-level locking).
The InnoDB storage engine supports both row-level locks (row-level locking) and table-level locks, but row-level locks are used by default.
The characteristics of these three MySQL lock types can be roughly summarized as follows:
Table-level locks: low overhead, fast locking; no deadlocks; large lock granularity, the highest probability of lock conflicts, and the lowest concurrency.
Row-level locks: high overhead, slow locking; deadlocks can occur; the smallest lock granularity, the lowest probability of lock conflicts, and the highest concurrency.
Page locks: overhead and locking time between those of table locks and row locks; deadlocks can occur; lock granularity between table locks and row locks, with moderate concurrency.
Only from the point of view of locks: table-level locks are more suitable for applications that focus on queries, with only a small amount of data updated according to index conditions, such as Web applications, while row-level locks are more suitable for applications with a large number of concurrent updates of a small amount of different data according to index conditions and concurrent queries, such as some online transaction processing (OLTP) systems. The following sections focus on MySQL table locks and InnoDB row locks.
MyISAM table lock
The MyISAM storage engine supports only table locks, which was the only lock type available in the first few versions of MySQL. As applications demanded more transaction integrity and concurrency, MySQL began to develop transaction-capable storage engines; the BDB engine with page locks gradually appeared, followed by the InnoDB engine with row locks (InnoDB was actually developed by a separate company, Innobase, which has since been acquired by Oracle). But MyISAM's table lock is still the most widely used lock type. This section describes the use of MyISAM table locks in detail.
Query table-level lock contention
Table locking contention on the system can be analyzed by examining the table_locks_waited and table_locks_immediate state variables:
mysql> show status like 'table%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Table_locks_immediate | 2979  |
| Table_locks_waited    | 0     |
+-----------------------+-------+
2 rows in set (0.00 sec)
If the value of Table_locks_waited is high, there is a serious table-level lock contention condition.
Lock mode of MySQL table-level lock
MySQL has two modes of table-level locking: table shared read lock (Table Read Lock) and table exclusive write lock (Table Write Lock).
The read operation of the MyISAM table will not block other users' read requests to the same table, but will block write requests to the same table.
The write operation of MyISAM table will block other users' read and write operations on the same table.
Access to a MyISAM table is serialized between reads and writes, and between writes themselves.
When a thread acquires a write lock on a table, only the thread that holds the lock can update the table. The read and write operations of other threads wait until the lock is released.
Acquire a WRITE lock on table film_text:

mysql> lock table film_text write;
Query OK, 0 rows affected (0.00 sec)
The current session can perform query, update, and insert operations on the locked table:
mysql> select film_id,title from film_text where film_id = 1001;
+---------+-------------+
| film_id | title       |
+---------+-------------+
|    1001 | Update Test |
+---------+-------------+
1 row in set (0.00 sec)

mysql> insert into film_text (film_id,title) values (1003,'Test');
Query OK, 1 row affected (0.00 sec)

mysql> update film_text set title = 'Test' where film_id = 1001;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
Queries from another session (session 2) on the locked table are blocked, waiting for the lock to be released:

mysql> select film_id,title from film_text where film_id = 1001;
(waiting)

Session 1 releases the lock:

mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)

Session 2 acquires the lock, and its query returns:

mysql> select film_id,title from film_text where film_id = 1001;
+---------+-------+
| film_id | title |
+---------+-------+
|    1001 | Test  |
+---------+-------+
1 row in set (57.59 sec)
How are table locks acquired?
MyISAM automatically acquires read locks on all tables involved before executing a query (SELECT), and automatically acquires write locks on the tables involved before executing update operations (UPDATE, DELETE, INSERT, etc.). This process requires no user intervention, so users generally do not need to lock MyISAM tables directly with the LOCK TABLES command; the explicit locking in the examples above is mainly for demonstration and is not strictly necessary.
One purpose of explicitly locking MyISAM tables is to simulate transactions to some extent and to obtain a consistent read of multiple tables at a point in time. For example, suppose there is an order table orders, which records the total amount (total) of each order, and an order detail table order_detail, which records the subtotal (subtotal) of each product in each order. If we need to check whether the totals of the two tables match, we may need to execute the following two SQL statements:
select sum(total) from orders;
select sum(subtotal) from order_detail;
If the two tables are not locked first, the result may be wrong, because the order_detail table may change while the first statement is executing. The correct way is therefore:
lock tables orders read local, order_detail read local;
select sum(total) from orders;
select sum(subtotal) from order_detail;
unlock tables;
The following two points should be specifically stated.
The example above adds the "local" option to LOCK TABLES. Its effect is that, when the conditions for MyISAM concurrent inserts are met, other users are still allowed to insert records concurrently at the end of the table. The issue of concurrent inserts into MyISAM tables is covered further below.
When explicitly locking tables with LOCK TABLES, you must acquire all the locks on the tables involved at once, and MySQL does not support lock escalation. That is, after LOCK TABLES, you can access only the explicitly locked tables, not unlocked ones; likewise, if you take a read lock, you can only perform queries, not updates. Automatic locking behaves essentially the same way: MyISAM always acquires all the locks an SQL statement needs at once. This is also why MyISAM tables are deadlock-free (Deadlock Free).
A session uses the LOCK TABLE command to add a read lock to the table film_text. This session can query the records in the locked table, but updating or accessing other tables will prompt an error; at the same time, another session can query the records in the table, but the update will cause lock waiting.
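The behavior just described can be sketched as a two-session experiment, reusing the film_text table from the earlier examples (film_cat is a hypothetical second, unlocked table used only to show the error):

```sql
-- Session 1: take a table READ lock
lock table film_text read;
select film_id, title from film_text where film_id = 1001;  -- OK
update film_text set title = 'New' where film_id = 1001;    -- fails: table was locked with a READ lock
select * from film_cat;                                     -- fails: film_cat was not locked with LOCK TABLES

-- Session 2 (concurrently): reads succeed, writes wait
select film_id, title from film_text where film_id = 1001;  -- OK, returns immediately
update film_text set title = 'New' where film_id = 1001;    -- blocks until session 1 runs UNLOCK TABLES
```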
When using LOCK TABLES, not only must all the tables used be locked at once, but each alias under which a table appears in the SQL statement must also be locked separately, as many times as it appears; otherwise an error occurs!
Examples are given below.
(1) Obtain a read lock on the actor table:

mysql> lock table actor read;
Query OK, 0 rows affected (0.00 sec)
(2) However, accessing the table through an alias produces an error:
mysql> select a.first_name,a.last_name,b.first_name,b.last_name from actor a, actor b where a.first_name = b.first_name and a.first_name = 'Lisa' and a.last_name = 'Tom' and a.last_name <> b.last_name;
ERROR 1100 (HY000): Table 'a' was not locked with LOCK TABLES
(3) Aliases need to be locked separately:

mysql> lock table actor as a read, actor as b read;
Query OK, 0 rows affected (0.00 sec)
(4) The query using the aliases now executes correctly:

mysql> select a.first_name,a.last_name,b.first_name,b.last_name from actor a, actor b where a.first_name = b.first_name and a.first_name = 'Lisa' and a.last_name = 'Tom' and a.last_name <> b.last_name;
+------------+-----------+------------+-----------+
| first_name | last_name | first_name | last_name |
+------------+-----------+------------+-----------+
| Lisa       | Tom       | LISA       | MONROE    |
+------------+-----------+------------+-----------+
1 row in set (0.00 sec)
Concurrent insertion (Concurrent Inserts)
As mentioned above, reads and writes on a MyISAM table are serialized, but only in general terms. Under certain conditions, MyISAM tables also support concurrent query and insert operations.
The MyISAM storage engine has a system variable, concurrent_insert, specifically designed to control its concurrent insertion behavior, which can be 0, 1, or 2, respectively.
When concurrent_insert is set to 0, concurrent insertion is not allowed.
When concurrent_insert is set to 1, MyISAM allows one process to read a table while another process inserts records at the end of the table, provided there are no holes in the table (that is, no rows have been deleted from the middle of the table). This is MySQL's default setting.
When concurrent_insert is set to 2, records are allowed to be inserted concurrently at the end of the table, regardless of whether there are holes in the MyISAM table.
The concurrent-insert feature of the MyISAM storage engine can be used to reduce lock contention between queries and inserts on the same table in an application. For example, setting the concurrent_insert system variable to 2 always allows concurrent inserts; at the same time, periodically running OPTIMIZE TABLE during idle periods defragments the table, reclaiming the holes left in the middle by deleted records. For a detailed description of the OPTIMIZE TABLE statement, see the section "Two simple and practical optimization methods" in Chapter 18.
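As a sketch, the two settings just mentioned might be applied as follows (concurrent_insert and OPTIMIZE TABLE are real MySQL features; the table name t is illustrative):

```sql
-- Always allow concurrent inserts at the end of the table,
-- even when the table has holes from deleted rows
SET GLOBAL concurrent_insert = 2;

-- Periodically, during idle hours, defragment the table to
-- reclaim the holes left by deleted rows
OPTIMIZE TABLE t;
```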
Lock scheduling of MyISAM
As mentioned earlier, the read and write locks of the MyISAM storage engine are mutually exclusive, and read and write operations are serialized. So how does MySQL handle the case where one process requests a read lock on a MyISAM table while another process requests a write lock on the same table? The answer is that the writer acquires the lock first. Moreover, even if a read request enters the lock wait queue before a write request, the write lock is still placed ahead of the read request! This is because MySQL considers write requests generally more important than read requests. It is also why MyISAM tables are unsuitable for workloads with heavy mixes of updates and queries: a large volume of updates can make it hard for queries to ever obtain a read lock, potentially blocking them forever. The situation can sometimes become very bad! Fortunately, we can adjust MyISAM's scheduling behavior with a few settings.
Specify the startup parameter low-priority-updates so that the MyISAM engine gives read requests priority by default.
Execute the command SET LOW_PRIORITY_UPDATES=1 to lower the priority of update requests issued by the current connection.
Specify the LOW_PRIORITY attribute on an individual INSERT, UPDATE, or DELETE statement to lower that statement's priority.
Although all three methods above make either updates or queries take strict priority, they can still be used to solve serious read-lock-wait problems in applications where queries are relatively important (such as user login systems).
In addition, MySQL also provides a compromise method to adjust the read-write conflict, that is, set an appropriate value for the system parameter max_write_lock_count. When the read lock of a table reaches this value, MySQL temporarily lowers the priority of the write request, giving the read process a chance to acquire the lock.
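A minimal sketch of the three adjustments plus the compromise setting (all option and variable names are real MySQL/MyISAM options; the table name and value 16 are illustrative):

```sql
-- Per-connection: lower the priority of this connection's updates
SET LOW_PRIORITY_UPDATES = 1;

-- Per-statement: lower the priority of a single update
UPDATE LOW_PRIORITY film_text SET title = 'Test' WHERE film_id = 1001;

-- Compromise: once a table's read locks reach this count,
-- temporarily lower the priority of waiting write requests
SET GLOBAL max_write_lock_count = 16;
```

The startup-parameter form of the first adjustment is to launch the server with --low-priority-updates.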
The problems caused by the write-first scheduling mechanism, and their remedies, were discussed above. One more point deserves emphasis: long-running queries can likewise starve the writing processes! Applications should therefore avoid long-running queries where possible, and resist the urge to solve every problem with a single SELECT statement: such seemingly ingenious SQL is often complex and slow to execute. When possible, the SQL can be "decomposed" using intermediate tables and similar measures, so that each step of the query completes quickly and lock conflicts are reduced. If complex queries are unavoidable, schedule them during database idle periods; periodic statistics jobs, for example, can run at night.
InnoDB lock
There are two biggest differences between InnoDB and MyISAM: one is to support transactions (TRANSACTION); the other is to use row-level locks. There are many differences between row-level locks and table-level locks. In addition, the introduction of transactions also brings some new problems. Let's first introduce a little background, and then discuss the locking problem of InnoDB in detail.
1. Transaction (Transaction) and its ACID attribute
A transaction is a logical processing unit composed of a set of SQL statements. A transaction has the following four attributes, which are usually referred to as the ACID attributes of a transaction.
Atomicity: a transaction is an atomic unit of work; its modifications to the data are either all performed or none are.
Consistency: data must be in a consistent state at both the beginning and the end of a transaction. This means that all relevant data rules must be applied to the transaction's modifications to maintain data integrity, and at the end of the transaction all internal data structures (such as B-tree indexes or doubly linked lists) must also be correct.
Isolation: the database system provides an isolation mechanism that ensures transactions execute in an "independent" environment unaffected by concurrent operations. This means the intermediate states of a transaction are invisible to the outside, and vice versa.
Durability: after a transaction completes, its modifications to the data are permanent and are preserved even in the event of a system failure.
A bank transfer is a typical example of a transaction.
2. Problems caused by concurrent transaction processing
Compared with serial processing, concurrent transaction processing can greatly increase the utilization of database resources and improve the transaction throughput of the database system, thus supporting more users. However, concurrent transaction processing can also bring some problems, including the following situations.
Lost updates (Lost Update): when two or more transactions select the same row and then update it based on the value originally selected, a lost update occurs because each transaction is unaware of the others: the last update overwrites the updates made by the other transactions. For example, two editors each make an electronic copy of the same document, change their copy independently, and then save it, overwriting the original document. The editor who saves last overwrites the changes made by the other. This problem can be avoided if the second editor cannot access the file until the first editor has finished and committed the transaction.
Dirty Reads: a transaction is modifying a record, and the data of that record is in an inconsistent state before the transaction is completed and committed; at this time, another transaction also reads the same record. If uncontrolled, the second transaction reads the "dirty" data and makes further processing accordingly, resulting in uncommitted data dependencies. This phenomenon is vividly called "dirty reading".
Non-repeatable reads (Non-Repeatable Reads): at some point after reading certain data, a transaction reads the same data again, only to find that it has changed or that some records have been deleted! This phenomenon is called a "non-repeatable read".
Phantom Reads: a transaction re-reads previously retrieved data according to the same query conditions, only to find that other transactions insert new data that meets its query criteria. This phenomenon is called "phantom reading".
3. Transaction isolation level
Of the problems caused by concurrent transactions above, "lost updates" should normally be avoided entirely. However, preventing lost updates cannot be solved by the database's transaction controller alone; the application must add the necessary locks to the data being updated. Preventing lost updates is therefore the application's responsibility.
In fact, "dirty reading", "unrepeatable reading" and "phantom reading" are all problems of database read consistency, which must be solved by the database to provide a certain transaction isolation mechanism. Database transaction isolation can be basically divided into the following two ways.
One is to lock the data before reading it to prevent other transactions from modifying the data.
The other is to generate a consistent data snapshot (Snapshot) of a data request point in time without adding any locks, and use this snapshot to provide a certain level of consistent reading (statement level or transaction level). From the user's point of view, it seems that a database can provide multiple versions of the same data, so this technology is called data multi-version concurrency control (MultiVersion Concurrency Control, referred to as MVCC or MCC), and is often called multi-version database.
A consistent read, also called a snapshot read, uses the MVCC mechanism to read a committed version of the data from undo, so it is non-blocking.
A consistent read must see data that was committed as of some point in time, with one exception: data modified within the current transaction can be read later in the same transaction even before it is committed. Consistent reads are ordinary SELECT statements without FOR UPDATE, LOCK IN SHARE MODE, and so on; they use committed data in undo and need no locks (except MDL). Current reads, by contrast, are statements such as UPDATE, DELETE, SELECT ... FOR UPDATE, and SELECT ... LOCK IN SHARE MODE, which read the latest data in the database and lock the rows read (and, under the REPEATABLE READ isolation level, the gaps). If a lock cannot be acquired, the statement waits until it is acquired or times out.
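The distinction can be illustrated by reading the same row in different ways inside an InnoDB transaction (the table t and columns id, c are illustrative):

```sql
-- Consistent (snapshot) read: no row locks, reads a committed
-- version of the row from undo
SELECT * FROM t WHERE id = 1;

-- Current reads: read the latest committed data and lock the rows
-- (plus gaps, under REPEATABLE READ)
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;  -- shared (S) lock
SELECT * FROM t WHERE id = 1 FOR UPDATE;          -- exclusive (X) lock
UPDATE t SET c = c + 1 WHERE id = 1;              -- exclusive (X) lock
```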
The stricter the transaction isolation of the database, the less the side effects, but the greater the cost, because transaction isolation is essentially to make transactions "serialized" to a certain extent, which is obviously contradictory to "concurrency". At the same time, different applications have different requirements for read consistency and transaction isolation. For example, many applications are not sensitive to "unrepeatable reading" and "phantom reading" and may be more concerned about the ability to access data concurrently.
In order to solve the contradiction between "isolation" and "concurrency", ISO/ANSI SQL92 defines four transaction isolation levels. The isolation degree of each level is different and the allowed side effects are also different. Applications can balance the contradiction between "isolation" and "concurrency" by selecting different isolation levels according to their own business logic requirements. Table 20-5 provides a good overview of the characteristics of these four isolation levels.
Not every database fully implements all four isolation levels. For example, Oracle provides only two of the standard levels, Read Committed and Serializable, plus its own Read Only level; SQL Server supports, in addition to the four levels defined by ISO/ANSI SQL92, an isolation level called "snapshot", which strictly speaking is a Serializable level implemented with MVCC.
MySQL supports all four isolation levels, but there are some features in the implementation, such as using MVCC consistent reading under some isolation levels, but not in some cases, which will be further described in later chapters.
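For reference, MySQL lets the isolation level be chosen per session or globally, for example (tx_isolation is the variable name in older MySQL versions; newer versions use transaction_isolation):

```sql
-- Inspect the current level
SHOW VARIABLES LIKE 'tx_isolation';

-- Set the level for subsequent transactions in this session
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```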
4. Checking InnoDB row lock contention
You can analyze row lock contention on the system by checking the InnoDB_row_lock state variable.
mysql> show status like 'innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| InnoDB_row_lock_current_waits | 0     |
| InnoDB_row_lock_time          | 0     |
| InnoDB_row_lock_time_avg      | 0     |
| InnoDB_row_lock_time_max      | 0     |
| InnoDB_row_lock_waits         | 0     |
+-------------------------------+-------+
5 rows in set (0.01 sec)
If you find that lock contention is serious, such as high values of InnoDB_row_lock_waits and InnoDB_row_lock_time_avg, you can further observe the tables and data rows that have lock conflicts by setting InnoDB Monitors, and analyze the causes of lock contention.
The specific methods are as follows:
mysql> CREATE TABLE innodb_monitor (a INT) ENGINE=INNODB;
Query OK, 0 rows affected (0.14 sec)
You can then view it with the following statement:
mysql> show innodb status\G
The monitor can be stopped by issuing the following statement:

mysql> DROP TABLE innodb_monitor;
Query OK, 0 rows affected (0.05 sec)
Once the monitor is set up, the output of SHOW INNODB STATUS contains detailed information about current lock waits, including the table name, lock type, and the locked record, which helps further analysis and diagnosis. Note that after the monitor is enabled, the monitoring content is written to the error log every 15 seconds by default; if it is left on for a long time, the .err file becomes very large. So once the cause of the problem is confirmed, remember to drop the monitoring table to disable the monitor, or start the server with the "--console" option to stop writing the log file.
5. Row locking mode and locking method of InnoDB
InnoDB implements the following two types of row locks.
Shared lock (S): allows a transaction to read a row, and prevents other transactions from acquiring an exclusive lock on the same data set.
Exclusive lock (X): allows the transaction that holds it to update the data, and prevents other transactions from acquiring shared read locks or exclusive write locks on the same data set.
In addition, in order to allow row locks and table locks to co-exist and implement multi-granularity locking mechanism, InnoDB also has two kinds of internal intention locks (Intention Locks).
Both intention locks are table locks.
Intention shared lock (IS): a transaction intends to add a row shared lock to a data row, and the transaction must acquire the IS lock of the table before adding a shared lock to the data row.
Intention exclusive lock (IX): a transaction intends to add an exclusive lock to a data row, and the transaction must acquire the IX lock of the table before adding an exclusive lock to the data row.
If the lock mode of a transaction request is compatible with the current lock, InnoDB grants the requested lock to the transaction; conversely, if the two are not compatible, the transaction waits for the lock to be released.
The intention lock is automatically added by InnoDB without user intervention.
The summary is as follows:
1. For UPDATE, DELETE, and INSERT statements, InnoDB automatically adds exclusive locks (X) to the data sets involved;
2. For ordinary SELECT statements, InnoDB does not add any locks;
3. A transaction can explicitly add shared or exclusive locks to a record set with the following statements.
Shared lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE;
Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE;
Acquiring a shared lock with SELECT ... LOCK IN SHARE MODE is mainly useful when, because of a data dependency, you need to confirm that a row exists and ensure that no one else UPDATEs or DELETEs it.
However, if the current transaction also needs to update the record itself, this pattern can easily cause deadlock. Applications that need to update a row after locking it should use SELECT ... FOR UPDATE to acquire an exclusive lock instead.
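A typical pattern, sketched with an illustrative account table: when the transaction already knows it will update the row, it takes the exclusive lock up front with FOR UPDATE, so it never needs to upgrade a shared lock to an exclusive one (that upgrade path is what invites deadlock):

```sql
START TRANSACTION;

-- Take the exclusive lock immediately, since we intend to update
SELECT balance FROM account WHERE id = 1 FOR UPDATE;

-- ... application logic checks the balance here ...

UPDATE account SET balance = balance - 100 WHERE id = 1;
COMMIT;
```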
6. InnoDB row lock implementation
InnoDB row locking is achieved by locking the index items on the index, unlike Oracle, which is achieved by locking the corresponding data rows in the data block.
The row lock implementation feature of InnoDB means that InnoDB uses row-level locks only if the data is retrieved by index conditions, otherwise InnoDB will use table locks!
In practical applications, special attention should be paid to this feature of InnoDB row locks, otherwise it may lead to a large number of lock conflicts, thus affecting concurrency performance.
(1) InnoDB does use table locks instead of row locks when querying without index conditions.
(2) Because MySQL's row locks are locks on index entries rather than on records, transactions that access different rows will still conflict if they use the same index key. Keep this in mind when designing applications.
(3) when a table has multiple indexes, different transactions can use different indexes to lock different rows.
In addition, InnoDB uses row locks to lock data, whether it is a primary key index, a unique index, or a normal index.
(4) even if the index field is used in the condition, whether or not to use the index to retrieve data is decided by MySQL by judging the cost of different execution plans. If MySQL thinks that a full table scan is more efficient, for example, for some very small tables, it will not use indexes, in which case InnoDB will use table locks instead of row locks. Therefore, when analyzing lock conflicts, don't forget to check the execution plan of SQL to confirm that indexes are actually used. For a detailed discussion of when MySQL does not use indexes, see the introduction to the "indexing issues" section of this chapter.
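The effect described above is easy to reproduce; assuming a small illustrative table, only retrieval through an index takes row locks:

```sql
CREATE TABLE tab_no_index (id INT, name VARCHAR(10)) ENGINE = InnoDB;

-- No index on id: a concurrent "... WHERE id = 2 FOR UPDATE" in another
-- session blocks anyway, because without a usable index InnoDB ends up
-- locking all rows, which behaves like a table lock
SELECT * FROM tab_no_index WHERE id = 1 FOR UPDATE;

-- After adding an index, the same two statements lock only their own
-- rows and both sessions proceed
ALTER TABLE tab_no_index ADD INDEX idx_id (id);
```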
7. Gap lock (Next-Key lock)
When range conditions rather than equality conditions are used to retrieve data and shared or exclusive locks are requested, InnoDB locks the index entries of the existing records that match the condition; for key values within the range that do not exist (the "gap"), InnoDB locks the gap as well. This is the so-called Next-Key lock. For example, if the emp table contains only 101 records, with empid values 1, 2, ..., 100, 101, then the following SQL:

select * from emp where empid > 100 for update;
is a retrieval with a range condition. InnoDB locks not only the record with empid 101, but also the "gap" for empid values greater than 101 (records that do not exist). InnoDB uses gap locks for two reasons. On the one hand, they prevent phantom reads, as required by the relevant isolation levels: in the example above, without the gap lock, another transaction could insert records with empid greater than 100, and re-executing the statement in the first transaction would then produce a phantom read. On the other hand, they satisfy the needs of recovery and replication. The impact of recovery and replication on the locking mechanism, and InnoDB's use of gap locks under different isolation levels, are described further in subsequent chapters.
Clearly, when range conditions are used to retrieve and lock records, InnoDB's locking mechanism blocks concurrent inserts of key values within the range, which often causes severe lock waits. In practical application development, especially for applications with many concurrent inserts, the business logic should therefore be optimized to use equality conditions for accessing and updating data wherever possible, avoiding range conditions.
In particular, in addition to using a gap lock when locking through a range condition, InnoDB will also use a gap lock if an equal condition is used to request a lock on a record that does not exist!
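For example, with the emp table above (empid values 1 through 101), an equality lock request on a non-existent key still takes a gap lock under REPEATABLE READ (the values 201 and 202 are illustrative):

```sql
-- Session 1: empid = 201 does not exist; InnoDB nevertheless locks
-- the gap above the largest existing key (101, +infinity)
SELECT * FROM emp WHERE empid = 201 FOR UPDATE;

-- Session 2: this insert falls into the locked gap and must wait
INSERT INTO emp (empid) VALUES (202);
```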
The need for recovery and replication, the impact on the InnoDB locking mechanism
MySQL records the successful SQL statements of INSERT, UPDATE, DELETE and other updated data through BINLOG, and thus realizes the recovery of MySQL database and master-slave replication. MySQL's recovery mechanism (replication is actually constantly doing BINLOG-based recovery in Slave Mysql) has the following characteristics.
First, the recovery of MySQL is at the SQL statement level, that is, re-executing the SQL statement in BINLOG. Unlike Oracle databases, Oracle is based on database file blocks.
Second, MySQL's BINLOG is recorded in the order in which transactions commit, and recovery replays it in the same order. This, too, differs from Oracle, which recovers data according to the System Change Number (SCN): at the start of each transaction, Oracle allocates a globally unique SCN, and SCN order matches transaction start order.
From these two points it follows that MySQL's recovery mechanism requires that, before a transaction commits, no other concurrent transaction may insert any record that satisfies its locking conditions; that is, no phantoms are allowed. This goes beyond the requirements of the ISO/ANSI SQL92 "repeatable read" isolation level and in effect requires transactions to be serialized.
In addition, for SQL statements such as "insert into target_tab select * from source_tab where ..." and "create table new_tab ... select ... from source_tab where ..." (CTAS), the user performs no update on source_tab at all, yet MySQL treats this kind of statement specially.
(here InnoDB adds a shared lock to source_tab instead of using multi-version data consistent reading technology! )
In the above example, simply reading the data in the source_tab table is equivalent to executing a normal SELECT statement, which can be read with consistency. This is exactly what ORACLE does. It uses multiple versions of data implemented by MVCC technology to achieve consistent reading without any lock on the source_tab. We know that InnoDB also implements multi-version data, and there is no need to add any locks to ordinary SELECT consistent reading, but here InnoDB adds a shared lock to source_tab and does not use multi-version data consistent reading technology!
Why would MySQL do that? The reason is to ensure the correctness of recovery and replication. Because it is not locked, if other transactions update the source_tab during the execution of the above statement, it may result in an error in the result of data recovery. To demonstrate this, let's repeat the previous example, except that before session_1 executes the transaction, set the value of the system variable innodb_locks_unsafe_for_binlog to "on" (its default value is off)
As can be seen from above, after setting the value of the system variable innodb_locks_unsafe_for_binlog to "on", InnoDB no longer locks the source_tab, and the result is in line with the application logic, but if you analyze the contents of the BINLOG:
SET TIMESTAMP=1169175130
BEGIN
# at 274
# 070119 10:51:57 server id 1 end_log_pos 105 Query thread_id=1 exec_time=0 error_code=0
SET TIMESTAMP=1169175117
update source_tab set name = '8' where name = '1'
# at 379
# 070119 10:52:10 server id 1 end_log_pos 406 Xid = 5
COMMIT
# at 406
# 070119 10:52:14 server id 1 end_log_pos 474 Query thread_id=2 exec_time=0 error_code=0
SET TIMESTAMP=1169175134
BEGIN
# at 474
# 070119 10:51:29 server id 1 end_log_pos 119 Query thread_id=2 exec_time=0 error_code=0
SET TIMESTAMP=1169175089
insert into target_tab select d1, name from source_tab where name = '1'
# at 593
# 070119 10:52:14 server id 1 end_log_pos 620 Xid = 7
COMMIT
Notice that in the BINLOG, the update operation is recorded before the INSERT...SELECT. If this BINLOG is used for database recovery, the recovered data will be inconsistent with the actual application logic; if it is used for replication, the master and slave databases will diverge!
Therefore, INSERT...SELECT... and CREATE TABLE...SELECT... statements may block concurrent updates to the source table, causing waits on the source table's locks. If the query is complex, this can cause serious performance problems, so such statements should be avoided whenever possible. In fact, MySQL classifies this kind of SQL as non-deterministic and does not recommend it.
If you must use this kind of SQL to implement business logic and do not want to affect concurrent updates to the source table, you can take either of the following two measures:
One is to do as in the example above: set innodb_locks_unsafe_for_binlog to "on", forcing MySQL to use multi-version consistent reads. The price is that data may not be recovered or replicated correctly from the binlog, so this approach is not recommended.
The other is to implement the operation indirectly by combining "SELECT * FROM source_tab ... INTO OUTFILE" with "LOAD DATA INFILE ..."; done this way, MySQL places no lock on source_tab.
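A sketch of this second workaround (the file path is an assumption, and the server's secure_file_priv setting may restrict where files can be written):

```sql
-- Step 1: export the matching rows; this places no lock on source_tab
SELECT * FROM source_tab WHERE name = '1'
  INTO OUTFILE '/tmp/source_rows.txt';

-- Step 2: load the exported rows into the target table
LOAD DATA INFILE '/tmp/source_rows.txt' INTO TABLE target_tab;
```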
8. InnoDB consistent reads and locking under different isolation levels
As mentioned earlier, locks and multi-version data are InnoDB's means of implementing consistent reads and the ISO/ANSI SQL92 isolation levels, so InnoDB uses different consistent-read strategies and different locks when executing SQL at different isolation levels. At the same time, the characteristics of the data recovery and replication mechanisms also strongly influence the consistent-read and locking strategies of some SQL. These characteristics are summarized in Table 20-16 for reference.
Note 1: gap locks are not needed at the READ COMMITTED (RC) isolation level. The official documentation says:
Each consistent read, even within the same transaction, sets and reads its own fresh snapshot. For information about consistent reads, see Section 14.8.2.3, "Consistent Nonlocking Reads".
For locking reads (SELECT with FOR UPDATE or LOCK IN SHARE MODE), UPDATE statements, and DELETE statements, InnoDB locks only index records, not the gaps before them, and thus permits the free insertion of new records next to locked records. Gap locking is only used for foreign-key constraint checking and duplicate-key checking.
The official documentation is at: https://dev.mysql.com/doc/refman/5.5/en/innodb-transaction-isolation-levels.html
Note 2: at the REPEATABLE READ isolation level, if the index is unique and the search is for a unique value, no gap lock is taken; otherwise gap locks are used. The official documentation says:
REPEATABLE READ
This is the default isolation level for InnoDB. Consistent reads within the same transaction read the snapshot established by the first read. This means that if you issue several plain (nonlocking) SELECT statements within the same transaction, these SELECT statements are consistent also with respect to each other. See Section 14.8.2.3, "Consistent Nonlocking Reads".
For locking reads (SELECT with FOR UPDATE or LOCK IN SHARE MODE), UPDATE, and DELETE statements, locking depends on whether the statement uses a unique index with a unique search condition, or a range-type search condition.
For a unique index with a unique search condition, InnoDB locks only the index record found, not the gap before it.
For other search conditions, InnoDB locks the index range scanned, using gap locks or next-key locks to block insertions by other sessions into the gaps covered by the range. For information about gap locks and next-key locks, see Section 14.8.1, "InnoDB Locking".
Again, the official documentation is at: https://dev.mysql.com/doc/refman/5.5/en/innodb-transaction-isolation-levels.html
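The difference described in these two notes can be sketched with two sessions (the table t, its index, and its data are hypothetical):

```sql
-- Assume: CREATE TABLE t (id INT PRIMARY KEY, c INT, KEY idx_c (c));
-- with existing rows (1, 10) and (2, 20).

-- Session 1, under REPEATABLE READ:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN;
SELECT * FROM t WHERE c = 10 FOR UPDATE;   -- next-key lock covers the gap around c = 10

-- Session 2: blocks, because the new row falls into the locked gap
INSERT INTO t (id, c) VALUES (3, 15);

-- Under READ COMMITTED, or when the locking read is an equality search
-- on a unique index (e.g. WHERE id = 1), no gap lock is taken and the
-- insert above would proceed immediately.
```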
9. When should table locks be used?
For InnoDB tables, row-level locks should be used in most cases; after all, transactions and row locks are usually the very reasons we choose InnoDB. Nevertheless, table-level locks can be considered for a few special transactions.
The first case is a transaction that needs to update most or all of the rows of a relatively large table. With the default row locks, not only is the transaction executed inefficiently, but it may also cause long lock waits and lock conflicts for other transactions. In this case, table locks can be considered to speed up the transaction.
The second case is a transaction that involves multiple tables and is complex enough to easily cause deadlocks and large numbers of transaction rollbacks. In this case, you can consider locking all the tables involved in the transaction at once, thereby avoiding deadlocks and reducing the overhead of transaction rollbacks.
Of course, there should not be too many transactions of these two kinds in an application; otherwise, a MyISAM table should be considered instead.
Under InnoDB, pay attention to the following two points when using table locks.
(1) Although you can add table-level locks to an InnoDB table with LOCK TABLES, these table locks are managed not by the InnoDB storage engine layer but by the layer above it, MySQL Server. Only when autocommit=0 and innodb_table_locks=1 (the default) does the InnoDB layer know about the table locks added by MySQL Server, and MySQL Server know about the row locks added by InnoDB; only then can InnoDB automatically detect deadlocks involving table-level locks. Otherwise, InnoDB cannot detect or handle such deadlocks. Deadlocks are discussed in the next section.
(2) When locking an InnoDB table with LOCK TABLES, note that AUTOCOMMIT must be set to 0; otherwise MySQL does not actually lock the table. Do not call UNLOCK TABLES before the transaction ends, because UNLOCK TABLES implicitly commits the transaction. Conversely, COMMIT and ROLLBACK do not release table locks acquired with LOCK TABLES; you must release them with UNLOCK TABLES. The correct pattern is shown below.
For example, if you need to write to table t1 and read from table t2, you can do it like this:
SET AUTOCOMMIT=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
[do something with tables t1 and t2 here];
COMMIT;
UNLOCK TABLES;
10. About deadlocks
As mentioned above, MyISAM table locks are deadlock-free, because MyISAM always acquires all the locks it needs at once: either it gets them all or it waits, so deadlocks cannot occur. In InnoDB, however, except for transactions consisting of a single SQL statement, locks are acquired gradually, which makes deadlocks possible. Table 20-17 shows an example of a deadlock.
In that example, each of the two transactions needs an exclusive lock held by the other in order to proceed; this circular lock wait is a typical deadlock.
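The circular wait in Table 20-17 follows this general pattern (table and column names are hypothetical):

```sql
-- Session 1:
BEGIN;
UPDATE t1 SET col = 1 WHERE id = 1;   -- takes an X lock on the t1 row

-- Session 2:
BEGIN;
UPDATE t2 SET col = 1 WHERE id = 1;   -- takes an X lock on the t2 row

-- Session 1:
UPDATE t2 SET col = 2 WHERE id = 1;   -- waits for session 2's lock

-- Session 2:
UPDATE t1 SET col = 2 WHERE id = 1;   -- circular wait: InnoDB detects the
                                      -- deadlock and rolls one transaction back
```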
After a deadlock occurs, InnoDB usually detects it automatically, making one transaction release its locks and roll back while the other acquires the locks and completes. However, when external locks or table locks are involved, InnoDB cannot detect the deadlock automatically, and the situation must be resolved via the lock-wait timeout parameter innodb_lock_wait_timeout. Note that this parameter is not only for resolving deadlocks: under high concurrent access, if many transactions hang because they cannot obtain the locks they need immediately, they consume large amounts of resources and cause serious performance problems, possibly dragging down the whole database. Setting an appropriate lock-wait timeout avoids this.
Generally speaking, deadlocks are an application design problem; most can be avoided by adjusting business processes, database object design, transaction sizes, and the SQL statements that access the database.
Here are several common ways to avoid deadlocks through examples.
(1) If different programs in an application access multiple tables concurrently, they should agree to access the tables in the same order as far as possible; this greatly reduces the chance of deadlock. If two sessions access two tables in opposite orders, the chance of a deadlock is very high; if they access them in the same order, the deadlock is avoided.
(2) When a program processes data in batches, sorting the data in advance so that every thread processes records in the same fixed order also greatly reduces the possibility of deadlock.
(3) In a transaction, if you intend to update a record, you should request a lock of sufficient strength directly, that is, an exclusive lock, rather than first requesting a shared lock and upgrading to an exclusive lock at update time. By the time the upgrade is requested, other transactions may already hold a shared lock on the same record, causing lock conflicts or even deadlock. For a concrete demonstration, see the examples in section 20.3.3.
(4) As mentioned earlier, under the REPEATABLE READ isolation level, if two threads simultaneously take exclusive locks on records matching the same condition, and no matching record exists, both lock attempts succeed. Each program then finds that the record does not yet exist and tries to insert a new one; if both threads do this, a deadlock occurs. In this case, changing the isolation level to READ COMMITTED avoids the problem.
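A sketch of this scenario (the table t is hypothetical, with no row where id = 10):

```sql
-- Session 1, under REPEATABLE READ:
BEGIN;
SELECT * FROM t WHERE id = 10 FOR UPDATE;  -- no row exists, so a gap lock is taken

-- Session 2:
BEGIN;
SELECT * FROM t WHERE id = 10 FOR UPDATE;  -- gap locks do not conflict; also succeeds

-- Session 1:
INSERT INTO t (id) VALUES (10);            -- blocks on session 2's gap lock

-- Session 2:
INSERT INTO t (id) VALUES (10);            -- now each waits for the other;
                                           -- InnoDB rolls one transaction back
```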
(5) Under READ COMMITTED, if two threads first execute SELECT...FOR UPDATE to check whether a matching record exists and insert it if not, only one thread can insert successfully; the other will wait on a lock. When the first thread commits, the second thread's insert fails with a duplicate-key error, but despite the error it still holds an exclusive lock on the record! If a third thread then requests an exclusive lock, a deadlock can also occur.
In this case, you can perform the insert directly and catch the duplicate-key exception, or always execute ROLLBACK on a duplicate-key error to release the exclusive lock that was acquired.
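As a sketch, one way to sidestep the check-then-insert race entirely (table and column names are hypothetical; note this is a different technique from the SELECT ... FOR UPDATE pattern above, letting the server resolve the conflict in a single statement):

```sql
-- Insert the row, or update it in place if the key already exists;
-- no preliminary locking read is needed
INSERT INTO t (id, val) VALUES (10, 'x')
  ON DUPLICATE KEY UPDATE val = VALUES(val);
```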
Although measures such as the design adjustments and SQL optimizations described above can greatly reduce deadlocks, they are difficult to avoid completely. Therefore, always catching and handling deadlock exceptions in application code is a good programming habit.
If a deadlock occurs, the SHOW ENGINE INNODB STATUS command can be used to determine the cause of the most recent deadlock. The output includes details of the transactions involved: the SQL statements that caused the deadlock, the locks each transaction had acquired, the locks it was waiting for, and which transaction was rolled back. From this, the cause of the deadlock and possible improvements can be analyzed.
Here is an example of SHOW ENGINE INNODB STATUS output:
mysql> SHOW ENGINE INNODB STATUS\G
Summary
This chapter focused on the implementation characteristics of MySQL's MyISAM table-level locks and InnoDB row-level locks, and discussed the lock problems frequently encountered with the two storage engines and their solutions.
For MyISAM table locks, the following points are mainly discussed:
(1) Shared read locks (S) are compatible with each other, but shared read locks (S) and exclusive write locks (X), as well as exclusive write locks (X) with each other, are mutually exclusive; in other words, reads and writes are serial.
(2) under certain conditions, MyISAM allows concurrent execution of queries and inserts, which can be used to solve the lock contention problem of queries and inserts for the same table in the application.
(3) MyISAM's default lock scheduling gives writes priority, which is not necessarily suitable for all applications. Users can adjust read-write lock contention by setting the low_priority_updates system variable or by specifying the LOW_PRIORITY option in INSERT, UPDATE, and DELETE statements.
(4) Because table locks have a large locking granularity and serialize reads and writes, a MyISAM table may suffer serious lock waits when there are many update operations; in that case, consider using an InnoDB table to reduce lock conflicts.
For the InnoDB table, this chapter mainly discusses the following.
InnoDB's row locks are implemented by locking index entries; if data is not accessed through an index, InnoDB falls back to table-level locking.
It introduced InnoDB's gap-lock (next-key) mechanism and the reasons InnoDB uses gap locks.
Under different isolation levels, the locking mechanism and consistent read strategy of InnoDB are different.
The recovery and replication of MySQL also have a great impact on the InnoDB locking mechanism and consistent read strategy.
Lock conflicts and even deadlocks are difficult to avoid completely.
After understanding the characteristics of InnoDB locks, users can reduce lock conflicts and deadlocks through measures such as design and SQL adjustment, including:
Use a lower isolation level as much as possible
Carefully design the index and use the index to access the data as far as possible to make the locking more accurate, thus reducing the chance of lock conflict.
Choose a reasonable transaction size; small transactions are less prone to lock conflicts.
When explicitly locking a record set, it is best to request a lock of sufficient strength at once. For example, if you intend to modify the data, request an exclusive lock directly rather than first taking a shared lock and upgrading it to an exclusive lock at modification time, which easily leads to deadlock.
When different programs access a set of tables, they should try to agree to access the tables in the same order, and for a table, access the rows in the table in a fixed order as much as possible. This greatly reduces the chances of deadlocks.
Try to access data with equality conditions to avoid the impact of gap locks on concurrent inserts.
Do not request a higher lock level than actually needed; unless necessary, do not use explicit locking in queries.
For some specific transactions, table locks can be used to improve processing speed or reduce the possibility of deadlocks.