Example Analysis of Lock Mechanism in mysql Database

2025-02-24 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 report --

This article walks through the locking mechanisms of the MySQL database with examples. It is fairly detailed and should be a useful reference; interested readers are encouraged to read on.

Pessimistic locking and optimistic locking:

Pessimistic locking: as the name implies, it assumes the worst. Every time you read data, you assume someone else will modify it, so you lock the data on every read; other sessions that want the data block until they can acquire the lock. Traditional relational databases use many such locking mechanisms: row locks, table locks, read locks, write locks, and so on.

Optimistic locking: as the name implies, it assumes the best. Every time you read data, you assume others will not modify it, so you take no lock; only when you update do you check whether anyone else changed the data in the meantime, typically with a mechanism such as a version number. Optimistic locking suits read-heavy workloads, where it can improve throughput; the write_condition-style mechanisms some databases provide are optimistic locks.
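The version-number check described above can be sketched in SQL. This is a minimal illustration, assuming a hypothetical accounts table with a version column (the table, ids, and values are made up for the example, not taken from the article):

```sql
-- Read the row and remember the version it carried.
SELECT balance, version FROM accounts WHERE id = 1;
-- Suppose this returned balance = 100, version = 7.

-- Update only if nobody has changed the row since we read it.
UPDATE accounts
   SET balance = 90, version = version + 1
 WHERE id = 1 AND version = 7;

-- If the statement reports 0 rows matched, another transaction got there
-- first; re-read the row and retry instead of overwriting its update.
```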

Table level: used by the MyISAM engine, which locks the entire table at once. While the lock is held, other processes cannot write to the table; if it is a write lock, other processes cannot read it either.

Page level: used by the BDB engine. Table-level locks are fast but conflict often; row-level locks conflict rarely but are slow. Page-level locking is the compromise: it locks one adjacent group of records at a time.

Row level: used by the InnoDB engine, which locks only the specified records, so other processes can still operate on other records in the same table.

The characteristics of the above three locks can be roughly summarized as follows:

1) Table-level lock: low overhead, fast locking; no deadlock; large lock granularity, the highest probability of lock conflict and the lowest concurrency.

2) Page lock: overhead and locking time are between those of table locks and row locks; deadlocks can occur; lock granularity is between table and row locks, and concurrency is moderate.

3) Row-level locks: high overhead and slow locking; deadlocks occur; locking granularity is the smallest, the probability of lock conflicts is the lowest, and the degree of concurrency is the highest.

Each of the three kinds of locks has its own characteristics. Looking purely at locking, table-level locks suit applications that are mostly queries with only a small amount of data updated by index conditions, such as Web applications; row-level locks suit applications with large numbers of concurrent updates to different rows by index conditions, combined with concurrent queries, such as online transaction processing (OLTP) systems.

There are two modes for MySQL table-level locks:

1. Table shared read lock (Table Read Lock). A read operation on a MyISAM table does not block other users' read requests for the same table, but it blocks write operations to it.

2. Table exclusive write lock (Table Write Lock). Writing to the MyISAM table blocks other users' read and write operations on the same table.

Reads and writes on a MyISAM table are serial: no write can run during a read, and vice versa. Under certain conditions, however, MyISAM also supports concurrent query and insert. The behavior is controlled by the system variable concurrent_insert. When it is set to 0, concurrent inserts are not allowed. When it is set to 1, and the table has no holes (that is, no deleted rows in the middle of the table), MyISAM allows one process to insert records at the end of the table while another process reads it. When it is set to 2, records may be inserted concurrently at the end of the table whether or not the table has holes.
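As a sketch, the concurrent_insert setting can be inspected and changed from a MySQL session (in MySQL 5.5 and later, the values 0/1/2 also go by the aliases NEVER/AUTO/ALWAYS):

```sql
-- Show the current setting.
SHOW VARIABLES LIKE 'concurrent_insert';

-- Allow concurrent inserts at the end of the table even when it has holes.
SET GLOBAL concurrent_insert = 2;
```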

How MyISAM schedules lock requests is also a key question. For example, when one process requests a read lock on a MyISAM table and another process requests a write lock on the same table, which does MySQL serve first? It turns out the writer acquires the lock first, even if the read request arrived in the lock wait queue earlier. This has a serious drawback: a heavy stream of writes can make it very hard for queries to obtain read locks, and read requests may block indefinitely. Fortunately, we can adjust MyISAM's scheduling behavior. Starting the server with the --low-priority-updates option makes the engine favor read requests by lowering the priority of updates; the same effect can be had per connection with SET LOW_PRIORITY_UPDATES=1.
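A minimal sketch of these scheduling knobs (the UPDATE reuses the film_text table from the experiments later in the article; the new title value is made up):

```sql
-- Give this session's updates lower priority than pending reads.
SET LOW_PRIORITY_UPDATES = 1;

-- Or lower the priority of a single statement only:
UPDATE LOW_PRIORITY film_text SET title = 'ALONE TRIP' WHERE film_id = 1001;
```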

The biggest differences between InnoDB locking and MyISAM locking are:

1. InnoDB supports transactions (TRANSACTION).

2. InnoDB uses row-level locks.

We know that a transaction is a logical unit of work consisting of a set of SQL statements, and that it has four properties (the ACID properties for short):

Atomicity: a transaction is an atomic unit of work; either all of its changes to the data are performed or none are.

Consistency: data must be in a consistent state both when a transaction begins and when it completes.

Isolation: the database provides an isolation mechanism ensuring that a transaction executes in an "independent" environment, unaffected by concurrent operations outside it.

Durability: once a transaction completes, its modifications to the data are permanent, even in the event of a system failure.

Problems caused by concurrent transaction processing

Compared with serial processing, concurrent transaction processing can greatly increase the utilization of database resources and improve the transaction throughput of the database system, thus supporting more users. However, concurrent transaction processing can also bring some problems, including the following situations.

1. Lost update (Lost Update): when two or more transactions select the same row and then update it based on the value originally selected, a lost update occurs: because each transaction is unaware of the others, the last update overwrites the updates made by the other transactions. For example, two editors each make an electronic copy of the same document, change their copies independently, and then save their changes, overwriting the original document. The editor who saves last overwrites the changes made by the other editor. The problem is avoided if the second editor cannot access the file until the first editor has finished and committed the transaction.

2. Dirty read (Dirty Reads): one transaction is modifying a record, and before it completes and commits, the record's data is in an inconsistent state; at this moment a second transaction reads the same record. If this is not controlled, the second transaction reads the "dirty" data and does further processing based on it, creating an uncommitted data dependency. This phenomenon is aptly called a "dirty read".

3. Non-repeatable read (Non-Repeatable Reads): at some point after reading certain data, a transaction reads the same data again, only to find that it has changed or that some records have been deleted. This phenomenon is called a "non-repeatable read".

4. Phantom read (Phantom Reads): a transaction re-runs a query with the same conditions as before and finds new rows, inserted in the meantime by other transactions, that match its conditions. This phenomenon is called a "phantom read".

Transaction isolation level

Of the problems caused by concurrent transactions listed above, the lost update should usually be avoided entirely. Preventing it, however, is not something the database's transaction controller can do alone: the application must place the necessary locks on the data it is about to update. Preventing lost updates is therefore the application's responsibility.

In fact, dirty reads, non-repeatable reads, and phantom reads are all database read-consistency problems, which the database must solve by providing some transaction isolation mechanism. Database transaction isolation is achieved in essentially one of two ways.

1. One is to lock the data before reading it, preventing other transactions from modifying it.

2. The other is to generate, without taking any locks, a consistent snapshot (Snapshot) of the data as of some point in time, and to serve reads from that snapshot at a given consistency level (statement level or transaction level). From the user's point of view the database appears to hold multiple versions of the same data, so the technique is called multi-version concurrency control (MultiVersion Concurrency Control, MVCC or MCC), and such systems are often called multi-version databases.

The stricter the transaction isolation of the database, the less the side effects, but the greater the cost, because transaction isolation is essentially to make transactions "serialized" to a certain extent, which is obviously contradictory to "concurrency". At the same time, different applications have different requirements for read consistency and transaction isolation. For example, many applications are not sensitive to "unrepeatable reading" and "phantom reading" and may be more concerned about the ability to access data concurrently.

In order to solve the contradiction between "isolation" and "concurrency", ISO/ANSI SQL92 defines four transaction isolation levels. The isolation degree of each level is different and the allowed side effects are also different. Applications can balance the contradiction between "isolation" and "concurrency" by selecting different isolation levels according to their own business logic requirements. Table 20-5 provides a good overview of the characteristics of these four isolation levels.

Read data consistency and allowed concurrent side effects:

    Isolation level      Read data consistency                       Dirty read   Non-repeatable read   Phantom read
    -------------------  ------------------------------------------  -----------  --------------------  ------------
    Read uncommitted     lowest; only guarantees that physically     Yes          Yes                   Yes
                         corrupted data is not read
    Read committed       statement level                             No           Yes                   Yes
    Repeatable read      transaction level                           No           No                    Yes
    Serializable         highest; transaction level                  No           No                    No

Finally, note that a given database does not necessarily implement all four isolation levels. Oracle, for example, provides only the two standard levels Read committed and Serializable, plus its own Read only level. SQL Server supports, in addition to the four ISO/ANSI SQL92 levels above, an isolation level called "snapshot", which strictly speaking is a Serializable level implemented with MVCC. MySQL supports all four isolation levels, but with some peculiarities of its own: under some isolation levels it uses MVCC consistent reads, while in some cases it does not.
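To experiment with these levels, a session can change its own isolation level; a minimal sketch (the variable is named transaction_isolation in MySQL 8.0, tx_isolation in older versions):

```sql
-- Inspect the session's current isolation level (MySQL 8.0 name shown).
SHOW VARIABLES LIKE 'transaction_isolation';

-- Apply REPEATABLE READ (InnoDB's default) to the next transaction only:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- Or switch the whole session to READ COMMITTED:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```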

InnoDB has two modes of row locks:

1) Shared lock (S): allows the holding transaction to read a row, and prevents other transactions from acquiring an exclusive lock on the same data set.

(SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE)

2) Exclusive lock (X): allows the holding transaction to update data, and prevents other transactions from acquiring shared read locks or exclusive write locks on the same data set. (SELECT * FROM table_name WHERE ... FOR UPDATE)

To allow row locks and table locks to coexist, InnoDB implements a multi-granularity locking mechanism; internally it also uses two kinds of intention locks (both table-level): the intention shared lock and the intention exclusive lock.

1) Intention shared lock (IS): the transaction intends to place shared locks on rows; it must acquire the IS lock on the table before placing a shared lock on any row.

2) Intention exclusive lock (IX): the transaction intends to place exclusive locks on rows; it must acquire the IX lock on the table before placing an exclusive lock on any row.

InnoDB row lock mode compatibility list (whether the requested mode is compatible with the mode currently held):

    Requested \ Current    X           IX          S           IS
    -------------------    ---------   ----------  ----------  ----------
    X                      conflict    conflict    conflict    conflict
    IX                     conflict    compatible  conflict    compatible
    S                      conflict    conflict    compatible  compatible
    IS                     conflict    compatible  compatible  compatible

If the lock mode of a transaction request is compatible with the current lock, InnoDB grants the requested lock to the transaction; conversely, if the two are not compatible, the transaction waits for the lock to be released.

Intention locks are added by InnoDB automatically and need no user intervention. For UPDATE, DELETE, and INSERT statements, InnoDB automatically places exclusive locks (X) on the data sets involved; for ordinary SELECT statements, InnoDB takes no locks at all. A transaction can explicitly place shared or exclusive locks on a record set with the following statements.

1. Shared lock (S): SELECT * FROM table_name WHERE … LOCK IN SHARE MODE.

2. Exclusive lock (X): SELECT * FROM table_name WHERE … FOR UPDATE.

InnoDB implements row locking by locking index entries, unlike Oracle, which locks the corresponding rows in the data blocks. A consequence of this implementation is that InnoDB uses row-level locks only when the data is retrieved via an index condition; otherwise, InnoDB falls back to table locks!

In practical applications, special attention should be paid to this feature of InnoDB row locks, otherwise it may lead to a large number of lock conflicts, thus affecting concurrency performance.
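As a sketch of this pitfall, assume a hypothetical InnoDB table tab_no_index whose name column carries no index (the table, column, and value are illustrative, not from the article):

```sql
-- name has no index, so InnoDB cannot lock by index entry: it effectively
-- locks every row, and updates to *other* rows by other sessions block too.
SELECT * FROM tab_no_index WHERE name = 'alice' FOR UPDATE;

-- After adding an index, the same statement locks only the matching rows:
ALTER TABLE tab_no_index ADD INDEX idx_name (name);
SELECT * FROM tab_no_index WHERE name = 'alice' FOR UPDATE;
```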

Query table-level lock contention

Table Lock contention:

Table locking contention on the system can be analyzed by examining the table_locks_waited and table_locks_immediate state variables:

mysql> show status like 'table%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Table_locks_immediate | 2979  |
| Table_locks_waited    | 0     |
+-----------------------+-------+
2 rows in set (sec)

If the value of Table_locks_waited is high, there is a serious table-level lock contention condition.

InnoDB row lock contention:

You can analyze row lock contention on the system by checking the InnoDB_row_lock state variable:

mysql> show status like 'innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Innodb_row_lock_current_waits | 0     |
| Innodb_row_lock_time          | 0     |
| Innodb_row_lock_time_avg      | 0     |
| Innodb_row_lock_time_max      | 0     |
| Innodb_row_lock_waits         | 0     |
+-------------------------------+-------+
5 rows in set (0.01 sec)

MyISAM write lock experiment:

A read on a MyISAM table does not block other users' read requests for the same table, but it blocks write requests to it; a write on a MyISAM table blocks other users' reads and writes on the same table. Reads and writes on a MyISAM table are serial! As the example in Table 20-2 shows, once a thread obtains a write lock on a table, only that thread can update the table; other threads' read and write operations wait until the lock is released.

USER1:

mysql> lock table film_text write;

The current session can query, update, and insert into the locked table:

mysql> select film_id,title from film_text where film_id = 1001;

USER2:

mysql> select film_id,title from film_text where film_id = 1001;

(waits for the lock)

USER1:

Release the lock:

mysql> unlock tables;

USER2:

USER2 obtains the lock, and the query returns.

Shared Lock experiment of InnoDB Storage engine

USER1: mysql> set autocommit = 0;
USER2: mysql> set autocommit = 0;

USER1:

The current session places a share-mode shared lock on the record with actor_id=178:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178 lock in share mode;

USER2:

The other session can still query the record, and can also place a share-mode shared lock on it:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178 lock in share mode;

USER1:

The current session tries to update the locked record, and waits for the lock:

mysql> update actor set last_name = 'MONROE T' where actor_id = 178;

(waits for the lock)

USER2:

The other session also tries to update the record, which causes a deadlock; that session's statement is rolled back with an error:

mysql> update actor set last_name = 'MONROE T' where actor_id = 178;

ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

USER1:

Having obtained the lock, USER1's update now succeeds:

mysql> update actor set last_name = 'MONROE T' where actor_id = 178;
Query OK, 1 row affected (17.67 sec)
Rows matched: 1  Changed: 1  Warnings: 0

An example of an exclusive lock for the InnoDB storage engine

USER1: mysql> set autocommit = 0;
USER2: mysql> set autocommit = 0;

USER1:

The current session places a FOR UPDATE exclusive lock on the record with actor_id=178:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178 for update;

USER2:

The other session can still query the record (a plain SELECT uses a consistent nonlocking read), but cannot place a shared lock on it; a locking read would wait for the lock:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178;

USER1:

The current session can update the locked record; the lock is released when the transaction commits:

mysql> update actor set last_name = 'MONROE T' where actor_id = 178;

USER2:

The other session then acquires the lock and reads the record committed by USER1:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178 for update;

Several important parameters for performance tuning

bulk_insert_buffer_size

The size of the bulk-insert buffer. This parameter applies to the MyISAM storage engine and helps efficiency when inserting 100 to 1000+ records at a time. The default is 8MB; it can be doubled repeatedly according to the data size.

concurrent_insert

Concurrent insert: when the table has no holes (deleted records), other processes can insert at the end of the table while one process holds a read lock.

The value can be 0 (disallow concurrent inserts), 1 (allow concurrent inserts when the table has no holes), or 2 (allow concurrent inserts whether or not the table has holes).

The default is 1; choose a value according to how often rows are deleted from the table.

delay_key_write

Deferred index writes, for the MyISAM storage engine. When records are updated, the data is flushed to disk immediately, but the index is not: the index is kept in memory and written to disk when the table is closed. A value of 0 disables this, 1 enables it; it is on by default.

delayed_insert_limit, delayed_insert_timeout, delayed_queue_size

Delayed insert: hand the rows to an in-memory queue first, then insert them gradually. Not all storage engines support this; InnoDB, the most common today, does not, while MyISAM does. The defaults are generally sufficient.
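As a sketch, the parameters above can be inspected and adjusted from a session (the sizes are illustrative; changing global variables requires the appropriate privilege):

```sql
-- Inspect the current values.
SHOW VARIABLES LIKE 'bulk_insert_buffer_size';
SHOW VARIABLES LIKE 'concurrent_insert';
SHOW VARIABLES LIKE 'delay_key_write';

-- Illustrative adjustments:
SET GLOBAL bulk_insert_buffer_size = 16 * 1024 * 1024;  -- double the 8MB default
SET GLOBAL concurrent_insert = 2;                        -- insert at table end even with holes
```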

That is all of "Example Analysis of Lock Mechanism in mysql Database". Thank you for reading! We hope the content was helpful; for more related knowledge, please follow the industry information channel!
