In-depth understanding of the locking mechanism of SQL Server 2008

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Compared to SQL Server 2005, which introduced features such as snapshot isolation and improved lock and deadlock monitoring, SQL Server 2008 makes no significant changes to the behavior and characteristics of locks. The major new feature introduced by SQL Server 2008 is the ability to control lock escalation behavior at the table level. The new LOCK_ESCALATION table option allows you to enable or disable table-level lock escalation. This feature reduces lock contention and improves concurrency, especially for partitioned tables.
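As a minimal sketch of the new option (the table name dbo.Orders is hypothetical; the option and its values are real in SQL Server 2008 and later):

```sql
-- AUTO enables partition-level escalation on partitioned tables;
-- TABLE (the default) escalates to a full table lock;
-- DISABLE suppresses table-level escalation in most cases.
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);

-- Check the current setting:
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'Orders';
```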

Another change in SQL Server 2008 is that the Locks configuration setting is no longer supported. Also no longer supported is the timestamp data type, which has been replaced by the rowversion data type.

Why do you need a lock? In any multi-user database, there must be a consistent set of rules for data modification. For a true transactional database, when two different processes try to modify the same data at the same time, the database management system (DBMS) is responsible for resolving potential conflicts between them.

Any relational database must support the ACID properties of transactions, namely Atomicity, Consistency, Isolation, and Durability. The ACID properties ensure that data changes in the database are applied correctly and that the data remains in a state consistent with the actions taken.

The main function of the lock is to provide the isolation required by the transaction. Isolation ensures that transactions do not interfere with each other, that is, one given transaction does not read or modify data that is being modified by another. In addition, the isolation provided by locks helps ensure consistency between transactions. Without locks, consistent transactions are impossible.

The transaction isolation level in SQL Server determines the extent to which data being accessed or modified in one transaction is protected from modification by other transactions. In theory, each transaction should be completely isolated from other transactions. In practice, however, this is nearly impossible for practical and performance reasons. Without locking and isolation in a concurrent environment, the following four situations can occur:

Lost updates: in this case, there is no isolation between transactions. Multiple transactions can read the same data and modify it. The last transaction to change the data set wins, while the changes made by all other transactions are lost.

Dirty reads: in this case, one transaction can read data that is being modified by another transaction. The data read by the first transaction is inconsistent, because the other transaction may roll back its changes.

Non-repeatable reads: this situation is similar to having no isolation. One transaction reads the same data twice, but another transaction modifies the data between the two reads; therefore, the two reads return different results. Because the read operation is not guaranteed to return the same results each time, this situation is called a "non-repeatable read".

Phantom reads: this situation is similar to a non-repeatable read. However, instead of previously read rows changing before the transaction completes, additional rows are added to the table, causing the second read to return a different set of rows.

SQL Server 2008 supports six isolation levels:

Read Uncommitted
Read Committed
Repeatable Read
Serializable
Snapshot
Read Committed Snapshot

(Please refer to my other blog, SQL Server 2008 R2 transaction and isolation level example, for details.)
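As a small sketch of setting an isolation level at the session level (the database name MyDb is hypothetical; Snapshot additionally requires the ALLOW_SNAPSHOT_ISOLATION database option):

```sql
-- One-time database setup required for the Snapshot level:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    SELECT * FROM dbo.Orders;  -- reads the row versions as of transaction start
COMMIT TRAN;
```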

Resolving lock conflicts between different user processes is the responsibility of the SQL Server Lock Manager. SQL Server automatically assigns locks to processes to ensure that the current user of a resource has a consistent view of it from the beginning to the end of a particular operation.

Lock Manager is responsible for determining the appropriate lock type (such as shared, exclusive, update) and lock granularity (such as row, page, table), depending on the type of operation being performed and the amount of data being affected.

Lock Manager also manages compatibility between lock types that try to access the same resource, resolves deadlocks, and upgrades locks to a higher level if necessary.

Lock Manager manages locks for both shared data and internal system resources. For shared data, Lock Manager manages row-level, page-level, and table-level locks on data pages, text pages, and leaf-level index pages. Internally, Lock Manager uses latches to manage locks on index rows and pages, to control access to internal data structures, and, in some cases, to retrieve individual rows of data. Latches provide better system performance because they are less resource-intensive than locks, and they also provide better concurrency. Latches are typically used for operations such as page splits, deletion of index rows, and movement of rows within an index. The main difference between a lock and a latch is that a lock is held for the lifetime of the transaction, while a latch is held only for the duration of the operation that requires it. Locks ensure the logical consistency of data, while latches ensure the physical consistency of data and data structures.

SQL Server lock types. Locking is handled automatically in SQL Server. Lock Manager selects the type of lock based on the type of operation (such as SELECT, INSERT, UPDATE, or DELETE). Lock Manager uses the following lock types:

Shared locks
Update locks
Exclusive locks
Intent locks
Schema locks
Bulk update locks

Besides selecting the lock type, Lock Manager also automatically adjusts lock granularity (such as row, page, table) based on the nature of the statement executed and the number of rows affected.

Shared locks. By default, SQL Server applies shared locks for all read operations. As the name implies, shared locks are not exclusive. In theory, an unlimited number of shared locks can be held on a resource at any time. In addition, by default, a process locks a resource only while it is actually being read, so at any moment only a single shared lock typically exists. For example, with SELECT * FROM authors, when the query starts, a lock is acquired on the first row of the authors table; when the first row has been read, the lock on it is released and a lock on the second row is acquired; after the second row is read, its lock is released and a lock on the third row is acquired; and so on. In this way, a SELECT query allows other processes to modify rows that are not currently being read during the read operation. This enhances the concurrency of data access.

Shared locks are compatible not only with other shared locks but also with update locks. Shared locks do not prevent other processes from acquiring additional shared or update locks on a given row or page. Multiple transactions or processes can hold shared locks on the same resource at any time, and these transactions will not affect the consistency of the data. However, shared locks do prevent exclusive locks from being acquired. When a shared lock is held on a row or page, any transaction that attempts to modify the data is blocked until all shared locks are released.

Update locks. The update lock is used to lock a row or page that a user process intends to modify. When a transaction attempts to modify a row, it must first read the row to ensure that it is modifying the appropriate record. If the transaction first acquired a shared lock on the resource, then to modify the record it would eventually need to acquire an exclusive lock, to prevent any other transaction from modifying the same record. The problem is that this can lead to deadlocks when multiple transactions try to modify the same resource at the same time, as shown in the figure.

Update locks in SQL Server are used to prevent such deadlock scenarios. Update locks are partially exclusive: only one update lock can be held on a resource at any time. However, update locks are compatible with shared locks, so both can be held on the same resource at the same time. In effect, an update lock signals that a process wants to modify a record and excludes other processes that also want to modify it. However, an update lock allows other processes to acquire shared locks to read the data, until the UPDATE or DELETE statement finishes locating the affected records. The process then attempts to upgrade each update lock to an exclusive lock, waiting for all shared locks currently held on the record to be released. When all shared locks are released, the update lock is upgraded to an exclusive lock. The data modification is then performed, and the exclusive lock is held for the remainder of the transaction.
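The same read-then-modify pattern can request the update lock explicitly with the UPDLOCK hint, avoiding the shared-to-exclusive upgrade deadlock described above. A minimal sketch, assuming a hypothetical authors table keyed by au_id:

```sql
BEGIN TRAN;

SELECT *
FROM dbo.authors WITH (UPDLOCK)   -- takes a U lock instead of an S lock
WHERE au_id = '172-32-1176';

UPDATE dbo.authors
SET au_lname = 'White'
WHERE au_id = '172-32-1176';      -- the U lock is upgraded to an X lock here

COMMIT TRAN;
```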

Exclusive locks. As mentioned earlier, when a transaction is ready to modify data, an exclusive lock is assigned to it. An exclusive lock on a resource ensures that no other transaction can interfere with the data locked by the transaction holding the exclusive lock. SQL Server releases the exclusive lock at the end of the transaction.

Exclusive locks are not compatible with any other lock type. If a resource holds an exclusive lock, any other process's request to read or modify the resource is forced to wait until the exclusive lock is released. Similarly, if another process currently holds a read lock (shared or update) on the resource, an exclusive lock request is also forced to queue until the resource becomes available.

Intent locks do not really constitute a locking method but act as a mechanism to indicate the types of locks held at a lower level of granularity. There are three types of intent locks (corresponding to the three lock types mentioned earlier): shared intent locks, exclusive intent locks, and update intent locks. For example, a table-level shared intent lock held by a process means that the process currently holds a shared lock at the row or page level within the table. The existence of intent locks prevents other transactions from acquiring table-level locks that are incompatible with existing row- or page-level locks.

Intent locks improve the performance of SQL Server locking: they allow lock compatibility to be checked at the table level to determine the types of locks held at the row or page level, rather than checking each of the many individual row- or page-level locks.

When monitoring lock activity, you will typically see three types of intent locks: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).

The IS lock indicates that the process currently holds or intends to hold a shared lock on a low-level resource (row or page).

The IX lock indicates that the process currently holds or intends to hold an exclusive lock on low-level resources.

SIX locks occur in a special case: when a transaction holds a shared (S) lock on a resource and later needs an intent exclusive (IX) lock, the S lock is converted to a SIX lock.

Schema locks. SQL Server uses schema locks to maintain the integrity of table structure. Unlike other lock types, which provide data isolation, schema locks provide schema isolation for database objects such as tables, views, and indexes within transactions. Lock Manager provides two types of schema locks:

Schema stability lock (Sch-S): SQL Server places a Sch-S lock on an object when an index or data page of that object is referenced by a transaction. This ensures that no other transaction can modify the object's schema, for example by dropping an index or altering a stored procedure or table, while other processes are still referencing the object.

Schema modification lock (Sch-M): when a process needs to modify the structure of an object (such as altering a table or recompiling a stored procedure), Lock Manager places a Sch-M lock on the object. For the duration of the lock, no other transaction can reference the object until the modification of the object's structure is completed and committed.

Bulk update locks (BU). The bulk update lock is a special lock type used only when bulk copying data into a table with the bcp utility or the BULK INSERT command. BU locks are used for bulk copy operations only if the TABLOCK hint is specified for the bcp or BULK INSERT command, or if the table lock on bulk load table option is set with sp_tableoption. Bulk update locks allow multiple bulk copy processes to load data into the same table concurrently while preventing other processes that are not bulk copying from accessing the table. If any other process holds a lock on the table, a BU lock cannot be acquired.
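A short sketch of both ways to request a BU lock (the table name and file path are hypothetical):

```sql
-- Request a BU lock for a single bulk load via the TABLOCK hint:
BULK INSERT dbo.Orders
FROM 'C:\data\orders.dat'
WITH (TABLOCK);

-- Alternatively, enable the table option so every bulk load takes a BU lock:
EXEC sp_tableoption 'dbo.Orders', 'table lock on bulk load', 'ON';
```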

SQL Server lock granularity is essentially the total amount of data locked as part of a query or update in order to provide complete isolation and serialization of transactions. Lock Manager needs to strike a balance between concurrent access to resources and the administrative overhead of maintaining a large number of low-level locks. The smaller the lock granularity, the greater the number of concurrent users who can access the same table at the same time, but the greater the administrative overhead of maintaining those locks. The larger the lock granularity, the lower the overhead required to manage locks, but concurrency is reduced. The following figure illustrates the trade-off between lock granularity and concurrency.

Currently, SQL Server balances performance and concurrency by locking at the row level or higher. Based on various factors, such as key distribution, number of rows, row density, and query predicates (SARGs), Query Optimizer makes lock granularity choices internally, so programmers don't have to worry about it. SQL Server also provides a number of T-SQL extensions that allow you to better control query behavior from a locking perspective.

SQL Server provides the following lock levels:

DATABASE: whenever a SQL Server process is using a database other than master, Lock Manager grants the process a database-level lock. Database-level locks are always shared locks and are used to track when the database is in use, in case another process tries to drop the database, take it offline, or restore it. Note that because the master and tempdb databases cannot be dropped or taken offline, there is no need to lock them.

FILE: file-level locks are used to lock database files.

EXTENT: extent locks are used to lock extents and are usually used only when space is allocated or deallocated. An extent consists of eight consecutive data or index pages. Extent locks can be shared or exclusive.

ALLOCATION_UNIT: used on database allocation units.

TABLE: this level locks the entire table, including data and indexes. Examples of when a table-level lock is acquired include selecting all rows of a large table at the Serializable isolation level, and executing an UPDATE or DELETE against a table without filtering conditions.

HOBT (Heap or B-Tree): used for a heap of data pages, or for the B-tree structure of an index.

PAGE: with page-level locks, an entire page consisting of 8KB of data or index information is locked. Page-level locks are acquired when all rows of a page need to be read, or when page-level maintenance must be performed, such as updating page pointers after a page split.

RID (Row ID): with RID locks, a single row within a page is locked. RID locks are acquired whenever it is efficient and possible, to provide maximum concurrent access to the resource.

KEY: SQL Server uses two types of key locks; which one is used depends on the isolation level of the current session. For transactions running at the Read Committed or Repeatable Read isolation level, SQL Server locks the actual index keys associated with the accessed rows. (In the case of a clustered index, the data rows are at the leaf level of the index, so what you see on those rows are Key locks rather than row-level locks.) At the Serializable isolation level, SQL Server prevents phantom reads by locking a range of key values so that new rows cannot be inserted into that range; these locks are therefore called key-range locks.

METADATA: used to lock system catalog information (metadata).

APPLICATION: allows users to define their own locks, specifying the resource name, lock mode, owner, and timeout interval.
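The lock levels above can be observed directly through the sys.dm_tran_locks dynamic management view (available since SQL Server 2005); a minimal sketch:

```sql
-- Each row is one lock request; resource_type shows the granularity
-- (DATABASE, OBJECT, PAGE, RID, KEY, ...), request_mode the lock type
-- (S, U, X, IS, IX, SIX, Sch-S, Sch-M, BU, ...).
SELECT resource_type,
       request_mode,
       request_status,
       request_session_id
FROM sys.dm_tran_locks
ORDER BY request_session_id, resource_type;
```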

Serialization and key-range locking. As mentioned earlier, SQL Server prevents phantom reads through key-range locks. The following describes how key-range locks work with various lock modes.

Key-range locking for a range search. For key-range locks involving range searches, SQL Server locks the index pages covering the data range specified in the WHERE clause of the query. (For a clustered index, the actual data rows in the table are locked.) Because the range is locked, other transactions are not allowed to insert new rows into that range. This is shown in the following figure.

Key-range locking when searching nonexistent rows. With this type of lock, if a transaction attempts to delete or read a row that does not exist in the database, the query should still find no rows at a later stage of the same transaction. This is shown in the following figure.

The comparison between row-level locks and page-level locks has been debated for years, and in some circles the debate over whether row-level locks are superior to page-level locks continues. Many people insist that row-level locks are unnecessary if databases and applications are well designed and optimized. This view was born when row-level locks did not even exist. (Before SQL Server 7.0, the smallest unit of data that could be locked was a page.) At that time, however, the page size in SQL Server was only 2KB. As the page size expanded to 8KB, a single page could contain more rows (four times as many as before). Locking 8KB pages can lead to more page-level contention, because different processes are more likely to request rows on the same page. Using row-level locks increases the concurrency of data access.

Row-level locks, on the other hand, consume more resources (memory and CPU) than page-level locks, because there are more rows in the table than pages. If the process needs to access all the rows on the page, locking the entire page is more efficient than acquiring a lock per row. This reduces the number of in-memory lock structures that Lock Manager needs to manage.

Which is better: higher concurrency or lower administrative overhead? As mentioned earlier, there needs to be a balance between the two. As lock granularity becomes smaller, concurrency improves but performance degrades due to the additional overhead. As lock granularity becomes larger, performance improves due to reduced administrative overhead, but concurrency is reduced. Whether row-level or page-level locks are more appropriate depends on the application, the database design, and the data volume, and must be analyzed case by case.

SQL Server automatically decides at run time whether to lock rows, pages, or an entire table, based on the nature of the query, the size of the table, and the number of rows expected to be affected. In general, SQL Server more often tries to apply row-level locks rather than page-level locks to provide optimal concurrency. Today, with faster CPU and more memory support, row-level locks are no longer as expensive as they used to be. However, when the query process and the number of resources actually locked exceeds a certain threshold, SQL Server may try to upgrade from a low-level lock to an appropriately higher level.

Lock contention and deadlocks. The most likely causes of SQL Server application performance problems are poorly written queries, poor database and index design, and lock contention. The first two problems degrade application performance regardless of the number of users, while performance problems caused by lock contention appear as the number of users increases, and tend to worsen as transactions become more complex or run longer.

Lock contention occurs when the lock type requested by a transaction is not compatible with an existing lock on the resource. By default, a process waits indefinitely for the lock resource to become available. A significant lack of response from SQL Server in a client application is often a sign of lock contention.

The following figure shows an example of lock competition.

Setting the lock timeout interval. If you don't want a process to wait indefinitely for a lock to become available, SQL Server allows you to set a lock timeout interval using the SET LOCK_TIMEOUT command. You specify the timeout interval in milliseconds. For example, if you want the process to wait only five seconds for a lock to become available, execute the following command:

SET LOCK_TIMEOUT 5000

If the request for a lock resource times out, the statement aborts and you will get the following error message:

Server: Msg 1222, Level 16, State 52, Line 1

Lock request time out period exceeded.

To view the current LOCK_TIMEOUT setting, you can use the system function @@LOCK_TIMEOUT:

SELECT @@LOCK_TIMEOUT

If you want the process to abort immediately when the lock cannot be obtained, set:

SET LOCK_TIMEOUT 0

If you want to restore the default of waiting indefinitely, set:

SET LOCK_TIMEOUT -1

Minimizing lock contention. To maximize concurrency and application performance, you should minimize lock contention between processes as much as possible. Here are some general guidelines:

Keep transactions as short and concise as possible. The shorter the time a transaction holds its locks, the less likely lock contention becomes. Remove from the transaction any commands that do not need to be part of the locked unit of work.

Process the statements that make up a transaction as a single batch, to eliminate unnecessary delays caused by network round trips between the BEGIN TRAN and COMMIT TRAN statements.

Consider writing transaction code entirely using stored procedures. Typically, stored procedures run faster than batch commands.

Commit updates in cursors as early as possible. Because cursor processing is much slower than set-oriented processing, locks are held longer.

Use the lowest level of lock isolation required by each process. For example, if dirty reads are acceptable and exact results are not required, consider using isolation level 0 (Read Uncommitted); use the Repeatable Read or Serializable isolation levels only when absolutely necessary.

Never allow user interaction between BEGIN TRAN and COMMIT TRAN statements, as doing so may cause locks to be held indefinitely.

Minimize "hot spots" in tables. Hot spots appear when most of the update activity in a table occurs within a small number of pages.
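The first two guidelines can be sketched as follows (table and column names are hypothetical): the transaction contains only the minimum unit of work and is sent as one batch.

```sql
-- Both statements travel to the server in a single batch, so no network
-- round trip occurs while the locks are held.
BEGIN TRAN;
    UPDATE dbo.Accounts SET balance = balance - 100 WHERE account_id = 1;
    UPDATE dbo.Accounts SET balance = balance + 100 WHERE account_id = 2;
COMMIT TRAN;
-- Do NOT read unrelated data, compute, or wait for user input inside
-- the transaction.
```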

A deadlock occurs when two processes are each waiting for a resource that the other currently has locked. Neither process can proceed, nor release the locks it currently holds, until it acquires the lock on the requested resource.

Two types of deadlocks can occur in SQL Server:

Circular deadlock: a circular deadlock occurs when two processes each hold a lock on a different resource, and each needs a lock on the resource held by the other. As shown in the following figure.

Conversion deadlock: two or more processes each hold a shared lock on the same resource within a transaction, and all want to upgrade it to an exclusive lock, but none can do so until the others release their shared locks. As shown in the figure. People often assume that deadlocks occur at the data page or data row level. In fact, deadlocks often occur at the index page or index key level. The following figure shows a deadlock scenario caused by contention at the index key level.

SQL Server automatically detects when a deadlock occurs. A separate process in SQL Server called LOCK_MONITOR checks the system for deadlocks approximately every 5 seconds.

Avoiding deadlocks. Following the guidelines for minimizing lock contention given above helps eliminate deadlocks as well. In addition, follow these guidelines when designing an application:

Access the data of multiple tables in a consistent order to avoid circular deadlocks.

Minimize the use of HOLDLOCK, and minimize queries running in Repeatable Read or Serializable isolation mode. This helps avoid conversion deadlocks.

Choose the transaction isolation level wisely and cautiously. Choosing a lower isolation level may reduce deadlocks.
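The first guideline, consistent access order, can be sketched as follows (table and column names are hypothetical). Every process touches the tables in the same order, so no circular wait can form:

```sql
-- Process 1 and Process 2 both update Accounts first, then Orders:
DECLARE @id int = 1;

BEGIN TRAN;
    UPDATE dbo.Accounts SET balance = balance - 10 WHERE account_id = @id;
    UPDATE dbo.Orders   SET status  = 'paid'       WHERE account_id = @id;
COMMIT TRAN;
-- If one process instead updated Orders first and Accounts second, two
-- concurrent executions could each hold one lock while waiting for the
-- other's: a circular deadlock.
```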

Table hints for locking. As mentioned earlier, you can use the SET TRANSACTION ISOLATION LEVEL command to set the isolation level for a connection. This command sets a global isolation level for the entire session, which is useful if you want to provide a consistent isolation level for your application. However, sometimes you want a different isolation level for a specific query, or for different tables within a single query. SQL Server allows you to use table hints in SELECT, MERGE, UPDATE, INSERT, and DELETE statements to do this. In this way, you can override the session-level isolation level for individual tables or queries.

Table hints for changing table-level lock isolation, granularity, or lock type are provided through the WITH clause of SELECT, UPDATE, INSERT, and DELETE statements.

Note: although many table hints can be combined, you cannot combine more than one isolation level or lock granularity on one table at a time. In addition, NOLOCK, READUNCOMMITTED, and READPAST hints cannot be used on the target table of INSERT, UPDATE, MERGE, or DELETE statements.

Transaction isolation-level hints. SQL Server provides a number of hints for changing the default transaction isolation level in a query:

HOLDLOCK: shared locks are held for the duration of the statement, or for the entire transaction if the statement is within one. This option is equivalent to the Serializable isolation level.

NOLOCK: specifies that no shared locks are taken on the resource. It is similar to running the query at isolation level 0 (Read Uncommitted). The NOLOCK option is useful in reporting environments where exact precision of the results is not required.

READUNCOMMITTED: exactly the same as specifying the Read Uncommitted isolation level, and the same as the NOLOCK hint.

READCOMMITTED: the same as specifying the Read Committed isolation level.

READCOMMITTEDLOCK: shared locks are acquired as data is read and released when the read completes, regardless of whether the READ_COMMITTED_SNAPSHOT database option is set.

REPEATABLEREAD: the same as specifying the Repeatable Read isolation level; similar to the HOLDLOCK hint.

SERIALIZABLE: the same as specifying the Serializable isolation level; similar to the HOLDLOCK hint.

READPAST: causes the query to skip rows or pages locked by other transactions and return only the data it can read without blocking. It can be used only in transactions running at the Read Committed or Repeatable Read isolation level.
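A small sketch of mixing isolation behavior per table within one query (table names are hypothetical): the lookup table is read with NOLOCK while the main table uses the session's isolation level.

```sql
SELECT o.order_id, c.name
FROM dbo.Orders AS o
JOIN dbo.Customers AS c WITH (NOLOCK)   -- dirty reads allowed on Customers only
    ON c.customer_id = o.customer_id
WHERE o.status = 'open';
```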

Lock granularity hints are used to change the lock granularity:

ROWLOCK: forces Lock Manager to take row-level locks instead of page- or table-level locks.

PAGLOCK: forces Lock Manager to take page-level locks instead of row- or table-level locks.

TABLOCK: forces Lock Manager to take a table-level lock instead of row- or page-level locks.

TABLOCKX: forces Lock Manager to take an exclusive table-level lock instead of row- or page-level locks.

Lock type hints are used to change the type of lock used by SQL Server:

UPDLOCK: similar to HOLDLOCK, except that HOLDLOCK applies shared locks to the resource, while UPDLOCK applies update locks for the duration of the transaction.

XLOCK: applies an exclusive lock to the resource for the duration of the transaction, preventing other transactions from acquiring locks on it.

Optimistic locking. In many applications, the client reads data for browsing, then modifies some of the rows and submits the changes back to the SQL Server database. The interval between reading the data and committing the changes can be long (for example, if the user reads the data and then goes to lunch).

In such applications, you don't want to lock the data with modes such as SERIALIZABLE or HOLDLOCK so that no one can change it between the time the user reads the data and the time the update is submitted. That would violate the principle for minimizing lock contention and deadlocks: never allow user interaction within a transaction. In a multi-user OLTP environment, holding shared locks indefinitely has a significant impact on concurrency and overall application performance because of blocking and lock contention.

On the other hand, if the rows being read are not locked, another process may update one of them in the meantime; when the first process commits its update, it overwrites the changes made by the other process, resulting in a lost update.

So how do you implement such an application? How do you get users to read data without locking it and still ensure that Lost Update does not occur?

Optimistic locking is a technique used when there is a long interval between reading data and committing changes. Optimistic locks prevent one client from overwriting another client's changes to the data and do not need to hold a lock in the database.

There are two ways to achieve optimistic locking, one is to use the rowversion data type, and the other is to take advantage of the optimistic concurrency feature of snapshot isolation.

Implementing optimistic locking with the rowversion data type. SQL Server 2008 provides a special data type, rowversion, that can be used to implement optimistic locking in applications. The rowversion column acts as a version number in the optimistic locking scheme. Whenever a row containing a rowversion column is inserted or updated, SQL Server automatically generates a new value for that column. rowversion is an 8-byte binary data type whose value has no meaning beyond guaranteeing uniqueness and monotonic growth; you cannot inspect its bytes to derive any information.

The client reads data from the table, ensuring that the returned result set contains the primary key and rowversion columns as well as the other desired data columns. Because the query does not run in a transaction, the locks acquired by the SELECT are released once the data is read. When the user wants to update a row some time later, the application must ensure that the data has not been modified by another client in the meantime. The UPDATE statement must therefore contain a WHERE clause that compares the previously retrieved rowversion value with the current value of the column in the database. If the two values match, the row has not been modified in the interim, and the change can be committed safely. If they do not match, the row has been modified; to avoid a lost update, the change should not be committed.

The following is sample code for a complete implementation.
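A minimal sketch of the pattern described above, assuming a hypothetical dbo.Products table (this is not the book's original sample):

```sql
CREATE TABLE dbo.Products (
    product_id int PRIMARY KEY,
    name       varchar(50),
    price      money,
    rv         rowversion          -- version number, auto-generated by SQL Server
);

-- 1. Client reads the row, including the rowversion value:
DECLARE @rv binary(8), @price money;
SELECT @rv = rv, @price = price
FROM dbo.Products
WHERE product_id = 1;

-- 2. Later, the client submits the change, checking the version:
UPDATE dbo.Products
SET price = @price * 1.10
WHERE product_id = 1
  AND rv = @rv;                    -- succeeds only if the row is unchanged

IF @@ROWCOUNT = 0
    RAISERROR('Row was modified by another user; update not applied.', 16, 1);
```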

Optimistic locking using Snapshot isolation mode. The Snapshot isolation level in SQL Server 2008 provides another mechanism for implementing optimistic locking, through automatic row versioning. When Snapshot isolation mode is enabled, a process reading data in a transaction acquires and holds no locks on the current version of the data rows; it reads the version of the data that existed when the transaction started. Because the data rows are not locked, no blocking occurs, and other processes can modify the data after it has been read. If another process modifies a data row, a new version of the row is generated. If the first process then tries to update that row, SQL Server automatically prevents the lost update by checking the row version. Because the row versions differ, SQL Server prevents the first process from modifying the row; if it tries, an error message similar to the following appears:
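The error text is truncated in the source. As a sketch of the scenario (hypothetical dbo.Products table), the conflict arises like this; the update-conflict error raised under Snapshot isolation is Msg 3960:

```sql
-- Session 1:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT price FROM dbo.Products WHERE product_id = 1;  -- versioned read, no locks

-- Session 2 commits in the meantime:
--   UPDATE dbo.Products SET price = 20 WHERE product_id = 1;

-- Session 1 then tries to update the same row:
UPDATE dbo.Products SET price = 25 WHERE product_id = 1;
-- SQL Server detects the conflicting committed change and aborts the
-- transaction with update-conflict error Msg 3960.
```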

Reference

Microsoft SQL Server 2008 R2 Unleashed
