

Understanding Lock Manager Internal Locking in PostgreSQL

2025-04-03 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article focuses on understanding Lock Manager Internal Locking in PostgreSQL. The material follows the lock manager's own design notes, so it is simple and practical to work through. Let's take a look.

1. Lock Manager Internal Locking

This section covers the lock manager's internal locking mechanism. Before PostgreSQL 8.2, all of the shared-memory data structures used by the lock manager were protected by a single LWLock, the LockMgrLock; any operation involving these data structures had to hold LockMgrLock exclusively. Not too surprisingly, this became a contention bottleneck. To reduce contention, the lock manager's data structures have been split into multiple "partitions", each protected by an independent LWLock. Most operations only need to lock the single partition they are working in. Here are the details:

* Each possible lock is assigned to one partition according to a hash of its LOCKTAG value. The partition's LWLock is considered to protect all the LOCK objects of that partition as well as their subsidiary PROCLOCKs.

* The shared-memory hash tables for LOCKs and PROCLOCKs are organized so that different partitions use different hash chains, and thus there is no conflict in working with objects in different partitions.
This is supported directly by dynahash.c's "partitioned table" mechanism for the LOCK table: we need only ensure that the partition number is taken from the low-order bits of the dynahash hash value for the LOCKTAG. To make it work for PROCLOCKs, we have to ensure that a PROCLOCK's hash value has the same low-order bits as its associated LOCK's. This requires a specialized hash function (see proclock_hash).

* Formerly, each PGPROC had a single list of PROCLOCKs belonging to it. This has now been split into per-partition lists, so that access to a particular PROCLOCK list can be protected by the associated partition's LWLock. This rule allows one backend to manipulate another backend's PROCLOCK lists, which was not originally necessary but is now required in connection with fast-path locking.

* The other lock-related fields of a PGPROC are only interesting when the PGPROC is waiting for a lock, so we consider that they are protected by the partition LWLock of the awaited lock.
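The two hashing rules above (partition number taken from the low-order bits of the LOCKTAG hash, and a PROCLOCK hash that preserves those bits) can be illustrated with a toy Python model. The constants mirror PostgreSQL's defaults (16 lock partitions), but the code is only a sketch of the idea, not the lock.c implementation; `proc_id` stands in for the PGPROC pointer.

```python
LOG2_NUM_LOCK_PARTITIONS = 4               # PostgreSQL default: 16 partitions
NUM_LOCK_PARTITIONS = 1 << LOG2_NUM_LOCK_PARTITIONS

def lock_hash_partition(lock_hash: int) -> int:
    # Partition number comes from the low-order bits of the
    # dynahash hash value of the LOCKTAG.
    return lock_hash & (NUM_LOCK_PARTITIONS - 1)

def proclock_hash(lock_hash: int, proc_id: int) -> int:
    # Mix a per-process value in *above* the partition bits, so the
    # PROCLOCK's low-order bits (and hence its partition) stay
    # identical to the owning LOCK's hash.
    return (lock_hash ^ (proc_id << LOG2_NUM_LOCK_PARTITIONS)) & 0xFFFFFFFF

lock_hash = 0x12345678
print(lock_hash_partition(lock_hash))                     # 8
print(lock_hash_partition(proclock_hash(lock_hash, 42)))  # 8, same partition
```

Because the shift pushes the per-process value past the partition-selecting bits, a LOCK and all of its PROCLOCKs land on hash chains in the same partition, which is exactly what lets one partition LWLock protect both.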
For normal lock acquisition and release, it is sufficient to lock the partition containing the desired lock. Deadlock checking needs to touch multiple partitions in general; for simplicity, we just make it lock all the partitions in partition-number order. (To prevent LWLock deadlock, we establish the rule that any backend needing to lock more than one partition at once must lock them in partition-number order.) It is possible that deadlock checking could be done without touching every partition in typical cases, but since in a properly functioning system deadlock checking should not occur often enough to be performance-critical, trying to make this work does not seem a productive use of effort.

A backend's internal LOCALLOCK hash table is not partitioned. We do store a copy of the locktag hash code in LOCALLOCK table entries, from which the partition number can be computed, but this is a straight speed-for-space tradeoff: we could instead recalculate the partition number from the LOCKTAG when needed.
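The partition-number-order rule above is a classic lock-ordering discipline: if every backend that needs several partition LWLocks at once acquires them in ascending order, no cycle of waiters can form. A minimal sketch, using plain `threading.Lock` objects in place of LWLocks (the function names here are illustrative, not PostgreSQL's):

```python
import threading

NUM_LOCK_PARTITIONS = 16
partition_locks = [threading.Lock() for _ in range(NUM_LOCK_PARTITIONS)]

def lock_partitions(partitions):
    """Acquire several partition locks, always in ascending order."""
    acquired = []
    for p in sorted(set(partitions)):   # ordering rule: ascending, no repeats
        partition_locks[p].acquire()
        acquired.append(p)
    return acquired

def unlock_partitions(acquired):
    # Release order does not matter for deadlock avoidance;
    # reverse of acquisition is customary.
    for p in reversed(acquired):
        partition_locks[p].release()

order = lock_partitions([7, 2, 11, 2])  # e.g. a deadlock check touching 3 partitions
unlock_partitions(order)
print(order)  # [2, 7, 11]
```

A deadlock check that must see every partition simply calls the equivalent of `lock_partitions(range(NUM_LOCK_PARTITIONS))`, which trivially satisfies the ordering rule.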
At this point, I believe you have a deeper understanding of Lock Manager Internal Locking in PostgreSQL. You might as well explore it in practice. Follow us to continue learning!



