What is the WAL-related processing logic when inserting data in PostgreSQL


The listing picks up in the tail of LWLockAcquire: once the backend has been awakened and finally obtains the lock, it records the lock in the held_lwlocks array and compensates the process semaphore for any wakeups it absorbed while waiting.

#ifdef LOCK_DEBUG
        {
            /* not waiting anymore */
            uint32      nwaiters PG_USED_FOR_ASSERTS_ONLY =
                pg_atomic_fetch_sub_u32(&lock->nwaiters, 1);

            Assert(nwaiters < MAX_BACKENDS);
        }
#endif

        TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), mode);
        LWLockReportWaitEnd();

        LOG_LWDEBUG("LWLockAcquire", lock, "awakened");

        /* Now loop back and try to acquire lock again. */
        result = false;
    }

    TRACE_POSTGRESQL_LWLOCK_ACQUIRE(T_NAME(lock), mode);

    /* Add lock to list of locks held by this backend */
    held_lwlocks[num_held_lwlocks].lock = lock;
    held_lwlocks[num_held_lwlocks++].mode = mode;

    /*
     * Fix the process wait semaphore's count for any absorbed wakeups.
     */
    while (extraWaits-- > 0)
        PGSemaphoreUnlock(proc->sem);

    return result;
}
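Before moving to the lower-level helpers, it may help to see the API from the caller's side. The following is a minimal sketch only: MySharedState, my_shared and bump_counter are invented for illustration, and a real extension would also arrange tranche registration and shared-memory allocation. It shows the canonical acquire/use/release pattern that the bookkeeping above supports.

/* Minimal sketch: MySharedState, my_shared and bump_counter are hypothetical. */
typedef struct MySharedState
{
    LWLock      lock;           /* protects counter */
    uint64      counter;
} MySharedState;

static MySharedState *my_shared;    /* assumed to point into shared memory */

static uint64
bump_counter(void)
{
    uint64      val;

    /* Blocks until the lock is free; on success the lock is pushed onto
     * this backend's held_lwlocks[] array, as seen above. */
    LWLockAcquire(&my_shared->lock, LW_EXCLUSIVE);

    val = ++my_shared->counter;

    /* Pops the entry from held_lwlocks[] and wakes waiters if necessary. */
    LWLockRelease(&my_shared->lock);

    return val;
}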

/*
 * Internal function that tries to atomically acquire the lwlock in the passed
 * in mode.
 *
 * This function will not block waiting for a lock to become free - that's the
 * caller's job.
 *
 * Returns true if the lock isn't free and we need to wait.
 */
static bool
LWLockAttemptLock(LWLock *lock, LWLockMode mode)
{
    uint32      old_state;

    AssertArg(mode == LW_EXCLUSIVE || mode == LW_SHARED);

    /*
     * Read once outside the loop, later iterations will get the newer value
     * via compare & exchange.
     */
    old_state = pg_atomic_read_u32(&lock->state);

    /* loop until we've determined whether we could acquire the lock or not */
    while (true)
    {
        uint32      desired_state;
        bool        lock_free;

        desired_state = old_state;

        if (mode == LW_EXCLUSIVE)       /* exclusive */
        {
            lock_free = (old_state & LW_LOCK_MASK) == 0;
            if (lock_free)
                desired_state += LW_VAL_EXCLUSIVE;
        }
        else                            /* shared */
        {
            lock_free = (old_state & LW_VAL_EXCLUSIVE) == 0;
            if (lock_free)
                desired_state += LW_VAL_SHARED;
        }

        /*
         * Attempt to swap in the state we are expecting. If we didn't see
         * the lock to be free, that's just the old value. If we saw it as
         * free, we'll attempt to mark it acquired. The reason that we always
         * swap in the value is that this doubles as a memory barrier. We
         * could try to be smarter and only swap in values if we saw the lock
         * as free, but benchmarks haven't shown it as beneficial so far.
         *
         * Retry if the value changed since we last looked at it.
         */
        if (pg_atomic_compare_exchange_u32(&lock->state,
                                           &old_state, desired_state))
        {
            if (lock_free)
            {
                /* Great! Got the lock. */
#ifdef LOCK_DEBUG
                if (mode == LW_EXCLUSIVE)
                    lock->owner = MyProc;
#endif
                return false;
            }
            else
                return true;    /* somebody else has the lock */
        }
    }
    pg_unreachable();           /* should never get here */
}
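To see the technique in isolation, here is a stripped-down sketch of the same idea written with C11 atomics instead of PostgreSQL's pg_atomic wrappers. The struct, constants and function name are invented for illustration and the bit layout is much simpler than the real LW_* flags; only the compare-and-swap loop mirrors LWLockAttemptLock.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define VAL_EXCLUSIVE  (1u << 24)        /* illustrative layout only */
#define VAL_SHARED     1u
#define LOCK_MASK      ((1u << 25) - 1)  /* reader count + writer bit */

typedef struct { _Atomic uint32_t state; } rwlock32;

/* Returns true if the caller must wait (mirrors LWLockAttemptLock's contract). */
static bool
attempt_lock(rwlock32 *lock, bool exclusive)
{
    uint32_t    old_state = atomic_load(&lock->state);

    while (true)
    {
        uint32_t    desired = old_state;
        bool        lock_free;

        if (exclusive)
        {
            lock_free = (old_state & LOCK_MASK) == 0;       /* no readers, no writer */
            if (lock_free)
                desired += VAL_EXCLUSIVE;
        }
        else
        {
            lock_free = (old_state & VAL_EXCLUSIVE) == 0;   /* no writer */
            if (lock_free)
                desired += VAL_SHARED;                      /* one more reader */
        }

        /* The CAS is attempted even when the lock looked busy, so it also
         * acts as a memory barrier, just as the original comment explains. */
        if (atomic_compare_exchange_weak(&lock->state, &old_state, desired))
            return !lock_free;      /* false = acquired, true = must wait */

        /* CAS failed: old_state now holds the fresh value; retry. */
    }
}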
/*
 * Acquire a WAL insertion lock, for inserting to WAL.
 */
static void
WALInsertLockAcquire(void)
{
    bool        immed;

    /*
     * It doesn't matter which of the WAL insertion locks we acquire, so try
     * the one we used last time.  If the system isn't particularly busy,
     * it's a good bet that it's still available, and it's good to have some
     * affinity to a particular lock so that you don't unnecessarily bounce
     * cache lines between processes when there's no contention.
     *
     * If this is the first time through in this backend, pick a lock
     * (semi-)randomly.  This allows the locks to be used evenly if you have
     * a lot of very short connections.
     */
    static int  lockToTry = -1;

    if (lockToTry == -1)
        lockToTry = MyProc->pgprocno % NUM_XLOGINSERT_LOCKS;
    MyLockNo = lockToTry;

    /*
     * The insertingAt value is initially set to 0, as we don't know our
     * insert location yet.
     */
    immed = LWLockAcquire(&WALInsertLocks[MyLockNo].l.lock, LW_EXCLUSIVE);
    if (!immed)
    {
        /*
         * If we couldn't get the lock immediately, try another lock next
         * time.  On a system with more insertion locks than concurrent
         * inserters, this causes all the inserters to eventually migrate to
         * a lock that no-one else is using.  On a system with more inserters
         * than locks, it still helps to distribute the inserters evenly
         * across the locks.
         */
        lockToTry = (lockToTry + 1) % NUM_XLOGINSERT_LOCKS;
    }
}
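The slot-picking strategy can be modelled on its own. The sketch below is illustrative only: NUM_SLOTS stands in for NUM_XLOGINSERT_LOCKS (8 by default), acquire_slot() stands in for the LWLockAcquire call, and my_procno plays the role of MyProc->pgprocno.

#include <stdbool.h>

#define NUM_SLOTS 8

/* Stand-in for LWLockAcquire(): blocks if needed, returns true if the lock
 * was obtained without waiting. */
extern bool acquire_slot(int slot);

static int
pick_insert_slot(int my_procno)
{
    static int  slot_to_try = -1;   /* per-backend "affinity" slot */
    int         my_slot;

    if (slot_to_try == -1)
        slot_to_try = my_procno % NUM_SLOTS;    /* semi-random first pick */
    my_slot = slot_to_try;

    if (!acquire_slot(my_slot))
    {
        /* Contended: remember a neighbouring slot for the *next* call, so
         * inserters spread out, or migrate to an idle slot, over time. */
        slot_to_try = (slot_to_try + 1) % NUM_SLOTS;
    }

    return my_slot;                 /* the slot actually held right now */
}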

/*
 * Release our insertion lock (or locks, if we're holding them all).
 *
 * NB: Reset all variables to 0, so they cause LWLockWaitForVar to block the
 * next time the lock is acquired.
 */
static void
WALInsertLockRelease(void)
{
    if (holdingAllLocks)            /* holding every insertion lock */
    {
        int         i;

        for (i = 0; i < NUM_XLOGINSERT_LOCKS; i++)
            LWLockReleaseClearVar(&WALInsertLocks[i].l.lock,
                                  &WALInsertLocks[i].l.insertingAt,
                                  0);

        holdingAllLocks = false;
    }
    else
    {
        LWLockReleaseClearVar(&WALInsertLocks[MyLockNo].l.lock,
                              &WALInsertLocks[MyLockNo].l.insertingAt,
                              0);
    }
}

/*
 * LWLockReleaseClearVar - release a previously acquired lock, reset variable
 */
void
LWLockReleaseClearVar(LWLock *lock, uint64 *valptr, uint64 val)
{
    LWLockWaitListLock(lock);

    /*
     * Set the variable's value before releasing the lock. That prevents a
     * race condition wherein a new locker acquires the lock but the
     * variable's value has not yet been set.
     */
    *valptr = val;
    LWLockWaitListUnlock(lock);

    LWLockRelease(lock);
}
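Resetting insertingAt to 0 matters because waiters watch that variable through LWLockWaitForVar. The sketch below is a simplified illustration, in the spirit of WaitXLogInsertionsToFinish rather than the verbatim source, of how a flusher can wait for an in-progress insertion on one slot; upto and slot are assumed parameters.

/* Sketch only, not verbatim PostgreSQL source. */
static XLogRecPtr
wait_for_insertion(int slot, XLogRecPtr upto)
{
    XLogRecPtr  insertingat = InvalidXLogRecPtr;    /* 0: position unknown */

    do
    {
        /* Returns true as soon as the lock is free; otherwise returns false
         * and stores the position the inserter advertised with
         * LWLockUpdateVar into insertingat. */
        if (LWLockWaitForVar(&WALInsertLocks[slot].l.lock,
                             &WALInsertLocks[slot].l.insertingAt,
                             insertingat, &insertingat))
            return InvalidXLogRecPtr;   /* no insertion in progress */
    } while (insertingat < upto);       /* inserter is still behind upto */

    return insertingat;
}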

/*
 * Lock the LWLock's wait list against concurrent activity.
 *
 * NB: even though the wait list is locked, non-conflicting lock operations
 * may still happen concurrently.
 *
 * Time spent holding the mutex should be short!
 */
static void
LWLockWaitListLock(LWLock *lock)
{
    uint32      old_state;
#ifdef LWLOCK_STATS
    lwlock_stats *lwstats;
    uint32      delays = 0;

    lwstats = get_lwlock_stats_entry(lock);
#endif

    while (true)
    {
        /* always try once to acquire lock directly */
        old_state = pg_atomic_fetch_or_u32(&lock->state, LW_FLAG_LOCKED);
        if (!(old_state & LW_FLAG_LOCKED))
            break;              /* got lock */

        /* and then spin without atomic operations until lock is released */
        {
            SpinDelayStatus delayStatus;

            init_local_spin_delay(&delayStatus);

            while (old_state & LW_FLAG_LOCKED)
            {
                perform_spin_delay(&delayStatus);
                old_state = pg_atomic_read_u32(&lock->state);
            }
#ifdef LWLOCK_STATS
            delays += delayStatus.delays;
#endif
            finish_spin_delay(&delayStatus);
        }

        /*
         * Retry. The lock might obviously already be re-acquired by the time
         * we're attempting to get it again.
         */
    }

#ifdef LWLOCK_STATS
    lwstats->spin_delay_count += delays;    /* delay count */
#endif
}

/*
 * Unlock the LWLock's wait list.
 *
 * Note that it can be more efficient to manipulate flags and release the
 * locks in a single atomic operation.
 */
static void
LWLockWaitListUnlock(LWLock *lock)
{
    uint32      old_state PG_USED_FOR_ASSERTS_ONLY;

    old_state = pg_atomic_fetch_and_u32(&lock->state, ~LW_FLAG_LOCKED);

    Assert(old_state & LW_FLAG_LOCKED);
}
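The pattern here, one atomic fetch-or to grab the flag, then plain reads while spinning so the cache line is not bounced by repeated atomic writes, can be shown in isolation. A minimal sketch with C11 atomics follows; the flag constant is illustrative (not PostgreSQL's value) and the adaptive back-off of perform_spin_delay is omitted.

#include <stdatomic.h>
#include <stdint.h>

#define FLAG_LOCKED (1u << 30)          /* illustrative bit position */

static void
waitlist_lock(_Atomic uint32_t *state)
{
    while (1)
    {
        /* One atomic attempt: set the bit and inspect the previous value. */
        uint32_t    old_state = atomic_fetch_or(state, FLAG_LOCKED);

        if (!(old_state & FLAG_LOCKED))
            return;                     /* bit was clear before: we own it */

        /* Somebody else holds it: spin on plain loads, which is cheaper
         * than hammering the cache line with read-modify-write operations. */
        while (old_state & FLAG_LOCKED)
            old_state = atomic_load_explicit(state, memory_order_relaxed);

        /* Saw it clear; loop back and race for it with fetch_or again. */
    }
}

static void
waitlist_unlock(_Atomic uint32_t *state)
{
    atomic_fetch_and(state, ~FLAG_LOCKED);  /* clear the bit */
}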

/*
 * LWLockRelease - release a previously acquired lock
 */
void
LWLockRelease(LWLock *lock)
{
    LWLockMode  mode;
    uint32      oldstate;
    bool        check_waiters;
    int         i;

    /*
     * Remove lock from list of locks held.  Usually, but not always, it will
     * be the latest-acquired lock; so search array backwards.
     */
    for (i = num_held_lwlocks; --i >= 0;)
        if (lock == held_lwlocks[i].lock)
            break;

    if (i < 0)
        elog(ERROR, "lock %s is not held", T_NAME(lock));

    mode = held_lwlocks[i].mode;

    num_held_lwlocks--;
    for (; i < num_held_lwlocks; i++)
        held_lwlocks[i] = held_lwlocks[i + 1];

    PRINT_LWDEBUG("LWLockRelease", lock, mode);

    /*
     * Release my hold on lock, after that it can immediately be acquired by
     * others, even if we still have to wake up other waiters.
     */
    if (mode == LW_EXCLUSIVE)
        oldstate = pg_atomic_sub_fetch_u32(&lock->state, LW_VAL_EXCLUSIVE);
    else
        oldstate = pg_atomic_sub_fetch_u32(&lock->state, LW_VAL_SHARED);

    /* nobody else can have that kind of lock */
    Assert(!(oldstate & LW_VAL_EXCLUSIVE));

    /*
     * We're still waiting for backends to get scheduled, don't wake them up
     * again.
     */
    if ((oldstate & (LW_FLAG_HAS_WAITERS | LW_FLAG_RELEASE_OK)) ==
        (LW_FLAG_HAS_WAITERS | LW_FLAG_RELEASE_OK) &&
        (oldstate & LW_LOCK_MASK) == 0)
        check_waiters = true;
    else
        check_waiters = false;

    /*
     * As waking up waiters requires the spinlock to be acquired, only do so
     * if necessary.
     */
    if (check_waiters)
    {
        /* XXX: remove before commit? */
        LOG_LWDEBUG("LWLockRelease", lock, "releasing waiters");
        LWLockWakeup(lock);
    }

    TRACE_POSTGRESQL_LWLOCK_RELEASE(T_NAME(lock));

    /*
     * Now okay to allow cancel/die interrupts.
     */
    RESUME_INTERRUPTS();
}
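The wake-up decision above is a pure function of the state word read back by the atomic subtraction. Isolated for clarity in the sketch below; the flag values are illustrative stand-ins for LW_FLAG_HAS_WAITERS, LW_FLAG_RELEASE_OK and LW_LOCK_MASK.

#include <stdbool.h>
#include <stdint.h>

#define FLAG_HAS_WAITERS (1u << 30)     /* illustrative values */
#define FLAG_RELEASE_OK  (1u << 29)
#define LOCK_MASK        ((1u << 25) - 1)

/* Wake waiters only if (a) somebody is queued, (b) a previous release is not
 * already in the middle of waking them (RELEASE_OK still set), and (c) the
 * lock is now completely free. */
static bool
should_wake_waiters(uint32_t oldstate)
{
    return (oldstate & (FLAG_HAS_WAITERS | FLAG_RELEASE_OK)) ==
               (FLAG_HAS_WAITERS | FLAG_RELEASE_OK) &&
           (oldstate & LOCK_MASK) == 0;
}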

At this point, you should have a better understanding of the WAL-related processing logic when inserting data into PostgreSQL. The best way to consolidate it is to trace these functions yourself in the PostgreSQL source and experiment in practice.
