On 07.02.2025 01:26, Alexander Korotkov wrote:
> Hi!
>
> On Sun, Jan 19, 2025 at 2:11 AM Yura Sokolov <y.soko...@postgrespro.ru> wrote:
>>
>> During discussion of Increasing NUM_XLOGINSERT_LOCKS [1], Andres Freund
>> used a benchmark which creates WAL records very intensively. While I think
>> it is not completely fair (1MB log records are really rare), it pushed
>> me to analyze the write-side waiting of the XLog machinery.
>>
>> First I tried to optimize WaitXLogInsertionsToFinish, but without great
>> success (yet).
>>
>> While profiling, I found that a lot of time is spent in memory clearing
>> under the global WALBufMappingLock:
>>
>> MemSet((char *) NewPage, 0, XLOG_BLCKSZ);
>>
>> It is an obvious scalability bottleneck.
>>
>> So "challenge was accepted".
>>
>> Certainly, backends should initialize pages without the exclusive lock.
>> But how can we ensure pages were initialized? In other words, how can we
>> ensure XLogCtl->InitializedUpTo is correct?
>>
>> I've tried to play around with WALBufMappingLock, holding it for a short
>> time and spinning on XLogCtl->xlblocks[nextidx]. But in the end I found
>> WALBufMappingLock is not needed at all.
>>
>> Instead of holding a lock, it is better to allow backends to cooperate:
>> - I bound a ConditionVariable to each xlblocks entry,
>> - every backend now checks that every required block pointed to by
>> InitializedUpTo was successfully initialized, or sleeps on that block's
>> condvar,
>> - when a backend is sure a block is initialized, it tries to update
>> InitializedUpTo and signals the condition variable (sketched below).
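>>
>> Roughly, the waiting side looked like this (just a sketch; the names
>> XLogWaitBufferInit and xlblocksCV here are illustrative, not the actual
>> patch):
>>
>>     static void
>>     XLogWaitBufferInit(XLogRecPtr endptr)
>>     {
>>         int     idx = XLogRecPtrToBufIdx(endptr - 1);
>>
>>         /* Sleep until the buffer's recorded bound shows it's initialized. */
>>         while (pg_atomic_read_u64(&XLogCtl->xlblocks[idx]) < endptr)
>>             ConditionVariableSleep(&XLogCtl->xlblocksCV[idx],
>>                                    WAIT_EVENT_WAL_BUFFER_INIT);
>>         ConditionVariableCancelSleep();
>>     }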
>
> Looks reasonable to me, but having a ConditionVariable per xlog buffer
> seems like overkill. Find attached a revision where I've implemented
> advancing InitializedUpTo without a per-buffer ConditionVariable.
> After initialization of each buffer there is an attempt to do a CAS for
> InitializedUpTo in a loop. So, multiple processes will try to advance
> InitializedUpTo; they could hijack the initiative from each other, but
> there is always a leader which will finish the work.
>
> There is only one ConditionVariable, for waiting on InitializedUpTo to be
> advanced, roughly as sketched below.
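>
> The core of the advancing loop is roughly this (declarations omitted; see
> the attached patch for the real code):
>
>     while (pg_atomic_compare_exchange_u64(&XLogCtl->InitializedUpTo,
>                                           &NewPageBeginPtr, NewPageEndPtr))
>     {
>         /* We won the CAS: keep walking over buffers that other backends
>          * have already initialized and published into xlblocks. */
>         NewPageBeginPtr = NewPageEndPtr;
>         nextidx = XLogRecPtrToBufIdx(NewPageBeginPtr);
>         NewPageEndPtr = pg_atomic_read_u64(&XLogCtl->xlblocks[nextidx]);
>         if (NewPageEndPtr != NewPageBeginPtr + XLOG_BLCKSZ)
>             break;              /* next buffer is not initialized yet */
>     }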
>
> I didn't benchmark my version, just checked that tests passed.
Good day, Alexander.

I've got mixed but quite close results for both approaches (single or many
ConditionVariables) on my notebook. Since I have no access to a larger
machine, I can't prove "many" is way better (or discover it is worse).

Given the patch after cleanup looks a bit smaller and clearer, I agree to
keep just a single condition variable.

The cleaned version is attached.
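
With a single condition variable, the wait side in the patch boils down to:

    /* Wait until all pages we need are initialized. */
    while (upto >= pg_atomic_read_u64(&XLogCtl->InitializedUpTo))
        ConditionVariableSleep(&XLogCtl->InitializedUpToCondVar,
                               WAIT_EVENT_WAL_BUFFER_INIT);
    ConditionVariableCancelSleep();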
I've changed the condition for broadcast a bit ("less" instead of "not
equal"), as the annotated excerpt below shows:
- a buffer's border may already have gone into the future,
- and then another backend will reach the not-yet-initialized buffer and
will broadcast.
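
In code, the tail of the advancing loop becomes (annotated excerpt from the
attached patch):

    NewPageEndPtr = pg_atomic_read_u64(&XLogCtl->xlblocks[nextidx]);

    if (NewPageEndPtr < NewPageBeginPtr + XLOG_BLCKSZ)
        /* We stopped at a not-yet-initialized buffer: wake waiters for
         * the progress made so far. */
        ConditionVariableBroadcast(&XLogCtl->InitializedUpToCondVar);
    if (NewPageEndPtr != NewPageBeginPtr + XLOG_BLCKSZ)
        break;  /* "<": not initialized yet; ">": the slot was already
                 * reused for a future page, so whoever reaches the real
                 * frontier will broadcast instead */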
-------
regards
Yura Sokolov aka funny-falcon
From 709ef74a8424fe626e2a2170eb9a8a1493e23cb6 Mon Sep 17 00:00:00 2001
From: Yura Sokolov <y.soko...@postgrespro.ru>
Date: Sat, 18 Jan 2025 23:50:09 +0300
Subject: [PATCH v2 1/2] Get rid of WALBufMappingLock

Allow many backends to concurrently initialize XLog buffers.

This way `MemSet((char *) NewPage, 0, XLOG_BLCKSZ);` is not under a single
LWLock held in exclusive mode.
Algorithm:
- a backend first reserves a page for initialization,
- then it ensures the old page that occupied the buffer was written out,
- then it initializes the page and signals concurrent initializers using
a ConditionVariable,
- once it has reserved all the pages it needs, it waits until all required
pages complete initialization.

Many backends concurrently reserve pages, initialize them, and advance
XLogCtl->InitializedUpTo to point at the latest initialized page.
---
src/backend/access/transam/xlog.c | 144 +++++++++++-------
.../utils/activity/wait_event_names.txt | 2 +-
src/include/storage/lwlocklist.h | 2 +-
3 files changed, 90 insertions(+), 58 deletions(-)
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 9c270e7d466..c4b80ede5da 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -302,11 +302,6 @@ static bool doPageWrites;
* so it's a plain spinlock. The other locks are held longer (potentially
* over I/O operations), so we use LWLocks for them. These locks are:
*
- * WALBufMappingLock: must be held to replace a page in the WAL buffer cache.
- * It is only held while initializing and changing the mapping. If the
- * contents of the buffer being replaced haven't been written yet, the mapping
- * lock is released while the write is done, and reacquired afterwards.
- *
* WALWriteLock: must be held to write WAL buffers to disk (XLogWrite or
* XLogFlush).
*
@@ -472,22 +467,33 @@ typedef struct XLogCtlData
pg_atomic_uint64 logWriteResult; /* last byte + 1 written out */
pg_atomic_uint64 logFlushResult; /* last byte + 1 flushed */
+ /*
+ * Latest page in the cache that is initialized or reserved for
+ * initialization (last byte position + 1).
+ *
+ * It should be advanced before the identity of a buffer is changed.
+ * To change the identity of a buffer that's still dirty, the old page
+ * needs to be written out first, and for that you need WALWriteLock, and
+ * you need to ensure that there are no in-progress insertions to the page
+ * by calling WaitXLogInsertionsToFinish().
+ */
+ pg_atomic_uint64 InitializeReserved;
+
/*
* Latest initialized page in the cache (last byte position + 1).
*
- * To change the identity of a buffer (and InitializedUpTo), you need to
- * hold WALBufMappingLock. To change the identity of a buffer that's
- * still dirty, the old page needs to be written out first, and for that
- * you need WALWriteLock, and you need to ensure that there are no
- * in-progress insertions to the page by calling
- * WaitXLogInsertionsToFinish().
+ * It is advanced as buffers are successfully initialized.
*/
- XLogRecPtr InitializedUpTo;
+ pg_atomic_uint64 InitializedUpTo;
+
+ /* Notification for update of InitializedUpTo. */
+ ConditionVariable InitializedUpToCondVar;
/*
* These values do not change after startup, although the pointed-to pages
- * and xlblocks values certainly do. xlblocks values are protected by
- * WALBufMappingLock.
+ and xlblocks values certainly do. xlblocks values are changed
+ lock-free, in cooperation with InitializeReserved+InitializedUpTo and
+ a check of the write position.
*/
char *pages; /* buffers for unwritten XLOG pages */
pg_atomic_uint64 *xlblocks; /* 1st byte ptr-s + XLOG_BLCKSZ */
@@ -810,9 +816,9 @@ XLogInsertRecord(XLogRecData *rdata,
* fullPageWrites from changing until the insertion is finished.
*
* Step 2 can usually be done completely in parallel. If the required WAL
- * page is not initialized yet, you have to grab WALBufMappingLock to
- * initialize it, but the WAL writer tries to do that ahead of insertions
- * to avoid that from happening in the critical path.
+ * page is not initialized yet, you have to go through AdvanceXLInsertBuffer,
+ * which will ensure it is initialized. But the WAL writer tries to do that
+ * ahead of insertions to avoid that happening in the critical path.
*
*----------
*/
@@ -1991,32 +1997,43 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, TimeLineID tli, bool opportunistic)
XLogRecPtr NewPageEndPtr = InvalidXLogRecPtr;
XLogRecPtr NewPageBeginPtr;
XLogPageHeader NewPage;
+ XLogRecPtr ReservedPtr;
int npages pg_attribute_unused() = 0;
- LWLockAcquire(WALBufMappingLock, LW_EXCLUSIVE);
-
- /*
- * Now that we have the lock, check if someone initialized the page
- * already.
- */
- while (upto >= XLogCtl->InitializedUpTo || opportunistic)
+ /* Try to initialize pages we need in WAL buffer. */
+ ReservedPtr = pg_atomic_read_u64(&XLogCtl->InitializeReserved);
+ while (upto >= ReservedPtr || opportunistic)
{
- nextidx = XLogRecPtrToBufIdx(XLogCtl->InitializedUpTo);
-
/*
- * Get ending-offset of the buffer page we need to replace (this may
- * be zero if the buffer hasn't been used yet). Fall through if it's
- * already written out.
+ * Get ending-offset of the buffer page we need to replace.
+ *
+ * We don't look into xlblocks, but rather calculate the position we
+ * must wait to be written out. If it was written out, xlblocks will
+ * hold this position (or be uninitialized).
*/
- OldPageRqstPtr = pg_atomic_read_u64(&XLogCtl->xlblocks[nextidx]);
- if (LogwrtResult.Write < OldPageRqstPtr)
+ if (ReservedPtr + XLOG_BLCKSZ > XLOG_BLCKSZ * XLOGbuffers)
+ OldPageRqstPtr = ReservedPtr + XLOG_BLCKSZ - XLOG_BLCKSZ * XLOGbuffers;
+ else
+ OldPageRqstPtr = InvalidXLogRecPtr;
+
+ if (LogwrtResult.Write < OldPageRqstPtr && opportunistic)
{
/*
- * Nope, got work to do. If we just want to pre-initialize as much
- * as we can without flushing, give up now.
+ * If we just want to pre-initialize as much as we can without
+ * flushing, give up now.
*/
- if (opportunistic)
- break;
+ upto = ReservedPtr - 1;
+ break;
+ }
+
+ /* Actually reserve the page for initialization. */
+ if (!pg_atomic_compare_exchange_u64(&XLogCtl->InitializeReserved, &ReservedPtr, ReservedPtr + XLOG_BLCKSZ))
+ continue;
+
+ /* Fall through if it's already written out. */
+ if (LogwrtResult.Write < OldPageRqstPtr)
+ {
+ /* Nope, got work to do. */
/* Advance shared memory write request position */
SpinLockAcquire(&XLogCtl->info_lck);
@@ -2031,14 +2048,6 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, TimeLineID tli, bool opportunistic)
RefreshXLogWriteResult(LogwrtResult);
if (LogwrtResult.Write < OldPageRqstPtr)
{
- /*
- * Must acquire write lock. Release WALBufMappingLock first,
- * to make sure that all insertions that we need to wait for
- * can finish (up to this same position). Otherwise we risk
- * deadlock.
- */
- LWLockRelease(WALBufMappingLock);
-
WaitXLogInsertionsToFinish(OldPageRqstPtr);
LWLockAcquire(WALWriteLock, LW_EXCLUSIVE);
@@ -2060,9 +2069,6 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, TimeLineID tli, bool opportunistic)
PendingWalStats.wal_buffers_full++;
TRACE_POSTGRESQL_WAL_BUFFER_WRITE_DIRTY_DONE();
}
- /* Re-acquire WALBufMappingLock and retry */
- LWLockAcquire(WALBufMappingLock, LW_EXCLUSIVE);
- continue;
}
}
@@ -2070,19 +2076,26 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, TimeLineID tli, bool opportunistic)
* Now the next buffer slot is free and we can set it up to be the
* next output page.
*/
- NewPageBeginPtr = XLogCtl->InitializedUpTo;
+ NewPageBeginPtr = ReservedPtr;
NewPageEndPtr = NewPageBeginPtr + XLOG_BLCKSZ;
+ nextidx = XLogRecPtrToBufIdx(ReservedPtr);
- Assert(XLogRecPtrToBufIdx(NewPageBeginPtr) == nextidx);
+#ifdef USE_ASSERT_CHECKING
+ {
+ XLogRecPtr storedBound = pg_atomic_read_u64(&XLogCtl->xlblocks[nextidx]);
+
+ Assert(storedBound == OldPageRqstPtr || storedBound == InvalidXLogRecPtr);
+ }
+#endif
NewPage = (XLogPageHeader) (XLogCtl->pages + nextidx * (Size) XLOG_BLCKSZ);
/*
- * Mark the xlblock with InvalidXLogRecPtr and issue a write barrier
- * before initializing. Otherwise, the old page may be partially
- * zeroed but look valid.
+ * Mark the xlblock with (InvalidXLogRecPtr+1) and issue a write
+ * barrier before initializing. Otherwise, the old page may be
+ * partially zeroed but look valid.
*/
- pg_atomic_write_u64(&XLogCtl->xlblocks[nextidx], InvalidXLogRecPtr);
+ pg_atomic_write_u64(&XLogCtl->xlblocks[nextidx], InvalidXLogRecPtr + 1);
pg_write_barrier();
/*
@@ -2139,11 +2152,25 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, TimeLineID tli, bool opportunistic)
pg_write_barrier();
pg_atomic_write_u64(&XLogCtl->xlblocks[nextidx], NewPageEndPtr);
- XLogCtl->InitializedUpTo = NewPageEndPtr;
+
+ while (pg_atomic_compare_exchange_u64(&XLogCtl->InitializedUpTo, &NewPageBeginPtr, NewPageEndPtr))
+ {
+ NewPageBeginPtr = NewPageEndPtr;
+ nextidx = XLogRecPtrToBufIdx(NewPageBeginPtr);
+ NewPageEndPtr = pg_atomic_read_u64(&XLogCtl->xlblocks[nextidx]);
+
+ if (NewPageEndPtr < NewPageBeginPtr + XLOG_BLCKSZ)
+ ConditionVariableBroadcast(&XLogCtl->InitializedUpToCondVar);
+ if (NewPageEndPtr != NewPageBeginPtr + XLOG_BLCKSZ)
+ break;
+ }
npages++;
}
- LWLockRelease(WALBufMappingLock);
+
+ while (upto >= pg_atomic_read_u64(&XLogCtl->InitializedUpTo))
+ ConditionVariableSleep(&XLogCtl->InitializedUpToCondVar, WAIT_EVENT_WAL_BUFFER_INIT);
+ ConditionVariableCancelSleep();
#ifdef WAL_DEBUG
if (XLOG_DEBUG && npages > 0)
@@ -5044,6 +5071,10 @@ XLOGShmemInit(void)
pg_atomic_init_u64(&XLogCtl->logWriteResult, InvalidXLogRecPtr);
pg_atomic_init_u64(&XLogCtl->logFlushResult, InvalidXLogRecPtr);
pg_atomic_init_u64(&XLogCtl->unloggedLSN, InvalidXLogRecPtr);
+
+ pg_atomic_init_u64(&XLogCtl->InitializeReserved, InvalidXLogRecPtr);
+ pg_atomic_init_u64(&XLogCtl->InitializedUpTo, InvalidXLogRecPtr);
+ ConditionVariableInit(&XLogCtl->InitializedUpToCondVar);
}
/*
@@ -6063,7 +6094,7 @@ StartupXLOG(void)
memset(page + len, 0, XLOG_BLCKSZ - len);
pg_atomic_write_u64(&XLogCtl->xlblocks[firstIdx], endOfRecoveryInfo->lastPageBeginPtr + XLOG_BLCKSZ);
- XLogCtl->InitializedUpTo = endOfRecoveryInfo->lastPageBeginPtr + XLOG_BLCKSZ;
+ pg_atomic_write_u64(&XLogCtl->InitializedUpTo, endOfRecoveryInfo->lastPageBeginPtr + XLOG_BLCKSZ);
}
else
{
@@ -6072,8 +6103,9 @@ StartupXLOG(void)
* let the first attempt to insert a log record to initialize the next
* buffer.
*/
- XLogCtl->InitializedUpTo = EndOfLog;
+ pg_atomic_write_u64(&XLogCtl->InitializedUpTo, EndOfLog);
}
+ pg_atomic_write_u64(&XLogCtl->InitializeReserved, pg_atomic_read_u64(&XLogCtl->InitializedUpTo));
/*
* Update local and shared status. This is OK to do without any locks
diff --git a/src/backend/utils/activity/wait_event_names.txt b/src/backend/utils/activity/wait_event_names.txt
index e199f071628..ccf73781d81 100644
--- a/src/backend/utils/activity/wait_event_names.txt
+++ b/src/backend/utils/activity/wait_event_names.txt
@@ -155,6 +155,7 @@ REPLICATION_SLOT_DROP "Waiting for a replication slot to become inactive so it c
RESTORE_COMMAND "Waiting for <xref linkend="guc-restore-command"/> to complete."
SAFE_SNAPSHOT "Waiting to obtain a valid snapshot for a <literal>READ ONLY DEFERRABLE</literal> transaction."
SYNC_REP "Waiting for confirmation from a remote server during synchronous replication."
+WAL_BUFFER_INIT "Waiting for a WAL buffer to be initialized."
WAL_RECEIVER_EXIT "Waiting for the WAL receiver to exit."
WAL_RECEIVER_WAIT_START "Waiting for startup process to send initial data for streaming replication."
WAL_SUMMARY_READY "Waiting for a new WAL summary to be generated."
@@ -310,7 +311,6 @@ XidGen "Waiting to allocate a new transaction ID."
ProcArray "Waiting to access the shared per-process data structures (typically, to get a snapshot or report a session's transaction ID)."
SInvalRead "Waiting to retrieve messages from the shared catalog invalidation queue."
SInvalWrite "Waiting to add a message to the shared catalog invalidation queue."
-WALBufMapping "Waiting to replace a page in WAL buffers."
WALWrite "Waiting for WAL buffers to be written to disk."
ControlFile "Waiting to read or update the <filename>pg_control</filename> file or create a new WAL file."
MultiXactGen "Waiting to read or update shared multixact state."
diff --git a/src/include/storage/lwlocklist.h b/src/include/storage/lwlocklist.h
index cf565452382..ff897515769 100644
--- a/src/include/storage/lwlocklist.h
+++ b/src/include/storage/lwlocklist.h
@@ -37,7 +37,7 @@ PG_LWLOCK(3, XidGen)
PG_LWLOCK(4, ProcArray)
PG_LWLOCK(5, SInvalRead)
PG_LWLOCK(6, SInvalWrite)
-PG_LWLOCK(7, WALBufMapping)
+/* 7 was WALBufMapping */
PG_LWLOCK(8, WALWrite)
PG_LWLOCK(9, ControlFile)
/* 10 was CheckpointLock */
--
2.43.0
From 0ec1841eace0bf108e1f07e882e0da9c78e464a0 Mon Sep 17 00:00:00 2001
From: Yura Sokolov <y.soko...@postgrespro.ru>
Date: Thu, 16 Jan 2025 15:30:57 +0300
Subject: [PATCH v2 2/2] Make several attempts to acquire a WALInsertLock
---
src/backend/access/transam/xlog.c | 47 ++++++++++++++++++-------------
1 file changed, 28 insertions(+), 19 deletions(-)
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index c4b80ede5da..8f6fd77aac4 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -68,6 +68,7 @@
#include "catalog/pg_database.h"
#include "common/controldata_utils.h"
#include "common/file_utils.h"
+#include "common/pg_prng.h"
#include "executor/instrument.h"
#include "miscadmin.h"
#include "pg_trace.h"
@@ -1376,8 +1377,7 @@ CopyXLogRecordToWAL(int write_len, bool isLogSwitch, XLogRecData *rdata,
static void
WALInsertLockAcquire(void)
{
- bool immed;
-
+ int attempts = 2;
/*
* It doesn't matter which of the WAL insertion locks we acquire, so try
* the one we used last time. If the system isn't particularly busy, it's
@@ -1389,29 +1389,38 @@ WALInsertLockAcquire(void)
* (semi-)randomly. This allows the locks to be used evenly if you have a
* lot of very short connections.
*/
- static int lockToTry = -1;
+ static uint32 lockToTry = 0;
+ static uint32 lockDelta = 0;
- if (lockToTry == -1)
- lockToTry = MyProcNumber % NUM_XLOGINSERT_LOCKS;
- MyLockNo = lockToTry;
+ if (lockDelta == 0)
+ {
+ uint32 rng = pg_prng_uint32(&pg_global_prng_state);
+
+ lockToTry = rng % NUM_XLOGINSERT_LOCKS;
+ lockDelta = ((rng >> 16) % NUM_XLOGINSERT_LOCKS) | 1; /* must be odd */
+ }
/*
* The insertingAt value is initially set to 0, as we don't know our
* insert location yet.
*/
- immed = LWLockAcquire(&WALInsertLocks[MyLockNo].l.lock, LW_EXCLUSIVE);
- if (!immed)
- {
- /*
- * If we couldn't get the lock immediately, try another lock next
- * time. On a system with more insertion locks than concurrent
- * inserters, this causes all the inserters to eventually migrate to a
- * lock that no-one else is using. On a system with more inserters
- * than locks, it still helps to distribute the inserters evenly
- * across the locks.
- */
- lockToTry = (lockToTry + 1) % NUM_XLOGINSERT_LOCKS;
- }
+ MyLockNo = lockToTry;
+retry:
+ if (LWLockConditionalAcquire(&WALInsertLocks[MyLockNo].l.lock, LW_EXCLUSIVE))
+ return;
+ /*
+ * If we couldn't get the lock immediately, try another lock next
+ * time. On a system with more insertion locks than concurrent
+ * inserters, this causes all the inserters to eventually migrate to a
+ * lock that no-one else is using. On a system with more inserters
+ * than locks, it still helps to distribute the inserters evenly
+ * across the locks.
+ */
+ lockToTry = (lockToTry + lockDelta) % NUM_XLOGINSERT_LOCKS;
+ MyLockNo = lockToTry;
+ if (--attempts)
+ goto retry;
+ LWLockAcquire(&WALInsertLocks[MyLockNo].l.lock, LW_EXCLUSIVE);
}
/*
--
2.43.0