On Tue, May 23, 2023 at 08:24:45PM +0000, John Morris wrote:
> This is a short patch that cleans up the code for unlogged LSNs. It
> replaces the existing spinlock with atomic ops.  It could provide a
> performance benefit for future uses of unlogged LSNs, but for now
> it is simply a cleaner implementation.

Seeing the code paths where gistGetFakeLSN() is called, I would guess
that the answer is no, but still: are you seeing a measurable
difference in some cases?

-   /* increment the unloggedLSN counter, need SpinLock */
-   SpinLockAcquire(&XLogCtl->ulsn_lck);
-   nextUnloggedLSN = XLogCtl->unloggedLSN++;
-   SpinLockRelease(&XLogCtl->ulsn_lck);
-
-   return nextUnloggedLSN;
+   return pg_atomic_fetch_add_u64(&XLogCtl->unloggedLSN, 1);

Spinlocks provide a full memory barrier, which may not be the case
with add_u64() or read_u64().  Could memory ordering be a problem in
these code paths?
--
Michael
