Hi, Alvaro! Thank you for your feedback.
On Wed, Apr 3, 2024 at 9:58 AM Alvaro Herrera <alvhe...@alvh.no-ip.org> wrote:
> Hello, I noticed that commit 06c418e163e9 uses waitLSN->mutex (a
> spinlock) to protect the contents of waitLSN -- and it's used to walk an
> arbitrarily long list of waiting processes ... and also, an arbitrary
> number of processes could be calling this code.  I think using a
> spinlock for this is unwise, as it'll cause busy-waiting whenever
> there's contention.  Wouldn't it be better to use an LWLock for this?
> Then the processes would sleep until the lock is freed.
>
> While nosing about the code, other things struck me:
>
> I think there should be more comments about WaitLSNProcInfo and
> WaitLSNState in waitlsn.h.
>
> In addLSNWaiter it'd be better to assign 'cur' before acquiring the
> lock.
>
> Is a plain array really the most efficient data structure for this,
> considering that you have to reorder each time you add an element?
> Maybe it is, but wouldn't it make sense to use memmove() when adding one
> element rather than iterating over all the remaining elements to the end
> of the queue?
>
> I think the include list in waitlsn.c could be tightened a bit.

I've just pushed a commit which shortens the include list and fixes the
order of the 'cur' assignment and the spinlock acquisition in
addLSNWaiter().

Regarding the shmem data structure for LSN waiters: I didn't pick an
LWLock or a ConditionVariable, because I needed the ability to wake up
only those waiters whose LSN has already been replayed.  In my
experience, waking up a process is much slower than scanning a short
flat array.

However, I agree that a flat array may become a problem when the number
of waiters is very high.  It seems that a pairing heap is not hard to use
for shmem structures.  The only memory allocation call in pairingheap.c
is in pairingheap_allocate(), so all that's needed is the ability to
initialize a pairing heap in-place, and then it will work fine in shmem.
A rough sketch of what I mean is in the P.S. below.

I'll come back with a patch switching to the pairing heap shortly.

------
Regards,
Alexander Korotkov
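
P.S.  Just to illustrate the in-place initialization idea, here is a
rough sketch, not an actual patch.  The pairingheap_initialize() name,
the waitersHeap field, and the waitlsn_cmp comparator are provisional
names for this example.  The point is that the initializer only fills in
the existing pairingheap fields without any allocation, so the struct
can be embedded directly in a shared-memory structure:

    #include "lib/pairingheap.h"

    /*
     * Hypothetical in-place initializer: sets up the fields that
     * pairingheap_allocate() currently fills in after palloc().
     */
    void
    pairingheap_initialize(pairingheap *heap,
                           pairingheap_comparator compare, void *arg)
    {
        heap->ph_compare = compare;
        heap->ph_arg = arg;
        heap->ph_root = NULL;
    }

    /*
     * Then, during shmem initialization in waitlsn.c, something like
     * this could set up a heap embedded in the shared state instead of
     * the flat array (field and comparator names are made up here):
     */
    pairingheap_initialize(&waitLSN->waitersHeap, waitlsn_cmp, NULL);

pairingheap_allocate() could then be reduced to a palloc() followed by
this call, so existing callers wouldn't need to change.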