Hi, Alexander!

On Wed, 3 Apr 2024 at 22:18, Alexander Korotkov <aekorot...@gmail.com>
wrote:

> On Wed, Apr 3, 2024 at 7:55 PM Alvaro Herrera <alvhe...@alvh.no-ip.org>
> wrote:
> >
> > On 2024-Apr-03, Alexander Korotkov wrote:
> >
> > > Regarding the shmem data structure for LSN waiters.  I didn't pick
> > > LWLock or ConditionVariable, because I needed the ability to wake up
> > > only those waiters whose LSN is already replayed.  In my experience
> > > waking up a process is way slower than scanning a short flat array.
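
For my own understanding, a minimal sketch of that flat-array scan (the
names numWaiters and waitLSNState are my guesses, not necessarily the
patch's actual identifiers):

    /* Wake only those waiters whose target LSN has already been replayed. */
    static void
    WakeupSatisfiedWaiters(XLogRecPtr replayedLSN)
    {
        LWLockAcquire(WaitLSNLock, LW_EXCLUSIVE);
        for (int i = 0; i < waitLSNState->numWaiters; i++)
        {
            WaitLSNProcInfo *cur = &waitLSNState->procInfos[i];

            /* Skip waiters whose LSN hasn't been replayed yet. */
            if (cur->waitLSN <= replayedLSN)
                SetLatch(&GetPGProcByNumber(cur->procnum)->procLatch);
        }
        LWLockRelease(WaitLSNLock);
    }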
> >
> > I agree, but I think that's unrelated to what I was saying, which is
> > just the patch I attach here.
>
> Oh, sorry for the confusion.  I've re-read your message.  Indeed, you
> made this very clear!
>
> I'm good with the patch.  The attached revision adds a draft commit
> message.
>
> > > However, I agree that when the number of waiters is very high, a flat
> > > array may become a problem.  It seems that the pairing heap is not
> > > hard to use for shmem structures.  The only memory allocation call in
> > > pairingheap.c is in pairingheap_allocate().  So, we only need to be
> > > able to initialize the pairing heap in-place, and it will be fine
> > > for shmem.
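
For reference, in-place initialization could be as simple as doing what
pairingheap_allocate() does minus the palloc (the function name
pairingheap_initialize is my placeholder; field names follow
lib/pairingheap.h):

    /* Initialize a caller-provided pairing heap, e.g. one living in shmem. */
    void
    pairingheap_initialize(pairingheap *heap, pairingheap_comparator compare,
                           void *arg)
    {
        heap->ph_compare = compare;
        heap->ph_arg = arg;
        heap->ph_root = NULL;
    }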
> >
> > Ok.
> >
> > With the code as it stands today, everything in WaitLSNState apart from
> > the pairing heap is accessed without any locking.  I think this is at
> > least partly OK because each backend only accesses its own entry; but it
> > deserves a comment.  Or maybe something more, because WaitLSNSetLatches
> > does modify the entry for other backends.  (Admittedly, this could only
> > happen for backends that are already sleeping, and it only happens
> > with the lock acquired, so it's probably okay.  But clearly it deserves
> > a comment.)
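
Perhaps the comment could go right on the struct, something like this
sketch (field names are illustrative, following the patch's naming):

    typedef struct WaitLSNState
    {
        /* Pairing heap of waiters, ordered by LSN.  Protected by WaitLSNLock. */
        pairingheap     waitersHeap;

        /*
         * Per-backend wait information.  Each backend reads and writes only
         * its own entry outside the lock; WaitLSNSetLatches() may modify
         * entries of other (sleeping) backends, but only while holding
         * WaitLSNLock.
         */
        WaitLSNProcInfo procInfos[FLEXIBLE_ARRAY_MEMBER];
    } WaitLSNState;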
>
> Please check the attached 0002 patch.  I found it easier to move the two
> assignments we previously moved out of the lock back inside it; then we
> can claim that WaitLSNState.procInfos is also protected by WaitLSNLock.
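
If I read this right, the add-waiter path then looks roughly like this
(a sketch; the exact variable names may differ from the patch):

    LWLockAcquire(WaitLSNLock, LW_EXCLUSIVE);

    /* These two assignments used to happen before taking the lock; with
     * them moved here, WaitLSNLock protects all of procInfos as well. */
    procInfo->procnum = MyProcNumber;
    procInfo->waitLSN = targetLSN;

    pairingheap_add(&waitLSN->waitersHeap, &procInfo->phNode);

    LWLockRelease(WaitLSNLock);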
>
Could you re-attach 0002? It seems it failed to attach to the previous
message.

Regards,
Pavel
