On Tue, Jun 9, 2020 at 8:12 PM Andres Freund <and...@anarazel.de> wrote:
> I don't think the size is worth of concern in this case, and I'm not
> sure there's any current case where it's really worth spending effort
> reducing size. But if there is: It seems possible to reduce the size.

Yeah, I don't think it's very important.

> First, we could remove the tranche from the lwlock, and instead perform
> more work when we need to know it. Which is only when we're going to
> sleep, so it'd be ok if it's not that much work. Perhaps we could even
> defer determining the tranche to the the *read* side of the wait event
> (presumably that'd require making the pgstat side a bit more
> complicated).
>
> Second, it seems like it should be doable to reduce the size of the
> waiters list. We e.g. could have a separate 'array of wait lists' array
> in shared memory, which gets assigned to an lwlock whenever a backend
> wants to wait for an lwlock. The number of processes waiting for lwlocks
> is clearly limited by MAX_BACKENDS / 2^18-1 backends waiting, so one 4
> byte integer pointing to a wait list obviously would suffice.
>
> But again, I'm not sure the current size a real problem anywhere.

Honestly, both of these sound more painful than it's worth. We're not
likely to have enough LWLocks that using 16 bytes for each one rather
than 8 is a major problem. With regard to the first of these ideas,
bear in mind that the LWLock might be in a DSM segment that the reader
doesn't have mapped.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Reply via email to