Hi,

On 2020-06-09 15:20:08 -0400, Robert Haas wrote:
> If you're worried about space, an LWLock is only 16 bytes, and the
> slock_t that we'd be replacing is currently at the end of the struct
> so presumably followed by some padding.
I don't think the size is worth worrying about in this case, and I'm not
sure there's any current case where it's really worth spending effort
reducing it. But if there is: it seems possible to reduce the size.

struct LWLock {
	uint16                     tranche;              /*     0     2 */

	/* XXX 2 bytes hole, try to pack */

	pg_atomic_uint32           state;                /*     4     4 */
	proclist_head              waiters;              /*     8     8 */

	/* size: 16, cachelines: 1, members: 3 */
	/* sum members: 14, holes: 1, sum holes: 2 */
	/* last cacheline: 16 bytes */
};

First, we could remove the tranche from the lwlock and instead perform
more work when we need to know it. That is only when we're going to
sleep, so it'd be OK if it's not that much work. Perhaps we could even
defer determining the tranche to the *read* side of the wait event
(presumably that'd require making the pgstat side a bit more
complicated).

Second, it seems like it should be doable to reduce the size of the
waiters list. We could e.g. have a separate 'array of wait lists' in
shared memory, which gets assigned to an lwlock whenever a backend
wants to wait for it. The number of processes waiting for lwlocks is
limited by MAX_BACKENDS, i.e. at most 2^18-1 backends can be waiting,
so one 4-byte integer pointing to a wait list would obviously suffice.

But again, I'm not sure the current size is a real problem anywhere.

Greetings,

Andres Freund
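To illustrate the first idea, here's a minimal sketch of computing a lock's tranche from its address on the sleep path instead of storing it in the struct. All names here (TrancheRange, register_tranche_range, lookup_tranche) are invented for illustration, not existing PostgreSQL APIs; the real thing would register the address ranges of the various lock arrays at startup:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical: one entry per shared-memory array of LWLocks. */
typedef struct TrancheRange
{
	uintptr_t	start;		/* first byte of the tranche's lock array */
	uintptr_t	end;		/* one past its last byte */
	uint16_t	tranche;	/* tranche id for locks in [start, end) */
} TrancheRange;

static TrancheRange tranche_ranges[64];
static int	num_tranche_ranges;

static void
register_tranche_range(void *base, size_t nbytes, uint16_t tranche)
{
	tranche_ranges[num_tranche_ranges].start = (uintptr_t) base;
	tranche_ranges[num_tranche_ranges].end = (uintptr_t) base + nbytes;
	tranche_ranges[num_tranche_ranges].tranche = tranche;
	num_tranche_ranges++;
}

/*
 * Only reached when we're about to sleep, so a linear scan over a
 * handful of ranges is acceptable extra work.
 */
static uint16_t
lookup_tranche(void *lock)
{
	uintptr_t	addr = (uintptr_t) lock;

	for (int i = 0; i < num_tranche_ranges; i++)
	{
		if (addr >= tranche_ranges[i].start && addr < tranche_ranges[i].end)
			return tranche_ranges[i].tranche;
	}
	return 0;				/* unknown */
}
```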
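And a sketch of the second idea: shrinking the embedded proclist_head to a 4-byte index into a shared pool of wait lists, handed out from a freelist when a backend needs to wait. The names (WaitList, SmallLWLock, waitlist_alloc, etc.) are all hypothetical; plain uint32_t stands in for pg_atomic_uint32, and a static array stands in for the shared-memory pool, which would really be sized by MAX_BACKENDS:

```c
#include <stdint.h>
#include <stddef.h>

#define NO_WAITLIST UINT32_MAX	/* sentinel: lock has no wait list */
#define POOL_SIZE 16			/* would be MAX_BACKENDS-sized in reality */

typedef struct WaitList
{
	uint32_t	head;			/* first waiting backend, or NO_WAITLIST */
	uint32_t	tail;
	uint32_t	next_free;		/* freelist link while the entry is unused */
} WaitList;

/* 8 bytes instead of 16: a 4-byte index replaces the 8-byte list head. */
typedef struct SmallLWLock
{
	uint32_t	state;			/* stand-in for pg_atomic_uint32 */
	uint32_t	waitlist;		/* index into waitlist_pool, or NO_WAITLIST */
} SmallLWLock;

static WaitList waitlist_pool[POOL_SIZE];
static uint32_t free_head = NO_WAITLIST;

static void
waitlist_pool_init(void)
{
	for (size_t i = 0; i < POOL_SIZE; i++)
	{
		waitlist_pool[i].next_free = free_head;
		free_head = (uint32_t) i;
	}
}

/* Grab a wait list for a lock that is about to get its first waiter. */
static uint32_t
waitlist_alloc(void)
{
	uint32_t	idx = free_head;

	if (idx != NO_WAITLIST)
	{
		free_head = waitlist_pool[idx].next_free;
		waitlist_pool[idx].head = NO_WAITLIST;
		waitlist_pool[idx].tail = NO_WAITLIST;
	}
	return idx;
}

/* Return a wait list to the pool once the last waiter has been woken. */
static void
waitlist_free(uint32_t idx)
{
	waitlist_pool[idx].next_free = free_head;
	free_head = idx;
}
```

Since at most MAX_BACKENDS processes can be waiting at once, a pool of that many wait lists can never be exhausted, and the per-lock cost drops to one 4-byte index.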