On Thu, Sep 04, 2025 at 12:30:27PM -0500, Sami Imseih wrote:
> I liked removing the repalloc calls inside this routine and did not think
> it was worth optimizing. I am OK with reverting it. Although v1
> is incorrect since it's still initializing
> NamedLWLockTrancheRequestArray to MAX_NAMED_
On Thu, Sep 04, 2025 at 03:29:52PM +0530, Rahila Syed wrote:
> Since updates to LocalLWLockCounter and LWLockTrancheNames are performed
> while holding the lock, but reading LocalLWLockCounter and then
> LWLockTrancheNames in GetLWTrancheName can occur without acquiring the
> same lock in case tran
> I'm having some regrets about the changes to RequestNamedLWLockTranche().
> Specifically, when it is first called, it immediately allocates an array
> big enough to hold 256 requests (~17 KB), whereas it used to only allocate
> space for 16 requests (~1 KB) and resize as needed.
I liked removing
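For context, a minimal standalone sketch of the grow-as-needed allocation the quoted concern refers to (in lwlock.c this was done with MemoryContextAlloc/repalloc in TopMemoryContext; plain malloc/realloc and illustrative names stand in here, and error handling is omitted):

```c
#include <stdlib.h>
#include <string.h>

#define NAMEDATALEN 64				/* matches PostgreSQL's default */
#define INITIAL_NAMED_REQUESTS 16	/* start small, grow on demand */

typedef struct NamedTrancheRequest
{
	char	name[NAMEDATALEN];
	int		num_lwlocks;
} NamedTrancheRequest;

static NamedTrancheRequest *requests = NULL;
static int	num_requests = 0;
static int	allocated_requests = 0;

static void
add_named_tranche_request(const char *name, int num_lwlocks)
{
	if (requests == NULL)
	{
		/* first request: room for only a handful of entries (~1 KB) */
		allocated_requests = INITIAL_NAMED_REQUESTS;
		requests = malloc(allocated_requests * sizeof(NamedTrancheRequest));
	}
	else if (num_requests >= allocated_requests)
	{
		/* double the array instead of reserving the maximum up front */
		allocated_requests *= 2;
		requests = realloc(requests,
						   allocated_requests * sizeof(NamedTrancheRequest));
	}

	strncpy(requests[num_requests].name, name, NAMEDATALEN - 1);
	requests[num_requests].name[NAMEDATALEN - 1] = '\0';
	requests[num_requests].num_lwlocks = num_lwlocks;
	num_requests++;
}
```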
On Wed, Sep 03, 2025 at 02:01:14PM -0500, Nathan Bossart wrote:
> Committed.
I'm having some regrets about the changes to RequestNamedLWLockTranche().
Specifically, when it is first called, it immediately allocates an array
big enough to hold 256 requests (~17 KB), whereas it used to only allocate
Hi Nathan,
> We only ever add new tranche names to the end of the list, so we should be
> able to avoid the lock for the vast majority of tranche name lookups. We
> hold the lock while we increment LWLockCounter and add a name, and also
> whenever we update LocalLWLockCounter. This should preve
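A simplified, standalone model of the scheme described in the quote: append-only name slots, a shared counter bumped under the lock by writers, and a per-backend local counter so readers only take the lock when they see an unknown ID. The arrays, counters, and pthread mutex are stand-ins for the shared-memory structures and lock, not the committed lwlock.c code:

```c
#include <pthread.h>
#include <stddef.h>
#include <string.h>

#define MAX_TRANCHES 256
#define NAMEDATALEN  64

/* stand-ins for the shared-memory structures and their lock */
static char            tranche_names[MAX_TRANCHES][NAMEDATALEN];
static int             shared_counter;		/* protected by names_lock */
static pthread_mutex_t names_lock = PTHREAD_MUTEX_INITIALIZER;

/* per-backend cache of how many names this process has already seen */
static int local_counter;

static int
add_tranche_name(const char *name)
{
	int		id;

	/* bounds check against MAX_TRANCHES omitted for brevity */
	pthread_mutex_lock(&names_lock);
	id = shared_counter;
	strncpy(tranche_names[id], name, NAMEDATALEN - 1);
	shared_counter++;			/* publish the new slot last */
	pthread_mutex_unlock(&names_lock);

	return id;
}

static const char *
get_tranche_name(int tranche_id)
{
	/*
	 * Names are only ever appended, so any slot below local_counter is
	 * immutable and can be read without the lock.  Only refresh the local
	 * counter (under the lock) when asked about an ID we haven't seen yet.
	 */
	if (tranche_id >= local_counter)
	{
		pthread_mutex_lock(&names_lock);
		local_counter = shared_counter;
		pthread_mutex_unlock(&names_lock);
	}

	if (tranche_id >= local_counter)
		return NULL;			/* unknown tranche */
	return tranche_names[tranche_id];
}
```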
Committed.
--
nathan
On Mon, Sep 01, 2025 at 06:35:55PM +0530, Rahila Syed wrote:
> Please see below for some comments regarding v20 patch.
Thanks for looking.
> 1. Following unused declaration is present in the lwlock.h file.
>
> extern void LWLockRegisterTranche(int tranche_id, const char *tranche_name);
Whoops,
On Mon, Sep 01, 2025 at 10:18:46AM +, Bertrand Drouvot wrote:
> Changes look good.
Thanks for looking.
> Not directly related, but I think that we can get rid of:
>
> size = add_size(size, LWLOCK_PADDED_SIZE);
>
> in LWLockShmemSize() and of:
>
> ptr += LWLOCK_PADDED_SIZE - ((uintptr_t) pt
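For context, a standalone sketch of the pad-for-alignment idiom the quoted fragments come from: the size estimate reserves one extra LWLOCK_PADDED_SIZE of slack, and the pointer is then advanced so the LWLock array starts on a padded-size boundary (a rough reconstruction, not the verbatim lwlock.c code):

```c
#include <stdint.h>

#define LWLOCK_PADDED_SIZE 128		/* illustrative power-of-two stride */

/*
 * Advance a raw shared-memory pointer so the array that follows starts on
 * an LWLOCK_PADDED_SIZE boundary.  Note that an already-aligned pointer is
 * still advanced by a full stride, which is why the size estimate has to
 * reserve LWLOCK_PADDED_SIZE of slack up front.
 */
static char *
align_lwlock_array(char *ptr)
{
	ptr += LWLOCK_PADDED_SIZE - ((uintptr_t) ptr) % LWLOCK_PADDED_SIZE;
	return ptr;
}
```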
Hi,
> I've also attached a rebased patch that addresses all the latest feedback.
> A reworked version of the test patch is also included, but that's mostly
> intended for CI purposes and is still not intended for commit (yet).
>
>
Please see below for some comments regarding v20 patch.
1. Follow
Hi,
On Sat, Aug 30, 2025 at 09:14:46AM -0500, Nathan Bossart wrote:
> On Fri, Aug 29, 2025 at 09:51:38PM -0500, Nathan Bossart wrote:
> > I've also attached a rebased patch that addresses all the latest feedback.
> > A reworked version of the test patch is also included, but that's mostly
> > inte
>> I've also attached a rebased patch that addresses all the latest feedback.
>> A reworked version of the test patch is also included, but that's mostly
>> intended for CI purposes and is still not intended for commit (yet).
> And here's an attempt at fixing the alignment problems revealed by cfb
On Fri, Aug 29, 2025 at 09:51:38PM -0500, Nathan Bossart wrote:
> I've also attached a rebased patch that addresses all the latest feedback.
> A reworked version of the test patch is also included, but that's mostly
> intended for CI purposes and is still not intended for commit (yet).
And here's
I've committed the DSM registry changes. I apologize for the lackluster
commit message. My attempts at including all the details ended up being
super wordy and hard to read, so I decided to keep it terse and leave out
some of the context.
I've also attached a rebased patch that addresses all the
On Thu, Aug 28, 2025 at 05:53:23PM -0500, Sami Imseih wrote:
>> I think this patch set will require reworking the "GetNamedLWLockTranche
>> crashes on Windows in normal backend" patch [0], but AFAICT we can easily
>> adjust it to scan through NamedLWLockTrancheNames instead.
>
> Except, we will ne
> On Thu, Aug 28, 2025 at 05:53:23PM -0500, Sami Imseih wrote:
> > Just a few things that were discussed earlier, that I incorporated now.
> >
> > 1/ We should be checking that tranche_name is NOT NULL when
> > LWLockNewTrancheId or RequestNamedLWLockTranche is called.
>
> Right, if not strlen() do
Hi,
On Thu, Aug 28, 2025 at 05:53:23PM -0500, Sami Imseih wrote:
> Just a few things that were discussed earlier, that I incorporated now.
>
> 1/ We should be checking that tranche_name is NOT NULL when
> LWLockNewTrancheId or RequestNamedLWLockTranche is called.
Right, if not strlen() does segf
> * I modified the array of tranche names to be a char ** to simplify
> lookups. I also changed how it is first initialized in CreateLWLocks() a
> bit.
That works also.
> * I've left out the tests for now. Those are great for the development
> phase, but I'm not completely sold on committ
I've spent some time getting this prepared for commit, so apologies if I am
steamrolling over some of the latest discussion points. The majority of
what I've changed amounts to what I'd characterize as relatively minor
editorialization, but I'd imagine reasonable people could disagree. The
bigger
> The check has to be done before the strlen() call; if not, it segfaults:
I don't know what I was thinking there :( silly mistake.
It was also missed in RequestNamedLWLockTranche.
> Most of the places where NAMEDATALEN is mentioned in sgml files also mention
> it's 64 bytes. Should we do the same
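A minimal sketch of the ordering point being made: the NULL test must come before any strlen() (or other dereference) of the name. The helper and its return convention are illustrative, not the actual lwlock.c error reporting:

```c
#include <string.h>

#define NAMEDATALEN 64

/* illustrative validation helper; returns 0 on success, -1 on error */
static int
validate_tranche_name(const char *tranche_name)
{
	/* the NULL check must come first -- strlen(NULL) would crash */
	if (tranche_name == NULL)
		return -1;				/* "tranche name cannot be NULL" */

	/* only now is it safe to look at the length */
	if (strlen(tranche_name) >= NAMEDATALEN)
		return -1;				/* "tranche name too long" */

	return 0;
}
```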
Hi,
On Wed, Aug 27, 2025 at 02:13:39PM -0500, Sami Imseih wrote:
> Thanks for reviewing!
>
> > === 1
> >
> > We need to check if tranche_name is NULL and report an error if that's the
> > case.
> > If not, strlen() would segfault.
>
> Added an error. Good call. The error message follows previou
Thanks for reviewing!
> === 1
>
> We need to check if tranche_name is NULL and report an error if that's the
> case.
> If not, strlen() would segfault.
Added an error. Good call. The error message follows the previously used
convention.
```
+ if (!tranche_name)
+ elog(ERROR, "tr
Hi,
On Tue, Aug 26, 2025 at 05:50:34PM -0500, Sami Imseih wrote:
> fixed the issues mentioned above in v13.
>
> > We probably need to do the sprintf/strcpy before LWLockNewTrancheId().
> > Also, I'm thinking we should just use the same tranche for both the DSA and
> > the dshash table [0] to evad
> > On Mon, Aug 25, 2025 at 04:59:41PM -0500, Sami Imseih wrote:
> > > hmm, can we really avoid a shared lock when reading from shared memory?
> > > considering access for both reads and writes can be concurrent to shared
> > > memory. We are also taking an exclusive lock when writing a new tranche
fixed the issues mentioned above in v13.
> We probably need to do the sprintf/strcpy before LWLockNewTrancheId().
> Also, I'm thinking we should just use the same tranche for both the DSA and
> the dshash table [0] to evade the DSMR_DSA_TRANCHE_SUFFIX problem, i.e.,
> those tranche names potential
On Tue, Aug 26, 2025 at 02:56:22PM -0500, Sami Imseih wrote:
> Here is v12 that replaces the LWLock to access the shared memory with a
> ShmemLock and implements a local counter.
This looks much closer to what I was imagining.
/* Initialize the LWLock tranche for the DSA. */
-
> On Mon, Aug 25, 2025 at 04:59:41PM -0500, Sami Imseih wrote:
> > hmm, can we really avoid a shared lock when reading from shared memory?
> > considering access for both reads and writes can be concurrent to shared
> > memory. We are also taking an exclusive lock when writing a new tranche.
>
> We
On Mon, Aug 25, 2025 at 04:59:41PM -0500, Sami Imseih wrote:
> hmm, can we really avoid a shared lock when reading from shared memory?
> considering access for both reads and writes can be concurrent to shared
> memory. We are also taking an exclusive lock when writing a new tranche.
We probably w
> On Mon, Aug 25, 2025 at 04:37:21PM -0500, Sami Imseih wrote:
> > When we lookup from shared array only, we need to take a shared lock
> > every lookup. Acquiring that lock is what I am trying to avoid. You
> > are saying it's not worth optimizing that part, correct?
>
> Why do we need a shared lo
On Mon, Aug 25, 2025 at 04:37:21PM -0500, Sami Imseih wrote:
> When we lookup from shared array only, we need to take a shared lock
> every lookup. Acquiring that lock is what I am trying to avoid. You
> are saying it's not worth optimizing that part, correct?
Why do we need a shared lock here? I
> On Fri, Aug 22, 2025 at 03:01:53PM -0500, Sami Imseih wrote:
> > I kept the local array to serve consecutive reads and to avoid having to
> > take a shared lock on shared memory every time GetLWTrancheName is
> > called. A new LWLock to protect this array is required.
>
> I'm not seeing why we ne
On Fri, Aug 22, 2025 at 03:01:53PM -0500, Sami Imseih wrote:
> I kept the local array to serve consecutive reads and to avoid having to
> take a shared lock on shared memory every time GetLWTrancheName is
> called. A new LWLock to protect this array is required.
I'm not seeing why we need this cac
On Fri, Aug 22, 2025 at 05:33:55PM -0400, Tom Lane wrote:
> Sami Imseih writes:
>> One point I did not make earlier is that the tranche name lengths will
>> need to be as long as we allow in dsm_registry.c.
>
>> #define DSMR_NAME_LEN 128
>
> Huh. Why is that different from NAMEDATALEN in the fi
Sami Imseih writes:
> One point I did not make earlier is that the tranche name lengths will
> need to be as long as we allow in dsm_registry.c.
> #define DSMR_NAME_LEN 128
Huh. Why is that different from NAMEDATALEN in the first place?
regards, tom lane
On Fri, Aug 22, 2025 at 3:01 PM Sami Imseih wrote:
>
> >> If there is agreement on setting limits, may I propose
> >> 1024 tranches and NAMEDATALEN. Both seem reasonably sufficient.
>
> > Let's proceed with that approach for now. We can worry about the exact
> > limits once this is closer to comm
>> If there is agreement on setting limits, may I propose
>> 1024 tranches and NAMEDATALEN. Both seem reasonably sufficient.
> Let's proceed with that approach for now. We can worry about the exact
> limits once this is closer to commit.
v11 implements the fixed size shared memory as discussed.
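A standalone sketch of what the fixed-size layout amounts to, assuming the limits proposed upthread (NAMEDATALEN-capped names, a fixed maximum number of tranches); the type and field names are illustrative:

```c
#define NAMEDATALEN        64		/* proposed cap on tranche name length */
#define MAX_NAMED_TRANCHES 1024		/* proposed cap on registered tranches */

/*
 * With both limits fixed, the whole table can live in plain (static) shared
 * memory: no DSA, no per-name pointers, just an array and a counter.
 */
typedef struct TrancheNameTable
{
	int		count;									/* number of names in use */
	char	names[MAX_NAMED_TRANCHES][NAMEDATALEN];	/* indexed by tranche slot */
} TrancheNameTable;
```

At these limits the table is roughly 64 KB of static shared memory, sized once at startup.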
On Tue, Aug 19, 2025 at 04:16:26PM -0500, Sami Imseih wrote:
> If there is agreement on setting limits, may I propose
> 1024 tranches and NAMEDATALEN. Both seem reasonably sufficient.
Let's proceed with that approach for now. We can worry about the exact
limits once this is closer to commit.
--
> On Tue, Aug 19, 2025 at 03:52:33PM -0500, Sami Imseih wrote:
> > If we limit the tranche name to NAMEDATALEN and also limit the
> > number of tranches an extension can register, we can put this
> > all in static shared memory (We would still need to have a backend local
> > cache to allow lookups
On Tue, Aug 19, 2025 at 03:52:33PM -0500, Sami Imseih wrote:
> If we limit the tranche name to NAMEDATALEN and also limit the
> number of tranches an extension can register, we can put this
> all in static shared memory (We would still need to have a backend local
> cache to allow lookups to avoid
> Nathan Bossart writes:
> > On Tue, Aug 19, 2025 at 02:37:19PM -0400, Andres Freund wrote:
> >> Sure, but we don't need to support a large number of tranches. Just make
> >> it,
> >> idk, 128 entries long. Adding a dynamically allocated dsm to every server
> >> seems like a waste - every shared m
Nathan Bossart writes:
> On Tue, Aug 19, 2025 at 02:37:19PM -0400, Andres Freund wrote:
>> Sure, but we don't need to support a large number of tranches. Just make it,
>> idk, 128 entries long. Adding a dynamically allocated dsm to every server
>> seems like a waste - every shared mapping makes for
On Tue, Aug 19, 2025 at 02:37:19PM -0400, Andres Freund wrote:
> On 2025-08-19 13:31:35 -0500, Nathan Bossart wrote:
>> On Tue, Aug 19, 2025 at 02:06:50PM -0400, Andres Freund wrote:
>> > Possibly stupid question - is it really worth having a dynamic structure
>> > here?
>> > The number of tranche
Hi,
On 2025-08-19 13:31:35 -0500, Nathan Bossart wrote:
> On Tue, Aug 19, 2025 at 02:06:50PM -0400, Andres Freund wrote:
> > Possibly stupid question - is it really worth having a dynamic structure
> > here?
> > The number of tranches is strictly bound, it seems like it'd be simpler to
> > have a
On Tue, Aug 19, 2025 at 02:06:50PM -0400, Andres Freund wrote:
> Possibly stupid question - is it really worth having a dynamic structure here?
> The number of tranches is strictly bound, it seems like it'd be simpler to
> have an array of tranche names in shared memory.
Tranches can be allocated po
Hi,
On 2025-08-19 12:29:14 -0500, Nathan Bossart wrote:
> On Tue, Aug 19, 2025 at 08:09:53AM +, Bertrand Drouvot wrote:
> > On Mon, Aug 18, 2025 at 05:53:44PM -0500, Sami Imseih wrote:
> >> > (or some other shmem-based
> >> > data structure we have yet to introduce, like a dslist/dsarray).
> >
On Tue, Aug 19, 2025 at 08:09:53AM +, Bertrand Drouvot wrote:
> On Mon, Aug 18, 2025 at 05:53:44PM -0500, Sami Imseih wrote:
>> > (or some other shmem-based
>> > data structure we have yet to introduce, like a dslist/dsarray).
>>
>> This will be an interesting API to invest time in, if there c
Hi,
On Mon, Aug 18, 2025 at 05:53:44PM -0500, Sami Imseih wrote:
> > I've been staring at the latest patch for a bit, and I'm a bit concerned
> > at how much complexity it adds.
>
> The complexity is the fact we have to track a dsa_pointer that points
> to an array of dsa_pointers that track the
> I've been staring at the latest patch for a bit, and I'm a bit concerned
> at how much complexity it adds.
The complexity is the fact we have to track a dsa_pointer that points
to an array of dsa_pointers that track the tranche names. We also
do have to copy the array of dsa_pointers into a new
On Mon, Aug 18, 2025 at 01:06:42PM -0500, Sami Imseih wrote:
> Attached is v10,
I've been staring at the latest patch for a bit, and I'm a bit concerned
at how much complexity it adds. I think it's a good idea to keep a local
array of tranche names indexed by tranche ID, but the code for managing
> see v9
Sorry about the quick follow-up; I was not happy with one of the comment
updates in SyncLWLockTrancheNames. I updated it.
Attached is v10,
--
Sami
v10-0001-Implement-a-DSA-for-LWLock-tranche-names.patch
Description: Binary data
v10-0002-Add-tests-for-LWLock-tranche-names-DSA.patch
D
> 1 ===
>
> +} LWLockTrancheNamesShmem;
>
> +} LWLockTrancheNamesStruct;
>
> Add LWLockTrancheNamesShmem and LWLockTrancheNamesStruct to
> src/tools/pgindent/typedefs.list?
done.
> 2 ===
>
> Maybe a comment before the above structs definitions to explain
Hi,
On Sat, Aug 16, 2025 at 10:18:05PM -0500, Sami Imseih wrote:
> Attached is v8.
Thanks for the new version!
A few random comments about 0001:
1 ===
+} LWLockTrancheNamesShmem;
+} LWLockTrancheNamesStruct;
Add LWLockTrancheNamesShmem and LWLockTran
>> See v7.
I fixed an earlier issue with Windows, which was due to not initializing the
shared memory inside
```
#ifdef EXEC_BACKEND
extern void AttachSharedMemoryStructs(void);
#endif
```
But then I found another one after. LWLockTrancheNames gets forked on Linux,
but of course that will not ha
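A rough sketch of the difference: with fork() the child inherits the postmaster's pointer into shared memory, but in EXEC_BACKEND builds (and on Windows) the child re-execs, so the pointer has to be re-established from the attach path. ShmemInitStruct is the real helper for looking up a named shared struct; the struct and function names below are illustrative and reuse the TrancheNameTable type from the earlier sketch:

```c
/* process-local pointer into the shared tranche-name table */
static TrancheNameTable *TrancheNames = NULL;

/*
 * Runs in every child process.  In a fork()-based build the value assigned
 * by the postmaster is inherited and this lookup just finds the existing
 * struct; after exec() the static is NULL and the lookup is what restores it.
 */
static void
TrancheNamesAttach(void)
{
	bool		found;

	TrancheNames = ShmemInitStruct("Tranche Name Table",
								   sizeof(TrancheNameTable),
								   &found);
}
```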
> So we could do something like:
>
> int i = 0;
> while (i < LWLockTrancheNames.shmem->allocated &&
>!DsaPointerIsValid(shared_ptrs[i]))
> {
> i++;
> }
I found a different approach without tracking an additional counter.
Then sync stops when an invalid pointer is found after at
Hi,
On Tue, Aug 12, 2025 at 04:16:48PM -0500, Sami Imseih wrote:
> I spoke offline with Bertrand and discussed the synchronization of
> the local memory from shared memory. After that discussion it is clear
> we don't need to track the highest used index in shared memory. Because
> the tranche_id
> I haven't followed the latest discussion, but I took a look at the patch.
>
> + It is possible to use a tranche_id that was not retrieved
> + using LWLockNewTrancheId, but this is not recommended.
> + The ID may clash with an already registered tranche name, or the specifi
I haven't followed the latest discussion, but I took a look at the patch.
+ It is possible to use a tranche_id that was not retrieved
+ using LWLockNewTrancheId, but this is not recommended.
+ The ID may clash with an already registered tranche name, or the specified
+ name
Hi,
Attached v6 which addresses the feedback from the last review.
1/ Rahila raised a point about the necessity to allocate new dsa pointers
for tranche names when copying (during resize). That is correct. We can simply
memcpy those dsa pointers at the time of copy. All we need to
do is free the
>From this discussion, and the fact that the tranche name could come from
either local or shared memory, I think we should have tests.
So, I am working on adding tests using INJECTION_POINTS.
Some of the tests I have in mind now are:
1/ Does the shared memory grow correctly,
2/ Is the tranche nam
Thanks for testing!
> Why is it necessary to allocate a new dsa_pointer for tranche names that are
> the same size and then
> free the old one?
> Is there a reason we can't just assign new_ptrs[i] = old_ptrs[i]?
Fair point. I will update that in the next rev. We don't need to free the
existing tran
Thanks for testing!
> which is what we expect.
>
> But if I look at LWLockTrancheNames.local (for the process that queried
> pg_stat_activity):
>
> (gdb) p LWLockTrancheNames.local[0]
> $1 = 0x0
> (gdb) p LWLockTrancheNames.local[1]
> $2 = 0x7b759a81b038 "BDT_Lck"
>
> It looks like there is an in
Hi,
I've begun reviewing this patch and have a few questions listed below:
1. + if (i < LWLockTrancheNames.shmem->allocated &&
DsaPointerIsValid(old_ptrs[i]))
Should an assert be used for the second condition instead?
Since for i < LWLockTrancheNames.shmem->allocated, the dsa pointer is
expect
Hi,
On Tue, Aug 05, 2025 at 11:24:04PM -0500, Sami Imseih wrote:
> > I'll fix this in the next rev.
>
> v5 fixes the 2 issues found above.
Thanks!
> For the issue that was throwing " segment that has been freed",
> indeed we should be freeing LWLockTrancheNames.shmem->list_ptr,
Yeah.
> I'll fix this in the next rev.
v5 fixes the 2 issues found above.
For the issue that was throwing " segment that has been freed",
indeed we should be freeing LWLockTrancheNames.shmem->list_ptr,
list_ptr is a dsa_pointer that stores an array of dsa_pointers, which
then made me realize
Thanks for reviewing!
> Issue 1 --
>
> If I register enough tranches to go to:
>
> + /* Resize if needed */
> + if (LWLockTrancheNames.shmem->count >=
> LWLockTrancheNames.shmem->allocated)
> + {
> + newalloc =
> pg_nextpower2_32(Ma
Hi,
On Mon, Aug 04, 2025 at 10:47:45PM -0500, Sami Imseih wrote:
> > > > With a local hash table, I don't think it's necessary to introduce new
> > > > code for managing
> > > > a DSA based list of tranche names as is done in v3. We can go back to
> > > > storing the shared
> > > tranche names in
> > > With a local hash table, I don't think it's necessary to introduce new
> > > code for managing
> > > a DSA based list of tranche names as is done in v3. We can go back to
> > > storing the shared
> > > tranche names in dshash.
> > >
> > > What do you think?
> >
> > My first thought is that a p
> On Mon, Aug 04, 2025 at 11:32:19AM -0500, Sami Imseih wrote:
> > With a local hash table, I don't think it's necessary to introduce new
> > code for managing
> > a DSA based list of tranche names as is done in v3. We can go back to
> > storing the shared
> > tranche names in dshash.
> >
> > What d
On Mon, Aug 04, 2025 at 11:32:19AM -0500, Sami Imseih wrote:
> With a local hash table, I don't think it's necessary to introduce new
> code for managing
> a DSA based list of tranche names as is done in v3. We can go back to
> storing the shared
> tranche names in dshash.
>
> What do you think?
M
> > > I think we could add a local backend copy that stays up to date with the
> > > DSA. One idea would be to use an atomic counter to track the number of
> > > entries in the DSA and compare it with a local backend counter whenever
> > > the
> > > tranche name lookup occurs. If the atomic counte
> > I think we could add a local backend copy that stays up to date with the
> > DSA. One idea would be to use an atomic counter to track the number of
> > entries in the DSA and compare it with a local backend counter whenever the
> > tranche name lookup occurs. If the atomic counter is higher (si
Hi,
On Fri, Aug 01, 2025 at 01:15:24PM -0500, Sami Imseih wrote:
> I think we could add a local backend copy that stays up to date with the
> DSA. One idea would be to use an atomic counter to track the number of
> entries in the DSA and compare it with a local backend counter whenever the
> tranc
> > Also, I suspect that there might be some concerns about the API changes.
> > While it should be a very easy fix, that seems likely to break a lot of
> > extensions.
> > I don't know if it's possible to make this stuff backward
> > compatible, and I also don't know if we really want to, as that'
> > > Unlike the local list of tranche names, appending to and searching the
> > > shared memory list requires an LWLock; in exclusive mode to append, and
> > > shared mode to search.
> >
> > The thing that stood out the most to me is how much more expensive looking
> > up the lock name is. At the
Hi,
On Thu, Jul 31, 2025 at 04:24:38PM -0500, Nathan Bossart wrote:
> I've attached a rebased version of the patch.
>
> On Mon, Jul 21, 2025 at 11:26:44PM -0500, Sami Imseih wrote:
> > Unlike the local list of tranche names, appending to and searching the
> > shared memory list requires an LWLock
I've attached a rebased version of the patch.
On Mon, Jul 21, 2025 at 11:26:44PM -0500, Sami Imseih wrote:
> Unlike the local list of tranche names, appending to and searching the
> shared memory list requires an LWLock; in exclusive mode to append, and
> shared mode to search.
The thing that sto
> >> I was imagining putting the array in one big DSA allocation instead of
> >> carting around a pointer for each tranche name. (Sorry, I realize I am
> >> hand-waving over some of the details.)
> >
> > I understood it like this. Here is a sketch:
> >
> > ```
> > dsa_pointer p;
> >
> > dsa = dsa_
> > Hi,
> >
> > If a dshash table is used to store tranche names and IDs, where would the
> > tranche name for this table
> > be registered?
>
> I guess it could be a new BuiltinTrancheId for this dsa but not sure what
> Nathan
> and Sami have in mind.
Yes, it will be a BuiltinTrancheId for a sha
Hi,
On Tue, Jul 15, 2025 at 12:59:04PM +0530, Rahila Syed wrote:
> Hi,
>
> If a dshash table is used to store tranche names and IDs, where would the
> tranche name for this table
> be registered?
I guess it could be a new BuiltinTrancheId for this dsa but not sure what Nathan
and Sami have in mi
On Tue, Jul 15, 2025 at 12:06:00PM -0500, Sami Imseih wrote:
> On Tue, Jul 15, 2025 at 11:57 AM Nathan Bossart
> wrote:
>> I was imagining putting the array in one big DSA allocation instead of
>> carting around a pointer for each tranche name. (Sorry, I realize I am
>> hand-waving over some of t
On Tue, Jul 15, 2025 at 11:57 AM Nathan Bossart
wrote:
>
> On Tue, Jul 15, 2025 at 11:52:19AM -0500, Sami Imseih wrote:
> >> Another random thought: I worry that the dshash approach might be quite a
> >> bit slower, and IIUC we just need to map an integer to a string. Maybe we
> >> should just us
On Tue, Jul 15, 2025 at 11:52:19AM -0500, Sami Imseih wrote:
>> Another random thought: I worry that the dshash approach might be quite a
>> bit slower, and IIUC we just need to map an integer to a string. Maybe we
>> should just use a DSA for LWLockTrancheNames. IOW we'd leave it as a char**
>> b
> Another random thought: I worry that the dshash approach might be quite a
> bit slower, and IIUC we just need to map an integer to a string. Maybe we
> should just use a DSA for LWLockTrancheNames. IOW we'd leave it as a char**
> but put it in shared memory.
To use DSA just for this purpose, we
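Roughly what "one big DSA allocation" means here: a single chunk of fixed-width slots, so only one dsa_pointer needs to be tracked instead of one per name. This uses the real dsa_allocate0/dsa_get_address API but an illustrative layout, and assumes a dsa_area *area has already been created and attached:

```c
#define MAX_TRANCHES 256
#define NAMEDATALEN  64

/*
 * Store all names in one DSA chunk of MAX_TRANCHES fixed-width slots.
 */
static dsa_pointer
create_name_array(dsa_area *area)
{
	return dsa_allocate0(area, (size_t) MAX_TRANCHES * NAMEDATALEN);
}

static void
set_tranche_name(dsa_area *area, dsa_pointer names_dp,
				 int slot, const char *name)
{
	char	   *names = dsa_get_address(area, names_dp);

	/* slot i lives at a fixed offset: no per-name allocation or pointer */
	strlcpy(names + slot * NAMEDATALEN, name, NAMEDATALEN);
}
```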
Hi,
On Fri, Jul 11, 2025 at 04:32:13PM -0500, Sami Imseih wrote:
> > > and instead reuse the existing static hash table, which is
> > > capped at 128 custom wait events:
> > >
> > > ```
> > > #define WAIT_EVENT_CUSTOM_HASH_MAX_SIZE 128
> > > ```
> >
> > That's probably still high enough, thoughts?
Hi,
>
> Since DSM is not available during postmaster, if we were to create a DSA
> segment in place, similar to what's done in StatsShmemInit(), we would also
> need to ensure that the initial shared memory is sized appropriately. This
> is
> because it would need to be large enough to accommodat
On Mon, Jul 14, 2025 at 03:45:02PM -0500, Sami Imseih wrote:
>> Ah, I missed the problem with postmaster. Could we have the first backend
>> that needs to access the table be responsible for creating it and
>> populating it with the built-in/requested-at-startup entries?
>
> We can certainly main
> Ah, I missed the problem with postmaster. Could we have the first backend
> that needs to access the table be responsible for creating it and
> populating it with the built-in/requested-at-startup entries?
We can certainly maintain a flag in the shared state that is set once
the first backend l
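A simplified standalone sketch of that idea: a flag in the shared state records whether the table has been populated, and the first backend that needs it does the one-time work under the lock. The mutex and plain bool are stand-ins; real shared-memory code would use the shared lock and, for the unlocked fast path, an atomic read or barrier:

```c
#include <pthread.h>
#include <stdbool.h>

/* stand-ins for the shared state and its lock */
static bool            table_populated = false;
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

static void
populate_builtin_entries(void)
{
	/* fill in built-in and requested-at-startup tranche names here */
}

static void
ensure_table_populated(void)
{
	/* unlocked fast path; see the ordering caveat above for real code */
	if (table_populated)
		return;

	pthread_mutex_lock(&table_lock);
	if (!table_populated)
	{
		populate_builtin_entries();
		table_populated = true;
	}
	pthread_mutex_unlock(&table_lock);
}
```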
Nathan Bossart writes:
> Ah, I missed the problem with postmaster. Could we have the first backend
> that needs to access the table be responsible for creating it and
> populating it with the built-in/requested-at-startup entries? Also, is
> there any chance that postmaster might need to access
On Mon, Jul 14, 2025 at 02:34:00PM -0500, Sami Imseih wrote:
>> Why do we need three different places for the lock names? Is there a
>> reason we can't put it all in shared memory?
>
> The real reason I felt it was better to keep three separate locations is that
> it allows for a clear separation
>> Attached is a proof of concept that does not alter the
>> LWLockRegisterTranche API.
> IMHO we should consider modifying the API, because right now you have to
> call LWLockRegisterTranche() in each backend. Why not accept the name as
> an argument for LWLockNewTrancheId() and only require it t
On Fri, Jul 11, 2025 at 04:32:13PM -0500, Sami Imseih wrote:
> Now, what I think will be a good API is to provide an alternative to
> LWLockRegisterTranche,
> which now takes in both a tranche ID and tranche_name. The new API I propose
> is
> LWLockRegisterTrancheWaitEventCustom which takes only a
> > and instead reuse the existing static hash table, which is
> > capped at 128 custom wait events:
> >
> > ```
> > #define WAIT_EVENT_CUSTOM_HASH_MAX_SIZE 128
> > ```
>
> That's probably still high enough, thoughts?
I have no reason to believe that this number could be too low.
I am not aware of
On Wed, Jul 09, 2025 at 04:39:48PM -0500, Sami Imseih wrote:
> Attached is a proof of concept that does not alter the
> LWLockRegisterTranche API.
IMHO we should consider modifying the API, because right now you have to
call LWLockRegisterTranche() in each backend. Why not accept the name as
an a
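For context, the two usage patterns being contrasted, as a minimal sketch. The no-argument LWLockNewTrancheId() plus per-backend LWLockRegisterTranche() is the existing API; the one-argument form is the proposal being discussed here, not a committed signature:

```c
static int	my_tranche_id;

/*
 * Existing API: the ID is handed out once, but the ID-to-name mapping is
 * process-local, so every backend that uses the lock must repeat this call.
 */
static void
register_my_tranche(void)
{
	my_tranche_id = LWLockNewTrancheId();
	LWLockRegisterTranche(my_tranche_id, "my_extension");
}

/*
 * Proposed form (hypothetical signature): the name is stored centrally when
 * the ID is assigned, so no per-backend registration is needed.
 *
 *		my_tranche_id = LWLockNewTrancheId("my_extension");
 */
```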
Hi,
On Thu, Jul 10, 2025 at 04:34:34PM -0500, Sami Imseih wrote:
> Thanks for the feedback!
>
> > > Attached is a proof of concept that does not alter the
> > > LWLockRegisterTranche API. Instead, it detects when a registration is
> > > performed by a normal backend and stores the tranche name in
Thanks for the feedback!
> > Attached is a proof of concept that does not alter the
> > LWLockRegisterTranche API. Instead, it detects when a registration is
> > performed by a normal backend and stores the tranche name in shared memory,
> > using a dshash keyed by tranche ID. Tranche name lookup
Hi,
On Wed, Jul 09, 2025 at 04:39:48PM -0500, Sami Imseih wrote:
> Hi,
>
> When querying pg_stat_activity, the function pgstat_get_wait_event is
> called, which internally uses GetLWLockIdentifier and GetLWTrancheName
> to map the LWLock to its tranche name. If the backend does not recognize
> th
Hi,
This is a follow-up to a discussion started in [0].
LWLocks in PostgreSQL are categorized into tranches, and the tranche name
appears as the wait_event in pg_stat_activity. There are both built-in
tranche names and tranche names that can be registered by extensions using
RequestNamedLWLockTra
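For reference, the extension-side pattern this refers to, as a minimal sketch (the extension name and lock count are illustrative, and the hook registration in _PG_init() is omitted):

```c
static LWLockPadded *my_locks;

/* shmem_request_hook: ask for named LWLocks before shared memory is sized */
static void
my_shmem_request(void)
{
	RequestNamedLWLockTranche("my_extension", 4);
}

/* shmem_startup_hook: fetch the locks by the same tranche name */
static void
my_shmem_startup(void)
{
	my_locks = GetNamedLWLockTranche("my_extension");
	/* my_locks[0].lock .. my_locks[3].lock are now usable; "my_extension"
	 * is what shows up as the wait_event in pg_stat_activity */
}
```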