Hi,

On 2019-02-19 12:52:08 +1300, David Rowley wrote:
> On Tue, 19 Feb 2019 at 12:42, Tom Lane <t...@sss.pgh.pa.us> wrote:
> > My own thought about how to improve this situation was just to destroy
> > and recreate LockMethodLocalHash at transaction end (or start)
> > if its size exceeded $some-value.  Leaving it permanently bloated seems
> > like possibly a bad idea, even if we get rid of all the hash_seq_searches
> > on it.
> 
> That seems like a good idea. Although, it would be good to know that
> it didn't add too much overhead dropping and recreating the table when
> every transaction happened to obtain more locks than $some-value.  If
> it did, then maybe we could track the average locks taken by recent
> transactions and just ditch the table after the locks are released if
> the locks held by the last transaction exceeded the average *
> 1.something. No need to go near shared memory to do that.
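
For concreteness, the heuristic David sketches above might look roughly
like the following standalone C sketch. The names (running_avg_locks,
LOCKS_AVG_SMOOTHING, BLOAT_FACTOR, rebuild_local_lock_table) are made up
for illustration, not existing PostgreSQL symbols, and the constants are
just placeholders for the "$some-value" / "1.something" knobs:

    #include <stdbool.h>
    #include <stdio.h>

    #define LOCKS_AVG_SMOOTHING 0.125  /* weight of the newest sample */
    #define BLOAT_FACTOR        1.5    /* the "average * 1.something" */

    /* backend-local running average of locks taken per transaction */
    static double running_avg_locks = 16.0;  /* arbitrary initial guess */

    /*
     * Decide, at transaction end after the locks are released, whether
     * to drop and recreate the local lock hash table.  Tom's simpler
     * variant would just compare against a fixed $some-value instead.
     */
    static bool
    rebuild_local_lock_table(long locks_this_xact)
    {
        bool bloated = locks_this_xact > running_avg_locks * BLOAT_FACTOR;

        /* exponential moving average: cheap, no shared memory needed */
        running_avg_locks += LOCKS_AVG_SMOOTHING *
            ((double) locks_this_xact - running_avg_locks);

        return bloated;
    }

    int
    main(void)
    {
        long samples[] = {10, 12, 8, 5000, 9, 11};

        for (int i = 0; i < 6; i++)
            printf("xact took %ld locks -> rebuild: %s\n",
                   samples[i],
                   rebuild_local_lock_table(samples[i]) ? "yes" : "no");
        return 0;
    }
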

Isn't a large portion of the benefit of this patch going to be mooted by
the locking improvements discussed in the other threads? I.e. there
hopefully won't be many low-overhead cases where we acquire a lot of
locks and release them very soon afterwards. Sure, for DDL etc. we
will, but I can't see that mattering from a performance POV?

I'm not against doing something like Tom proposes, but heuristics with
magic constants like this tend to age poorly and are hard to tune well
across systems.

Greetings,

Andres Freund