> On 12 Sep 2022, at 23:01, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> Andrey Borodin <x4...@yandex-team.ru> writes:
>>> On 12 Sep 2022, at 18:18, Julien Rouhaud <rjuju...@gmail.com> wrote:
>>> That being
>>> said I don't know if adding a timeout would be too expensive for the lwlock
>>> infrastructure.
>
> I want to object fiercely to loading down LWLock with anything like
> timeouts. It's supposed to be "lightweight". If we get away from
> that we're just going to find ourselves needing another lighter-weight
> lock mechanism.
Thanks for clarifying this, Tom. I agree that spreading timeout-based
algorithms around is not a good thing. And when all you have is a hammer,
everything looks like a nail, so it would be tempting to use timeouts here and
there.
> On 12 Sep 2022, at 23:08, Julien Rouhaud <rjuju...@gmail.com> wrote:
>
> That's what I was thinking, so it looks like a show-stopper for the proposed
> patch.
So, the only remaining option to make things configurable is a switch between
waiting and waitless locks.
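
For illustration, a minimal sketch of what such a switch could look like,
assuming an invented boolean GUC (pg_stat_statements.waitless_reads) and an
invented helper name on top of the existing lwlock primitives; callers would
have to cope with a false return by erroring out or returning an empty result:

    #include "storage/lwlock.h"

    static bool pgss_waitless_reads = false;    /* hypothetical GUC */

    /*
     * Acquire pgss->lock in shared mode.  In waitless mode we try only once
     * and report failure instead of queueing behind an exclusive waiter.
     */
    static bool
    pgss_lock_shared(void)
    {
        if (pgss_waitless_reads)
            return LWLockConditionalAcquire(pgss->lock, LW_SHARED);

        LWLockAcquire(pgss->lock, LW_SHARED);
        return true;
    }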
The other way is refactoring towards a partitioned hashtable, namely dshash.
But I don't see how partitioned locking can save us from a locking disaster.
The problem is caused by a read of the whole pgss view colliding with reset()
or GC. Both of these operations touch every partition, so they will conflict
anyway, with the same result: a time-consuming read of each partition will
block the exclusive lock requested by reset(), and the queued exclusive lock
will in turn block any further reads from the hashtable.
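
To make that concrete, an illustrative sketch (not dshash's real API;
NUM_PARTITIONS and partition_locks are invented names) of why per-partition
locks do not remove the conflict:

    #include "storage/lwlock.h"

    #define NUM_PARTITIONS 32

    static LWLock *partition_locks[NUM_PARTITIONS];

    /* Reading the whole view still visits every partition. */
    static void
    scan_all_partitions(void)
    {
        for (int i = 0; i < NUM_PARTITIONS; i++)
        {
            LWLockAcquire(partition_locks[i], LW_SHARED);
            /*
             * Slow scan of partition i.  A concurrent reset() needs
             * LW_EXCLUSIVE on this same lock; once it queues behind us,
             * later shared acquirers queue behind it in turn.
             */
            LWLockRelease(partition_locks[i]);
        }
    }

So the per-partition picture is the same as with the single lock, just
repeated once per partition.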
Best regards, Andrey Borodin.