On Tue, Sep 13, 2022 at 10:38:13AM +0500, Andrey Borodin wrote:
> 
> And the other way is refactoring towards a partitioned hashtable, namely
> dshash. But I don't see how partitioned locking can save us from a locking
> disaster. The problem is caused by reading the whole pgss view colliding
> with reset() or GC.

If you store the query texts in DSM, you won't have a query text file to
maintain and the GC problem will disappear.

> Both these operations deal with each partition - they will
> conflict anyway, with the same result. A time-consuming read of each
> partition will prevent an exclusive lock by reset(), and a queued exclusive
> lock will prevent any reads from the hashtable.

Conflicts would still be possible, but they would be less likely and
shorter-lived, since the whole dshash is never locked globally, only one
partition at a time (except when the dshash is resized, but those locks aren't
held for long and resizing isn't frequent).

But the biggest improvements should come from reusing the pgstats
infrastructure.  I've only had a glance at it so I don't know much about it,
but it has a per-backend hashtable that caches some information to avoid too
many accesses to the shared hash table, and a mechanism to accumulate entries
and apply them in batch updates.
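
The general shape is something like this (purely illustrative, not pgstats'
actual API): a backend-local HTAB accumulates pending counters, and an
occasional flush pushes them into the shared dshash, taking each partition
lock only briefly instead of hitting shared memory on every execution.

#include "postgres.h"
#include "lib/dshash.h"
#include "utils/hsearch.h"

/* Assumed here: the shared dshash entries use this same layout. */
typedef struct PendingEntry
{
	uint64		queryid;		/* hash key */
	int64		calls;
	double		total_time;
} PendingEntry;

static HTAB *pending_stats = NULL;	/* backend-local, no locking needed */

static void
pending_init(void)
{
	HASHCTL		ctl;

	ctl.keysize = sizeof(uint64);
	ctl.entrysize = sizeof(PendingEntry);
	pending_stats = hash_create("pending pgss stats", 128, &ctl,
								HASH_ELEM | HASH_BLOBS);
}

/* Called at query end: touches only backend-local memory. */
static void
pending_count(uint64 queryid, double elapsed_ms)
{
	bool		found;
	PendingEntry *e = hash_search(pending_stats, &queryid, HASH_ENTER, &found);

	if (!found)
	{
		e->calls = 0;
		e->total_time = 0.0;
	}
	e->calls++;
	e->total_time += elapsed_ms;
}

/*
 * Called occasionally (e.g. at transaction end): one short partition lock per
 * accumulated entry, rather than one shared-table access per execution.
 */
static void
pending_flush(dshash_table *shared_hash)
{
	HASH_SEQ_STATUS scan;
	PendingEntry *e;

	hash_seq_init(&scan, pending_stats);
	while ((e = hash_seq_search(&scan)) != NULL)
	{
		bool		found;
		PendingEntry *shared = dshash_find_or_insert(shared_hash,
													 &e->queryid, &found);

		if (!found)
		{
			shared->calls = 0;
			shared->total_time = 0.0;
		}
		shared->calls += e->calls;
		shared->total_time += e->total_time;
		dshash_release_lock(shared_hash, shared);
	}

	/* start a fresh local accumulation */
	hash_destroy(pending_stats);
	pending_init();
}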

