> On 30 Nov 2021, at 17:19, Simon Riggs <simon.ri...@enterprisedb.com> wrote:
> 
> On Mon, 30 Aug 2021 at 11:25, Andrey Borodin <x4...@yandex-team.ru> wrote:
>> 
>> Hi Pengcheng!
>> 
>> You are solving important problem, thank you!
>> 
>>> On 30 Aug 2021, at 13:43, Pengchengliu <pengcheng...@tju.edu.cn> wrote:
>>> 
>>> To resolve this performance problem, we are thinking about a solution that
>>> caches the subtrans SLRU in a local cache.
>>> First we query the parent transaction id from the subtrans SLRU and copy the
>>> SLRU page into a local cache page.
>>> After that, if we need to query the parent transaction id again, we can read
>>> it from the local cache directly.
>> 
>> A copy of the SLRU in each backend's cache can consume a lot of memory.
> 
> Yes, copying the whole SLRU into local cache seems overkill.
> 
>> Why create a copy if we can optimise shared representation of SLRU?
> 
> transam.c uses a single-item cache to prevent thrashing from repeated
> lookups, which reduces problems with shared access to SLRUs.
> multixact.c has something similar.
> 
> I notice that subtrans.c doesn't have this, but it could easily do so.
> Patch attached, which seems separate from other attempts at tuning.
I think this definitely makes sense to do.
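
For anyone following along, the idea is just a one-entry memoization of the
last lookup, the same trick transam.c plays with cachedFetchXid. Below is a
minimal standalone sketch of the concept, not the actual patch: every name in
it is invented for illustration, and the real code would of course live in
subtrans.c and go through the SLRU machinery.

    #include <stdio.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;
    #define InvalidTransactionId ((TransactionId) 0)

    /* Stand-in for the expensive shared pg_subtrans SLRU lookup. */
    static TransactionId
    slru_lookup_parent(TransactionId xid)
    {
        printf("SLRU lookup for xid %u\n", (unsigned) xid);
        return xid - 1;                 /* dummy "parent" */
    }

    /* One-entry cache: the last xid looked up and its parent. */
    static TransactionId cached_xid = InvalidTransactionId;
    static TransactionId cached_parent = InvalidTransactionId;

    static TransactionId
    get_parent_cached(TransactionId xid)
    {
        if (xid == cached_xid)
            return cached_parent;       /* hit: no shared access at all */

        cached_parent = slru_lookup_parent(xid);
        cached_xid = xid;
        return cached_parent;
    }

    int
    main(void)
    {
        /* Three lookups of the same xid touch the "SLRU" only once. */
        for (int i = 0; i < 3; i++)
            printf("parent(1000) = %u\n", (unsigned) get_parent_cached(1000));
        return 0;
    }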


> On review, I think it is also possible to update subtrans ONLY if
> someone uses more than PGPROC_MAX_CACHED_SUBXIDS subtransactions.
> This would make subtrans much smaller and avoid the one-entry-per-page
> pattern, which is a major source of caching pressure.
> This would mean some light changes in GetSnapshotData().
> Let me know if that seems interesting as well?
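
If I read the proposal correctly, it amounts to something like the standalone
sketch below. Everything in it is invented for illustration except the
PGPROC_MAX_CACHED_SUBXIDS constant; the real change would sit around
AssignTransactionId() and GetSnapshotData(), and the sketch glosses over the
snapshot side entirely.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;
    #define PGPROC_MAX_CACHED_SUBXIDS 64    /* same value as in proc.h */

    static TransactionId subxid_cache[PGPROC_MAX_CACHED_SUBXIDS];
    static int  nsubxids = 0;
    static bool suboverflowed = false;

    /* Stand-in for SubTransSetParent(): the write we would like to skip. */
    static void
    subtrans_set_parent(TransactionId xid, TransactionId parent)
    {
        printf("pg_subtrans write: %u -> %u\n",
               (unsigned) xid, (unsigned) parent);
    }

    static void
    assign_subxid(TransactionId xid, TransactionId parent)
    {
        if (nsubxids < PGPROC_MAX_CACHED_SUBXIDS)
        {
            /* Fits in the per-backend subxid cache; under the proposal,
             * no pg_subtrans write happens for it. */
            subxid_cache[nsubxids++] = xid;
        }
        else
        {
            /* Overflowed: readers must be able to find the parent in
             * pg_subtrans, so record it there. */
            suboverflowed = true;
            subtrans_set_parent(xid, parent);
        }
    }

    int
    main(void)
    {
        /* Only subxids past the 64th generate pg_subtrans traffic. */
        for (TransactionId xid = 2; xid <= 70; xid++)
            assign_subxid(xid, 1);
        printf("suboverflowed: %s\n", suboverflowed ? "yes" : "no");
        return 0;
    }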

I'm afraid of unexpected performance degradation. The system runs fine, you 
provision a VM with a certain amount of vCPU/RAM, and then some backend uses a 
little more than 64 subtransactions and the whole system gets stuck. Or would it 
affect only the backends using more than 64 subtransactions?

Best regards, Andrey Borodin.


