> Right, because of the reason I posted [1].
>
> I have updated the patch, which takes the same approach. It passes my CI.
> Could you please apply it on 17.2 and test?
>
> [1]:
> https://www.postgresql.org/message-id/OSCPR01MB14966B646506E0C9B81B3A4CFF5022%40OSCPR01MB14966.jpnprd01.prod.outlook.com
I came up with an alternate approach. In this approach, we keep track of
the WAL segments the transaction spans. This lets us iterate through only
the required files during cleanup.
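
A minimal standalone sketch of that idea (the names, struct, and file-name
pattern below are hypothetical, not taken from the actual patch): remember
the first and last WAL segment a spilled transaction touched, then unlink
only the spill files in that range during cleanup, instead of probing every
possible segment or scanning the whole slot directory.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical bookkeeping, updated each time the transaction spills. */
typedef struct SpilledTxn
{
    uint32_t    xid;        /* transaction id */
    uint64_t    first_seg;  /* first WAL segment with a spill file */
    uint64_t    last_seg;   /* last WAL segment with a spill file */
} SpilledTxn;

/*
 * Remove only the spill files in the segment range this transaction
 * actually used, rather than iterating over every possible segment.
 */
static void
cleanup_spill_files(const char *slot_dir, const SpilledTxn *txn)
{
    char        path[1024];

    for (uint64_t seg = txn->first_seg; seg <= txn->last_seg; seg++)
    {
        /* illustrative file-name pattern only */
        snprintf(path, sizeof(path), "%s/xid-%u-seg-%llu.spill",
                 slot_dir, (unsigned) txn->xid, (unsigned long long) seg);

        if (unlink(path) != 0 && errno != ENOENT)
            perror(path);   /* a missing file inside the range is fine */
    }
}

int
main(void)
{
    SpilledTxn  txn = {.xid = 12345, .first_seg = 100, .last_seg = 103};

    cleanup_spill_files("pg_replslot/my_slot", &txn);
    return 0;
}

The point is only that tracking the segment range makes cleanup proportional
to the number of files the transaction actually created.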
>
On my machine, I am running the test case you provided in [1]. It generates
~1.9 million spill files.
Can you set the "streaming" parameter to on on your system [1]? It allows
the in-progress transactions to be streamed to the subscriber side. I feel
this can avoid the case where many .spill files accumulate on the publisher
side.
Another approach is to tune the logical_decoding_work_mem setting.
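
For reference, a rough sketch of both knobs (the subscription name and the
1GB value are just placeholders):

-- On the subscriber: stream large in-progress transactions instead of
-- having the publisher spill them to disk.
ALTER SUBSCRIPTION my_sub SET (streaming = on);

-- On the publisher: allow more memory per decoding session before
-- changes are spilled to .spill files (the default is 64MB).
ALTER SYSTEM SET logical_decoding_work_mem = '1GB';
SELECT pg_reload_conf();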
Hi,
Thanks for sharing the test case.
Unfortunately, I do not have a powerful machine that can generate such a
large number of spill files. But I created a patch as per your suggestion
in point (2) in thread [1]. Can you test with this patch on your machine?
With this patch, instead of calling unlink ...
... subtransactions
- Although the transaction lasted 17s, one can see that the decoding was a bit
late (40 seconds), but
- it spent an extra 200s to delete the spill files!
On Wed, 6 Nov 2024 at 13:07, RECHTÉ Marc wrote:
>
> Hello,
>
> For some unknown reason (probably a very big transaction at the source), ...
Hello,
For some unknown reason (probably a very big transaction at the source), we
experienced a logical decoding breakdown due to a timeout from the subscriber
side (either wal_receiver_timeout or a connection drop by network equipment
due to inactivity).
The problem is that, due to that failure, ...