Thanks, Tomas.

> Theoretically, yes, we could make max_changes_in_memory a GUC, but it's
> not clear to me how would that help 12/13, because there's ~0% chance
> we'd backpatch that ...

What I meant is not about back-patching anything. Shouldn't the fix happen on
the publisher side?

Consider the case where the publisher is PostgreSQL v14 ~ master (with
streaming support) and the subscriber is 12/13, where streaming is not
supported: the publisher would still be at risk of OOM. The same thing can
happen when a v14 ~ master publisher is paired with some open-source CDC tool
as the subscriber.
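
For context, the cap being discussed is the hard-coded constant in
reorderbuffer.c; quoting from memory, it looks roughly like this (the exact
comment wording may differ):

    /* reorderbuffer.c (paraphrased): per-transaction cap on the number of
     * changes kept in memory while spilling to / restoring from disk. */
    static const Size max_changes_in_memory = 4096;

When spilled transactions are replayed, up to this many changes are read back
into memory per transaction, so with wide tuples or many transactions being
restored at once the total can still exceed what the machine has, regardless
of the subscriber's version.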

> Wouldn't it be better to have adjusts the value automatically, somehow?
> For example, before restoring the changes, we could count the number of
> transactions, and set it to 4096/ntransactions or something like that.
> Or do something smarter by estimating tuple size, to count it in the
> logical_decoding_work_mem budget.

Yes, I think this issue should have been solved when
logical_decoding_work_mem was initially introduced, but it wasn't. There may
be reasons for that, such as the sub-transaction handling described in the
header comment of reorderbuffer.c.
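
To illustrate what the automatic adjustment you describe could look like,
here is a minimal sketch (the helper name and its caller are invented for
illustration, this is not actual reorderbuffer.c code): divide the existing
4096-change budget across the transactions about to be restored, as in your
4096/ntransactions idea.

    /*
     * Hypothetical helper, not actual PostgreSQL code: scale the
     * per-transaction restore cap by the number of transactions that will
     * be restored, never going below one change per transaction.
     */
    static Size
    compute_restore_cap(Size ntransactions)
    {
        if (ntransactions == 0)
            return max_changes_in_memory;   /* nothing to scale by */

        return Max(max_changes_in_memory / ntransactions, (Size) 1);
    }

A smarter variant could instead track the estimated size of the restored
tuples and stop once the logical_decoding_work_mem budget is reached, which
would also cover the wide-tuple case.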

regards, Jingtang
