On 7/22/24 05:28, Jingtang Zhang wrote:
> Thanks, Tomas.
>
>> Theoretically, yes, we could make max_changes_in_memory a GUC, but it's
>> not clear to me how that would help 12/13, because there's ~0% chance
>> we'd backpatch that ...
>
> What I mean is not about back-patch work; the change would have to
> happen on the publisher side.
>
> Consider when the publisher is a PostgreSQL v14+~master (with streaming
> support) and the subscriber is a 12/13 where streaming is not supported:
> the publisher would still have the risk of OOM. The same thing happens
> when we use a v14+~master publisher and whatever open source CDC tool as
> the subscriber.
>
Yes, you're right - if we're talking about mixed setups with just the
subscriber running the old version, then it would benefit from this
improvement even without backpatching.

>> Wouldn't it be better to have something that adjusts the value
>> automatically, somehow? For example, before restoring the changes, we
>> could count the number of transactions, and set it to 4096/ntransactions
>> or something like that. Or do something smarter by estimating tuple
>> size, to count it in the logical_decoding_work_mem budget.
>
> Yes, I think this issue should have been solved when
> logical_decoding_work_mem was initially introduced, but it wasn't. There
> could be some reasons, like the sub-transaction stuff commented on in the
> header of reorderbuffer.c.
>

True, but few patches are perfect/complete from V1. There's often stuff
that's unlikely to happen, left for future improvements. And this is one
of those cases, I believe. The fact that the comment even mentions this
is a sign the developers considered it, and chose to ignore it.

That being said, I think it'd be nice to improve this, and I'm willing to
take a look if someone prepares a patch (a rough sketch of the
4096/ntransactions idea is at the end of this mail). But I don't think
making this a GUC is the right approach - it's the simplest patch, but
every new GUC just makes the database harder to manage.

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
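
As a rough illustration of the 4096/ntransactions idea above - not an
actual patch, just a sketch of the shape it could take - the restore path
could compute a per-transaction cap instead of using the fixed
max_changes_in_memory. The function name and the ntransactions argument
are made up here, and a real version would also have to deal with
subtransactions and the logical_decoding_work_mem budget:

/*
 * Sketch only, not an actual patch: spread the existing 4096-change
 * budget across the transactions being restored, instead of allowing
 * 4096 changes in memory for each of them.  Assumes the usual
 * reorderbuffer.c context (Size and Max come from c.h); the function
 * name and the ntransactions argument are illustrative.
 */
static Size
ReorderBufferMaxChangesPerXact(Size ntransactions)
{
	Size		cap;

	if (ntransactions <= 1)
		return 4096;			/* current fixed max_changes_in_memory */

	cap = 4096 / ntransactions;

	/* keep a small minimum so restoring can still make progress */
	return Max(cap, 16);
}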