Hi,

On 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:
> I performed the same test in pg11 and reproduced the issue on the
> commit prior to a4ccc1cef5a04 (Generational memory allocator).
>
> ulimit -s 1024
> ulimit -v 300000
>
> wal_level = logical
> max_replication_slots = 4
>
> [...]
> After that, I applied the "Generational memory allocator" patch and
> that solved the issue. From the error message, it is evident that the
> underlying code is trying to allocate a MaxTupleSize memory for each
> tuple. So, I re-introduced the following lines (which are removed by
> a4ccc1cef5a04) on top of the patch:
>
> --- a/src/backend/replication/logical/reorderbuffer.c
> +++ b/src/backend/replication/logical/reorderbuffer.c
> @@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)
>
>  	alloc_len = tuple_len + SizeofHeapTupleHeader;
>
> +	if (alloc_len < MaxHeapTupleSize)
> +		alloc_len = MaxHeapTupleSize;

Maybe I'm being slow here - but what does this actually prove? Before
the generation contexts were introduced, we avoided fragmentation
(which would make things unusably slow) using a brute-force method,
namely forcing all tuple allocations to be of the same/maximum size.
Which means that yes, we'll need more memory than necessary.

Do you think you see anything but that here?

It's good that the situation is better now, but I don't think this
means we necessarily need to backpatch something nontrivial?

Greetings,

Andres Freund