On Sun, Jan 12, 2020 at 9:51 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
> > On Sat, Jan 11, 2020 at 10:53:57PM -0500, Tom Lane wrote:
> >> remind me where the win came from, exactly?
>
> > Well, the problem is that in 10 we allocate tuple data in the main
> > memory ReorderBuffer context, and when the transaction gets decoded we
> > pfree() it. But in AllocSet that only moves the data to the freelists,
> > it does not release it entirely. So with the right allocation pattern
> > (sufficiently diverse chunk sizes) this can easily result in allocation
> > of large amount of memory that is never released.
>
> > I don't know if this is what's happening in this particular test, but I
> > wouldn't be surprised by it.
>
> Nah, don't think I believe that: the test inserts a bunch of tuples,
> but they look like they will all be *exactly* the same size.
>
> CREATE TABLE decoding_test(x integer, y text);
> ...
>
> FOR i IN 1..10 LOOP
>   BEGIN
>     INSERT INTO decoding_test(x) SELECT generate_series(1,5000);
>   EXCEPTION
>     when division_by_zero then perform 'dummy';
>   END;
>
I performed the same test in pg11 and reproduced the issue on the commit
prior to a4ccc1cef5a04 (Generational memory allocator).
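As an aside, Tomas's point above about AllocSet freelists can be sketched
with a toy model (this is only an illustration of the size-class freelist
idea, not PostgreSQL's actual allocator; all names here are made up):

```python
# Toy model of an AllocSet-style allocator: pfree() puts a chunk on a
# per-size freelist, but memory already obtained is never given back.
class ToyAllocSet:
    def __init__(self):
        self.freelists = {}       # chunk size -> list of recycled chunks
        self.total_allocated = 0  # bytes ever requested from the OS

    def palloc(self, size):
        bucket = self.freelists.get(size)
        if bucket:
            return bucket.pop()       # reuse a freed chunk of this size
        self.total_allocated += size  # otherwise grow the context
        return bytearray(size)

    def pfree(self, chunk):
        # The chunk lands on a freelist; total_allocated never shrinks.
        self.freelists.setdefault(len(chunk), []).append(chunk)

ctx = ToyAllocSet()
# Diverse chunk sizes: each size has its own freelist, so memory freed
# at one size cannot satisfy a request for another size.
for size in range(64, 8256, 64):
    ctx.pfree(ctx.palloc(size))
print(ctx.total_allocated)  # stays high even though everything was freed
```

With sufficiently diverse sizes, every allocation grows the context and
nothing is ever reused, which is the retention pattern described above.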
The settings used were:

ulimit -s 1024
ulimit -v 300000

wal_level = logical
max_replication_slots = 4

And I executed the following code snippet (shared by Amit Khandekar
earlier in the thread):

SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');
CREATE TABLE decoding_test(x integer, y text);

do $$
BEGIN
  FOR i IN 1..10 LOOP
    BEGIN
      INSERT INTO decoding_test(x) SELECT generate_series(1,3000);
    EXCEPTION
      when division_by_zero then perform 'dummy';
    END;
  END LOOP;
END $$;

SELECT data from pg_logical_slot_get_changes('test_slot', NULL, NULL) LIMIT 10;

I got the following error:

ERROR:  out of memory
DETAIL:  Failed on request of size 8208.

After that, I applied the "Generational memory allocator" patch and that
solved the issue.

From the error message, it is evident that the underlying code is trying
to allocate MaxTupleSize memory for each tuple. So, I re-introduced the
following lines (which were removed by a4ccc1cef5a04) on top of the patch:

--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)

        alloc_len = tuple_len + SizeofHeapTupleHeader;

+       if (alloc_len < MaxHeapTupleSize)
+               alloc_len = MaxHeapTupleSize;

And the issue got reproduced with the same error:

WARNING:  problem in Generation Tuples: number of free chunks 0 in
block 0x7fe9e9e74010 exceeds 1018 allocated
.....
ERROR:  out of memory
DETAIL:  Failed on request of size 8208.

I don't understand the code well enough to comment on whether we can
back-patch only this part of the code. But this seems to allocate a huge
amount of memory per chunk although the tuple is small. Thoughts?

--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com
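A back-of-the-envelope sketch of why padding every tuple to
MaxHeapTupleSize blows past the ulimit above (the 8160-byte figure is an
assumption for 8 kB blocks, not a measured value, and other decoding
overheads are ignored):

```python
# Rough scale of decoded-tuple memory when each small tuple is padded
# to MaxHeapTupleSize. 8160 assumes 8 kB blocks minus page header.
MAX_HEAP_TUPLE_SIZE = 8160

loops = 10             # outer FOR loop in the test
rows_per_loop = 3000   # generate_series(1,3000)

tuples = loops * rows_per_loop
padded_bytes = tuples * MAX_HEAP_TUPLE_SIZE
print(padded_bytes // (1024 * 1024), "MB padded")
```

That is on the order of 230 MB for tuples that are only a few bytes each,
uncomfortably close to the ulimit -v 300000 (kB) ceiling even before any
other per-change overhead is counted.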