On Tue, Oct 1, 2019 at 7:21 PM Tomas Vondra <tomas.von...@2ndquadrant.com>
wrote:

> On Tue, Oct 01, 2019 at 06:55:52PM +0530, Amit Kapila wrote:
> >
> >On further testing, I found that the patch seems to have problems with
> >toast.  Consider below scenario:
> >Session-1
> >Create table large_text(t1 text);
> >INSERT INTO large_text
> >SELECT (SELECT string_agg('x', ',')
> >FROM generate_series(1, 1000000)) FROM generate_series(1, 1000);
> >
> >Session-2
> >SELECT * FROM pg_create_logical_replication_slot('regression_slot',
> >'test_decoding');
> >SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
> >*--kaboom*
> >
> >The second statement in Session-2 leads to a crash.
> >
>
> OK, thanks for the report - will investigate.
>

It was an assertion failure in ReorderBufferCleanupTXN at the line below:
+ /* Check we're not mixing changes from different transactions. */
+ Assert(change->txn == txn);



> >Other than that, I am not sure whether the changes related to spilling to
> >disk after logical_decoding_work_mem work for toast tables, as I couldn't
> >hit that code for the toast table case, but I might be missing something.
> >As mentioned previously, I feel there should be some way to test whether
> >this patch works for the cases it claims to handle.  As of now, I have to
> >check via debugging.  Let me know if there is any way I can test this.
> >
>
> That's one of the reasons why I proposed to move the statistics (which
> say how many transactions / bytes were spilled to disk) forward from a
> later patch in the series. I don't think there's a better way.
>
>
I like that idea, but I think you need to split that patch so it includes
only the stats related to spilling.  It would be easier to review if you
could prepare it atop
0001-Add-logical_decoding_work_mem-to-limit-ReorderBuffer.
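
For reference, once such spill statistics are exposed, verifying the
spill-to-disk path could look roughly like the query below.  The view and
column names here are purely illustrative (the patch may well name them
differently); the point is that a reviewer could check the counters after
running pg_logical_slot_get_changes on a large transaction instead of
stepping through with a debugger:

```sql
-- Hypothetical: inspect per-slot spill counters after decoding a large
-- transaction.  View/column names are illustrative, not the patch's actual
-- names.
SELECT slot_name, spill_txns, spill_count, spill_bytes
FROM pg_stat_replication_slots
WHERE slot_name = 'regression_slot';
```

Non-zero counters after decoding would confirm that logical_decoding_work_mem
was exceeded and changes were spilled.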

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
