On Fri, Jul 10, 2020 at 9:21 AM Ajin Cherian <itsa...@gmail.com> wrote:
>
>
>
> On Thu, Jul 9, 2020 at 1:30 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
>>
>>
>> > I think if the GUC is set then maybe we can bypass this check so that
>> > it can try to stream every single change?
>> >
>>
>> Yeah and probably we need to do something for the check "while
>> (rb->size >= logical_decoding_work_mem * 1024L)" as well.
>>
>>
> I have made this change, as discussed, and the regression tests seem to run
> fine. I have added a debug message that records the streaming for each
> transaction number. I also had to bypass certain asserts in
> ReorderBufferLargestTopTXN() as now we are going through the entire list of
> transactions and not just picking the biggest transaction.

So if always_stream_logical is true then we always go for streaming even if
the memory limit is not reached, and that is good.  And if
always_stream_logical is set then we also set ctx->streaming=true, which is
good as well.  So I don't think we need to change this part of the code,
because once we bypass the memory limit and set ctx->streaming=true, the
streaming option will always be selected unless it is impossible.  However,
with your changes, if it cannot pick the largest top-level txn for streaming
because of incomplete toast changes, it will hang forever in the while loop;
in that case it should fall back to spilling (see the sketch after the
snippet below).

while (rb->size >= logical_decoding_work_mem * 1024L)
{
	/*
	 * Pick the largest transaction (or subtransaction) and evict it from
	 * memory by streaming, if supported.  Otherwise, spill to disk.
	 */
	if (ReorderBufferCanStream(rb) &&
		(txn = ReorderBufferLargestTopTXN(rb)) != NULL)
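
Something like the following is what I have in mind for the fallback (again
only a sketch; I am assuming the existing spill path via
ReorderBufferLargestTXN() and ReorderBufferSerializeTXN() stays as it is, and
that ReorderBufferStreamTXN() is the streaming counterpart in your patch):

while (rb->size >= logical_decoding_work_mem * 1024L)
{
	/*
	 * Pick the largest transaction (or subtransaction) and evict it from
	 * memory by streaming, if supported.  Otherwise, spill to disk.
	 */
	if (ReorderBufferCanStream(rb) &&
		(txn = ReorderBufferLargestTopTXN(rb)) != NULL)
	{
		/* stream the largest streamable top-level transaction */
		ReorderBufferStreamTXN(rb, txn);
	}
	else
	{
		/*
		 * Streaming is not possible, or no top-level transaction could be
		 * picked (e.g. because of incomplete toast changes), so spill the
		 * largest transaction to disk; otherwise this loop could never
		 * make progress.
		 */
		txn = ReorderBufferLargestTXN(rb);
		ReorderBufferSerializeTXN(rb, txn);
	}
}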


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

