On Tue, May 19, 2020 at 4:33 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> On Tue, May 19, 2020 at 3:31 PM Dilip Kumar <dilipbal...@gmail.com> wrote:
> >
> > On Tue, May 19, 2020 at 2:34 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> > >
> > > On Mon, May 18, 2020 at 5:57 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> > > >
> > > > 3.
> > > > + /*
> > > > + * If streaming is enable and we have serialized this transaction because
> > > > + * it had incomplete tuple. So if now we have got the complete tuple we
> > > > + * can stream it.
> > > > + */
> > > > + if (ReorderBufferCanStream(rb) && can_stream && rbtxn_is_serialized(toptxn)
> > > > + && !(rbtxn_has_toast_insert(txn)) && !(rbtxn_has_spec_insert(txn)))
> > > > + {
> > > >
> > > > This comment is just saying what you are doing in the if-check. I
> > > > think you need to explain the rationale behind it. I don't like the
> > > > variable name 'can_stream' because it matches ReorderBufferCanStream
> > > > whereas it is for a different purpose; how about naming it
> > > > 'change_complete' or something like that? The check has many
> > > > conditions, can we move it to a separate function to make the code
> > > > here look clean?
> > >
> > > Do we really need this? Immediately after this check, we are calling
> > > ReorderBufferCheckMemoryLimit which will anyway stream the changes if
> > > required.
> >
> > Actually, ReorderBufferCheckMemoryLimit is only meant for checking
> > whether we need to stream the changes due to the memory limit. But
> > suppose that when the memory limit was exceeded we could not stream the
> > transaction because it had only an incomplete toast insert, so we
> > serialized it. Now we get the tuple which makes the changes complete,
> > but we are no longer crossing the memory limit because the changes were
> > already serialized. So I am not sure whether it is a good idea to
> > stream the transaction as soon as we get the complete changes, or
> > whether we should wait till the next time the memory limit is exceeded
> > and at that time select the suitable candidate.
>
> I think it is better to wait till the next time we exceed the memory threshold.
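(Returning to the point above about the check having many conditions: written
as a standalone helper it would look roughly like the following. This is only
a sketch and not code from the patch; the function name is invented, and it
assumes the rbtxn_* macros quoted above, with the flags tracked on the
top-level transaction as in the v20-0010 hunk further down.)

/*
 * Sketch only: the completeness test from the quoted if-check written as a
 * separate helper, as suggested.  ReorderBufferTxnChangeComplete is a
 * made-up name used here purely for illustration.
 */
#include "postgres.h"
#include "replication/reorderbuffer.h"

static bool
ReorderBufferTxnChangeComplete(ReorderBuffer *rb, ReorderBufferTXN *toptxn)
{
    /* Streaming must be enabled for this reorder buffer at all. */
    if (!ReorderBufferCanStream(rb))
        return false;

    /* Only interesting if the transaction was serialized earlier. */
    if (!rbtxn_is_serialized(toptxn))
        return false;

    /*
     * The changes are complete again only when there is neither a toast
     * insert waiting for its main-table tuple nor a speculative insert
     * waiting for its spec-confirm record.
     */
    return !rbtxn_has_toast_insert(toptxn) && !rbtxn_has_spec_insert(toptxn);
}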
Okay, done this way.

> > Ideally, we are in streaming mode and the transaction being serialized
> > means it was already a candidate for streaming but could not be
> > streamed due to the incomplete changes, so shouldn't we stream it
> > immediately as soon as its changes are complete, even though now we
> > are within the memory limit?
>
> The only time we need to stream or spill is when we exceed the memory
> threshold. In the above case, it is possible that next time there is
> some other candidate transaction that we can stream.
>
> > > Another comment on v20-0010-Bugfix-handling-of-incomplete-toast-tuple:
> > >
> > > + else if (rbtxn_has_toast_insert(txn) &&
> > > + ChangeIsInsertOrUpdate(change->action))
> > > + {
> > > + toptxn->txn_flags &= ~RBTXN_HAS_TOAST_INSERT;
> > > + can_stream = true;
> > > + }
> > > ..
> > > +#define ChangeIsInsertOrUpdate(action) \
> > > + (((action) == REORDER_BUFFER_CHANGE_INSERT) || \
> > > + ((action) == REORDER_BUFFER_CHANGE_UPDATE) || \
> > > + ((action) == REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT))
> > >
> > > How can we clear the RBTXN_HAS_TOAST_INSERT flag on
> > > REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT action?
> >
> > A partial toast insert means we have inserted into the toast table but
> > not into the main table. So even if it is a spec insert we can form the
> > complete tuple; however, we still cannot stream it because we haven't
> > got the spec_confirm, and for that we are marking another flag. If the
> > insert is a spec insert, the toast inserts will also be spec inserts,
> > and as part of those toast spec inserts we mark the tuple as partial,
> > so clearing that flag should happen when the spec insert is done for
> > the main table, right?
>
> Sounds reasonable.

ok

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
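PS: To spell out the speculative-insert case from the discussion above, the
intended flag handling is roughly the following. This is only a sketch, not
the patch code: the helper name is invented, RBTXN_HAS_SPEC_INSERT is assumed
to be defined in the patch analogously to RBTXN_HAS_TOAST_INSERT, and the
real logic lives in the reorderbuffer change-queueing path.

/*
 * Sketch only: how the "partial change" flags discussed above are intended
 * to behave.  UpdatePartialChangeFlags is a made-up name for illustration.
 */
#include "postgres.h"
#include "replication/reorderbuffer.h"

static void
UpdatePartialChangeFlags(ReorderBufferTXN *toptxn, ReorderBufferChange *change)
{
    switch (change->action)
    {
        case REORDER_BUFFER_CHANGE_INSERT:
        case REORDER_BUFFER_CHANGE_UPDATE:
        case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT:

            /*
             * The main-table tuple has arrived, so the toast chunks queued
             * earlier no longer leave the transaction with a partial change.
             * This is why ChangeIsInsertOrUpdate() also matches the
             * spec-insert action: a spec insert still completes the toasted
             * tuple.
             */
            toptxn->txn_flags &= ~RBTXN_HAS_TOAST_INSERT;

            /*
             * A speculative insert is still not streamable until its
             * spec-confirm record is seen; that is tracked by the separate
             * spec-insert flag (assumed to mirror RBTXN_HAS_TOAST_INSERT).
             */
            if (change->action == REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT)
                toptxn->txn_flags |= RBTXN_HAS_SPEC_INSERT;
            break;

        case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
            /* Speculative insert confirmed; the change is complete again. */
            toptxn->txn_flags &= ~RBTXN_HAS_SPEC_INSERT;
            break;

        default:
            break;
    }
}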