On Tue, Jun 1, 2021 at 8:01 PM Dilip Kumar <dilipbal...@gmail.com> wrote:
>
> The attached patch fixes this by queuing the spec abort change and
> cleaning up the toast hash on spec abort. Currently, in this patch I
> am queuing up all the spec abort changes, but as an optimization we
> can avoid queuing the spec abort for toast tables. For that, we would
> need to log a flag in WAL indicating that the XLH_DELETE_IS_SUPER is
> for a toast relation.
I don't think that is required, especially because we intend to
back-patch this, so I would like to keep such an optimization for
another day.

Few comments:
------------
/*
 * Super deletions are irrelevant for logical decoding, it's driven by the
 * confirmation records.
 */
1. The above comment is not required after your other changes.

/*
 * Either speculative insertion was confirmed, or it was
 * unsuccessful and the record isn't needed anymore.
 */
if (specinsert != NULL)

2. The above comment needs some adjustment.

/*
 * There's a speculative insertion remaining, just clean in up, it
 * can't have been successful, otherwise we'd gotten a confirmation
 * record.
 */
if (specinsert)
{
    ReorderBufferReturnChange(rb, specinsert, true);
    specinsert = NULL;
}

3. Ideally, we should have an Assert here because we shouldn't reach
this point without having cleaned up specinsert. If there is still a
chance of reaching it, we should mention that in the comments.

4.
+ case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
+
+ /*
+ * Abort for speculative insertion arrived.

I think here we should explain why we can't piggyback the cleanup on
the next insert/update/delete.

5. Can we write a test case for it? I guess we don't need to use
multiple sessions if the conflicting record is already present.

Please check whether the same patch works on the back-branches. I
guess this makes the change a bit tricky as it involves decoding a new
message, but I am not sure if there is a better way.

--
With Regards,
Amit Kapila.
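PS: To illustrate comment 3, something along these lines is what I had
in mind. This is just an uncompiled sketch against the existing
reorderbuffer.c code path, not a concrete proposal:

```c
/*
 * By the time we get here, any speculative insertion must already
 * have been resolved by a confirmation or an abort record, so there
 * should be no unresolved specinsert left. If there is a legitimate
 * way to still have one, that should be documented here instead.
 */
Assert(specinsert == NULL);
```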