On Thursday, October 1, 2020 4:52 PM, Tsunakawa-san wrote:
 
> From: Kyotaro Horiguchi <horikyota....@gmail.com>
> > I thought that the advantage of this optimization is that we don't
> > need to visit all buffers?  If we need to run a full-scan for any
> > reason, there's no point in looking up already-visited buffers again.
> > That's just wasteful cycles.  Am I missing something?
> >
> > I don't understand. If we chose the optimized dropping, the reason
> > is that the number of buffer lookups is fewer than a certain threshold.
> > Why do you think that the fork a buffer belongs to is relevant to the
> > criteria?
> 
> I rethought about this, and you certainly have a point, but...  OK, I think I
> understood.  I shouldn't have thought about it in such a complicated way.
> In other words, you're suggesting "Let's simply treat all forks as one
> relation to determine whether to optimize," right?  That is, the code simply
> becomes:
> 
> Sum up the number of buffers to invalidate in all forks;
> if (the cached sizes of all forks are valid &&
>     # of buffers to invalidate < THRESHOLD)
> {
>       do the optimized way;
>       return;
> }
> do the traditional way;
> 
> This will be simple, and I'm +1.
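The quoted decision logic could be sketched in C roughly as below. This is only an illustrative simplification, not the actual patch code: the `ForkInfo` struct, `use_optimized_drop` function, and `BUF_DROP_FULL_SCAN_THRESHOLD` value are all hypothetical names standing in for the real per-fork cached sizes and threshold used in DropRelFileNodeBuffers.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical threshold; the real patch derives its own value. */
#define BUF_DROP_FULL_SCAN_THRESHOLD 32

/* Hypothetical per-fork state: cached-size validity and buffer count. */
typedef struct ForkInfo
{
    bool   size_cached;  /* is the cached relation size for this fork valid? */
    size_t nbuffers;     /* number of buffers to invalidate in this fork */
} ForkInfo;

/*
 * Treat all forks as one relation: take the optimized (per-buffer lookup)
 * path only when every fork's cached size is valid and the total number
 * of buffers to invalidate is below the threshold; otherwise fall back
 * to the traditional full scan of shared buffers.
 */
static bool
use_optimized_drop(const ForkInfo *forks, int nforks)
{
    size_t total = 0;

    for (int i = 0; i < nforks; i++)
    {
        if (!forks[i].size_cached)
            return false;           /* must do the full buffer scan */
        total += forks[i].nbuffers;
    }
    return total < BUF_DROP_FULL_SCAN_THRESHOLD;
}
```

The point of summing across forks first is that the optimization is judged once for the whole relation, rather than per fork, which keeps the decision logic simple.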

This is actually close to the v18 I posted trying Horiguchi-san's approach, but
that patch had a bug. So attached is an updated version (v20) that tries this
approach again. I hope it's bug-free this time.

Regards,
Kirk Jamison
 

Attachment: v20-Optimize-DropRelFileNodeBuffers-during-recovery.patch
Description: v20-Optimize-DropRelFileNodeBuffers-during-recovery.patch

Attachment: v1-Prevent-invalidating-blocks-in-smgrextend-during-recovery.patch
Description: v1-Prevent-invalidating-blocks-in-smgrextend-during-recovery.patch
