On Fri, Nov 27, 2020 at 8:10 AM David Rowley <dgrowle...@gmail.com> wrote:

> Thanks for having another look at this.
>
> On Sun, Nov 22, 2020 at 9:21 PM Andy Fan <zhihui.fan1...@gmail.com> wrote:
> >
> > I'll add 2 more comments.
> >
> > 1. I'd suggest adding Assert(false); in the RC_END_OF_SCAN case to make
> >    the error clearer.
> >
> > case RC_END_OF_SCAN:
> >
> >     /*
> >      * We've already returned NULL for this scan, but just in case
> >      * something calls us again by mistake.
> >      */
> >     return NULL;
>
> This just took some inspiration from nodeMaterial.c, where it says:
>
> /*
>  * If necessary, try to fetch another row from the subplan.
>  *
>  * Note: the eof_underlying state variable exists to short-circuit further
>  * subplan calls.  It's not optional, unfortunately, because some plan
>  * node types are not robust about being called again when they've already
>  * returned NULL.
>  */
>
> I'm not feeling a pressing need to put an Assert(false); in there, as
> it's not what nodeMaterial.c does.  nodeMaterial is nodeResultCache's
> sister node, which can also be seen below Nested Loops.
>
>
OK. Even though I don't quite understand the above yet, I will try to
figure it out by myself. I'm OK with this decision.
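
For my own understanding, here is a minimal standalone sketch of the
pattern the nodeMaterial.c comment describes: an eof_underlying-style
flag that remembers the subplan has already returned NULL, so the node
never pulls it again. All names below (subplan_next, MaterialLikeState,
fetch) are made-up illustrations, not the real executor API:

    #include <stdbool.h>
    #include <stdio.h>

    static int remaining = 3;

    /* Stand-in for ExecProcNode() on the subplan. */
    static const char *
    subplan_next(void)
    {
        return (remaining-- > 0) ? "tuple" : NULL;
    }

    typedef struct MaterialLikeState
    {
        bool eof_underlying;    /* has the subplan already returned NULL? */
    } MaterialLikeState;

    static const char *
    fetch(MaterialLikeState *node)
    {
        const char *row;

        if (node->eof_underlying)
            return NULL;        /* short-circuit: don't poke the subplan */

        row = subplan_next();
        if (row == NULL)
            node->eof_underlying = true;
        return row;
    }

    int
    main(void)
    {
        MaterialLikeState node = {false};
        const char *row;

        while ((row = fetch(&node)) != NULL)
            printf("%s\n", row);

        /* Extra calls are now safe; subplan_next() is never re-entered. */
        printf("after EOF: %s\n", fetch(&node) ? "row!?" : "NULL");
        return 0;
    }

Run standalone, the extra fetch() after EOF just returns NULL without
touching subplan_next() again, which I think is the robustness the
comment is after.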

> > 2. Currently we handle the (!cache_store_tuple(node, outerslot)) case by
> >    setting it to RC_CACHE_BYPASS_MODE. The only reason for the
> >    cache_store_tuple failure is that we can't cache_reduce_memory.  I
> >    guess if cache_reduce_memory failed once, it would not succeed later
> >    (no more tuples can be stored, nothing is changed). So I think we can
> >    record this state and avoid any new cache_reduce_memory call.
> >
> > /*
> >  * If we failed to create the entry or failed to store the
> >  * tuple in the entry, then go into bypass mode.
> >  */
> > if (unlikely(entry == NULL ||
> >              !cache_store_tuple(node, outerslot)))
> >
> > to
> >
> > if (unlikely(entry == NULL || node->memory_cant_be_reduced ||
> >              !cache_store_tuple(node, outerslot)))
>
> The reason for RC_CACHE_BYPASS_MODE is to handle the case where a single
> set of parameters has so many results that it, alone, doesn't fit in the
> cache. We call cache_reduce_memory() whenever we go over our memory
> budget. That function returns false if it was unable to free enough
> memory without removing the "specialkey", which in this case is the
> current cache entry that's being populated. Later, when we're caching
> some entry that isn't quite so large, we still want to be able to cache
> that.  In that case, we'll have removed the remnants of the overly large
> entry that didn't fit, to make way for newer and, hopefully, smaller
> entries. No problem.  I'm not sure why there's a need for another flag
> here.
>
>
Thanks for the explanation. I'm sure I misunderstood this part before.
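
To check that I now follow it, here is a toy, self-contained model of
the behaviour you describe. MEM_LIMIT, reduce_memory, store_tuple and
all the numbers are made up for illustration, not the real
nodeResultCache code: eviction frees other entries first; failure
happens only when the entry currently being filled (your "specialkey")
is alone over budget; and once its remnants are dropped, a later,
smaller entry caches fine, so no extra flag like memory_cant_be_reduced
is needed:

    #include <stdbool.h>
    #include <stdio.h>

    #define MEM_LIMIT 100           /* toy memory budget */

    static int mem_used;            /* total cache memory in use */
    static int evictable_mem;       /* memory held by other, evictable entries */

    /*
     * Evict other entries until we're back under budget.  Return false if
     * that's impossible without evicting the entry currently being built,
     * i.e. it alone exceeds the budget.
     */
    static bool
    reduce_memory(void)
    {
        while (mem_used > MEM_LIMIT && evictable_mem > 0)
        {
            evictable_mem -= 10;    /* evict one old entry */
            mem_used -= 10;
        }
        return mem_used <= MEM_LIMIT;
    }

    /* Add one tuple to the entry being built; false => go to bypass mode. */
    static bool
    store_tuple(int *curr_entry_mem, int tuple_mem)
    {
        *curr_entry_mem += tuple_mem;
        mem_used += tuple_mem;
        return mem_used <= MEM_LIMIT || reduce_memory();
    }

    int
    main(void)
    {
        int curr = 0;
        int i;

        mem_used = evictable_mem = 40;  /* cache already holds entries */

        /* One parameter set with a huge result: caching eventually fails. */
        for (i = 0; i < 20; i++)
        {
            if (!store_tuple(&curr, 10))
            {
                printf("tuple %d: bypass mode for this scan\n", i);
                mem_used -= curr;   /* drop the oversized entry's remnants */
                break;
            }
        }

        /* A later, smaller parameter set caches fine -- no extra flag. */
        curr = 0;
        printf("small entry cached: %s\n",
               store_tuple(&curr, 10) ? "yes" : "no");
        return 0;
    }

The oversized entry drives out everything evictable, fails, and is
dropped; the subsequent small entry still caches, which matches your
point that bypass mode needn't be remembered across parameter sets.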


-- 
Best Regards
Andy Fan
