David Rowley <dgrowle...@gmail.com> writes:
> On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel.steh...@gmail.com> wrote:
>> I found a query that is significantly slower with more memory
> If it was work_mem you increased, it seems strange that the plan would
> switch over to using a Nested Loop / Memoize plan.

Yeah, there's something unexplained there.  I think that the most
probable explanation for the symptoms is that cost_memoize_rescan is
computing some insane value for est_entries, causing ExecInitMemoize
to allocate-and-zero a huge hash table, which ExecEndMemoize then
frees again.  Neither of those steps gets counted into any plan node's
runtime, but EXPLAIN's total execution time will include them.  An
insane value for est_entries could perhaps go along with a cost
misestimate that convinces the planner to include the memoize node
even though it seems pointless.

I spent some time studying cost_memoize_rescan, and the only
conclusions I arrived at were that the variable names are poorly
chosen and the comments are unhelpful.  For instance, one would think
that est_entry_bytes is the expected size of one cache entry, but it
seems to actually be the total space that would be occupied if the
whole input relation were loaded into the cache.  And the
est_cache_entries computation seems nonsensical; if it does make
sense, the comment sure doesn't illuminate why.

So I am quite prepared to buy into the idea that cost_memoize_rescan
is producing bogus answers, but it's hard to tell what it's coming
out with in this example.  Too bad EXPLAIN doesn't print est_entries.

			regards, tom lane
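PS: to make the est_entry_bytes complaint concrete, here is the
arithmetic as I read it, as a toy Python paraphrase.  The names and
formulas below are illustrative guesses at what the code is doing,
not the actual costsize.c code:

```python
import math

def est_entries_guess(tuples, width, ndistinct, hash_mem_bytes):
    """Toy paraphrase of one reading of cost_memoize_rescan's estimate.

    All names here are invented for illustration; this is NOT the
    real PostgreSQL code.
    """
    # Under the reading above, this is the size of the *whole* input
    # relation, despite the name suggesting one cache entry.
    est_entry_bytes = tuples * width

    # "Entries that fit in hash_mem" -- dividing by a whole-relation
    # size makes this collapse toward zero for any sizable input.
    est_cache_entries = math.floor(hash_mem_bytes / est_entry_bytes)

    # Final estimate clamped by the number of distinct parameter values.
    return min(ndistinct, est_cache_entries)

# 1M tuples of 100 bytes gives a 100MB "entry", so 64MB of hash_mem
# supposedly fits zero entries -- hard to square with the variable names.
print(est_entries_guess(1_000_000, 100, 5_000, 64 * 1024 * 1024))
```

If that reading is right, the estimate goes nonsensical in the small
direction for large inputs, which doesn't by itself explain a huge
hash table; it just reinforces that the naming and comments don't
match what the arithmetic appears to compute.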