On Mon, Oct 9, 2023 at 6:25 AM David Rowley <dgrowle...@gmail.com> wrote:
>
> However, it may also be worth you reading over [3] and the ultimate
> reason I changed my mind on that being a good idea. Pushing LIMITs
> below an Append seems quite incomplete when we don't yet push sorts
> below Appends, which is what that patch did.
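For context, the kind of query I have in mind is one where the per-partition paths are already sorted, so only the LIMIT needs to move below the Append. A sketch (hypothetical table and index names):

    -- Range-partitioned table; each partition indexed on the sort key.
    CREATE TABLE orders (id bigint, created_at timestamptz)
        PARTITION BY RANGE (created_at);
    -- ... partitions orders_2022, orders_2023, each with
    --     an index on (created_at) ...

    -- Each child can produce rows in created_at order via its index,
    -- so Append can use presorted child paths. Pushing LIMIT 10 down
    -- bounds each child scan at 10 rows instead of scanning fully.
    SELECT * FROM orders ORDER BY created_at LIMIT 10;

This is only an illustration of the ordered-paths case, not output from the patch.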
When the paths are already ordered according to the ORDER BY specification, pushing the LIMIT down gives the additional benefit of cheaper per-partition scans. Do you think we can proceed along those lines? Later, when we implement Sort pushdown, we can adjust the LIMIT pushdown code accordingly.

> I just was not
> comfortable proceeding with [3] as nodeSort.c holds onto the tuplesort
> until executor shutdown. That'll be done for rescan reasons, but it
> does mean if you pushed Sort below Append that we could have a very
> large number of sort nodes holding onto work_mem all at once. I find
> that a bit scary, especially so given the excessive partitioning cases
> I've seen and worked on recently. I did consider if we maybe could
> adjust nodeSort.c to do tuplesort_end() after the final row. We'd need
> to only do that if we could be somehow certain there were going to be
> no rescans. I don't have a plan on how that would be detected.

We have the same problem with partitionwise join. Have you seen it in the field? I have not seen such reports, but that could be because not many people know that partitionwise join needs to be explicitly turned on. Whatever solution we develop here would solve the problem for partitionwise join as well.

That problem is hard to solve. If there is a real case where LIMIT pushdown helps without also fixing the Sort pushdown case, that might justify proceeding with LIMIT pushdown on its own.

-- 
Best Wishes,
Ashutosh Bapat