On Tue, May 24, 2022 at 07:40:45PM -0400, Tom Lane wrote:
> Bruce Momjian <br...@momjian.us> writes:
> > If the plan output is independent of work_mem,
>
> ... it isn't ...
Good.

> > I always wondered why we
> > didn't just determine the number of simultaneous memory requests in the
> > plan and just allocate accordingly, e.g. if there are four simultaneous
> > memory requests in the plan, each gets work_mem/4.
>
> (1) There are not a predetermined number of allocations.  For example,
> if we do a given join as nestloop+inner index scan, that doesn't require
> any large amount of memory; but if we do it as merge or hash join then
> it will consume memory.

Uh, we know from the plan whether we are doing a nestloop+inner index
scan or a merge or hash join, right?  I was suggesting we look at the
plan before execution and set the proper percentage of work_mem for each
node, along the lines of the sketch below.

> (2) They may not all need the same amount of memory, eg joins might
> be working on different amounts of data.

True, but we could cap it like we do now for work_mem, just as a
percentage of a GUC work_mem total.
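To make that concrete, here is a rough sketch of the kind of
pre-execution pass I am imagining.  None of the types or functions below
exist in the backend --- Plan, consumes_work_mem(), count_mem_nodes(),
and assign_mem() are all made up for illustration:  walk the completed
plan tree, count the nodes that will make large memory requests, and
hand each one an equal share of a total budget.

    /*
     * Hypothetical sketch only; these are stand-in types, not
     * PostgreSQL's, to illustrate dividing a total memory budget
     * among the plan nodes that can consume work_mem.
     */
    #include <stdio.h>
    #include <stddef.h>

    typedef enum { T_SeqScan, T_IndexScan, T_NestLoop, T_MergeJoin,
                   T_HashJoin, T_Sort } NodeTag;

    typedef struct Plan
    {
        NodeTag      tag;
        struct Plan *left;
        struct Plan *right;
        size_t       mem_limit_kb;   /* per-node budget, set below */
    } Plan;

    /* Does this node type make a large memory request at runtime? */
    static int
    consumes_work_mem(NodeTag tag)
    {
        return tag == T_MergeJoin || tag == T_HashJoin || tag == T_Sort;
    }

    /* Recursively count the memory-consuming nodes in the plan tree */
    static int
    count_mem_nodes(Plan *node)
    {
        if (node == NULL)
            return 0;
        return consumes_work_mem(node->tag) +
               count_mem_nodes(node->left) +
               count_mem_nodes(node->right);
    }

    /* Give every memory-consuming node an equal share of the budget */
    static void
    assign_mem(Plan *node, size_t per_node_kb)
    {
        if (node == NULL)
            return;
        if (consumes_work_mem(node->tag))
            node->mem_limit_kb = per_node_kb;
        assign_mem(node->left, per_node_kb);
        assign_mem(node->right, per_node_kb);
    }

    int
    main(void)
    {
        /* HashJoin(Sort(SeqScan), SeqScan): two memory consumers */
        Plan scan1 = {T_SeqScan, NULL, NULL, 0};
        Plan scan2 = {T_SeqScan, NULL, NULL, 0};
        Plan sort  = {T_Sort, &scan1, NULL, 0};
        Plan join  = {T_HashJoin, &sort, &scan2, 0};

        size_t total_kb = 4096;             /* the GUC total, e.g. 4MB */
        int    n = count_mem_nodes(&join);  /* = 2 */

        assign_mem(&join, total_kb / n);    /* each gets 2048 kB */
        printf("%d nodes, %zu kB each\n", n, total_kb / n);
        return 0;
    }

Obviously the real thing would want to weight nodes by the amount of
data they are expected to handle, per your point (2), rather than
splitting the budget evenly.

-- 
  Bruce Momjian  <br...@momjian.us>        https://momjian.us
  EDB                                      https://enterprisedb.com

  Indecision is a decision.  Inaction is an action.  Mark Batterson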