Bruce Momjian <[EMAIL PROTECTED]> writes:
>> I didn't think there was.  just hoping... :-)
> Someone asked for this in Copenhagen, and I said we can't see how to
> do it.  The only idea I had was to give the first requestor 50% of the
> total, then a second query 50% of the remaining memory.  Is that
> better than what we have?

How would you do that --- who's the "first requestor"?  The delay
between planning and execution for prepared statements (including
plpgsql functions) seems to make it impossible to do anything useful in
terms of dynamic allocation of memory.

What would be more reasonable to try for is a per-query upper limit on
space consumption.  That at least avoids any concurrency issues and
reduces it to a pure planning problem.  However, I don't see any really
good way to do that either.  With the bottom-up planning process we
use, the cost of (say) a first-level sort must be assigned before we
know whether any additional sorts or hashes will be needed at upper
levels.

I thought a little bit about assuming that one workspace would be
needed per input relation --- that is, if there are N relations in the
query then set SortMem to TotalQueryMem/N.  But this would severely
penalize plans that need fewer workspaces than that.

Another tack is to let the planner assume SortMem per workspace, but at
executor start (where we could know the number of plan nodes that
actually need workspaces) set the effective SortMem to TotalQueryMem/N.
The trouble with this is that you could end up with a severely
nonoptimal plan, e.g. a sort or hash being done in much too little
space.

			regards, tom lane
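
As an illustration of that last tack, here is a minimal standalone
sketch in C.  It is not PostgreSQL source: the PlanNode layout, the
node tags, count_workspaces(), and the budget numbers are all
hypothetical stand-ins for the real plan tree, just to show the
"count workspaces at executor start, then divide" step.

/*
 * Toy model of dividing a per-query memory budget at executor start:
 * walk a finished plan tree, count the nodes that need a private
 * sort/hash workspace, and give each an equal share.
 */
#include <stdio.h>

typedef enum { NODE_SCAN, NODE_SORT, NODE_HASH, NODE_JOIN } NodeTag;

typedef struct PlanNode
{
    NodeTag tag;
    struct PlanNode *left;      /* outer subplan, or NULL */
    struct PlanNode *right;     /* inner subplan, or NULL */
} PlanNode;

/* Recursively count plan nodes that allocate a private workspace. */
static int
count_workspaces(const PlanNode *node)
{
    if (node == NULL)
        return 0;
    return ((node->tag == NODE_SORT || node->tag == NODE_HASH) ? 1 : 0)
        + count_workspaces(node->left)
        + count_workspaces(node->right);
}

int
main(void)
{
    /* A join whose outer input is sorted and whose inner input is
     * hashed: two workspaces in total. */
    PlanNode scan1 = { NODE_SCAN, NULL, NULL };
    PlanNode scan2 = { NODE_SCAN, NULL, NULL };
    PlanNode sort  = { NODE_SORT, &scan1, NULL };
    PlanNode hash  = { NODE_HASH, &scan2, NULL };
    PlanNode join  = { NODE_JOIN, &sort, &hash };

    int total_query_mem = 65536;        /* per-query budget, in KB */
    int n = count_workspaces(&join);
    int effective_sort_mem = total_query_mem / n;

    printf("%d workspaces, %d KB each\n", n, effective_sort_mem);
    return 0;
}

The output here is "2 workspaces, 32768 KB each", which also shows the
failure mode described above: each sort or hash was costed by the
planner assuming the full SortMem, but after the division it actually
runs in TotalQueryMem/N, so a plan chosen on the assumption of a large
in-memory workspace may end up spilling to disk.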