On Sat, Jun 26, 2010 at 11:03 AM, Martijn van Oosterhout <klep...@svana.org> wrote:
> On Fri, Jun 25, 2010 at 03:15:59PM -0400, Robert Haas wrote:
>> A refinement might be to try to consider an inferior plan that uses
>> less memory when the system is tight on memory, rather than waiting.
>> But you'd have to be careful about that, because waiting might be
>> better (it's worth waiting 15 s if it means the execution time will
>> decrease by > 15 s).
>
> I think you could go a long way by doing something much simpler. We
> already generate multiple plans and compare costs, so why not just
> include memory usage as a cost? If you start doing accounting for
> memory across the cluster, you can assign a "cost" to memory. When
> there are only a few processes running, memory is cheap and you get
> plans like now. But as total memory usage increases, you increase the
> "cost" of memory, and there will be increased pressure to produce
> plans that use less of it.
>
> I think this is better than just cutting plans out at a certain
> threshold, since plans that *need* memory to work efficiently will
> still be able to get it.
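To make the proposal concrete, here is a minimal stand-alone sketch of the
idea: a memory "price" that rises with cluster-wide memory pressure and is
folded into the cost comparison between a memory-hungry path and a lean one.
This is not PostgreSQL code; every name in it (PathSketch,
mem_cost_multiplier, the memory budget figures) is hypothetical and only
meant to illustrate the shape of the mechanism.

/*
 * Sketch only (not PostgreSQL code): charge paths for predicted memory
 * use, with a multiplier that grows as the cluster's total memory use
 * approaches some budget.
 */
#include <stdio.h>

typedef struct PathSketch
{
    double  run_cost;   /* the planner's usual cost estimate */
    double  mem_bytes;  /* predicted peak memory for this path */
} PathSketch;

/* Memory gets more "expensive" as the cluster fills up. */
static double
mem_cost_multiplier(double cluster_mem_used_gb, double cluster_mem_budget_gb)
{
    double  pressure = cluster_mem_used_gb / cluster_mem_budget_gb;

    if (pressure < 0.5)
        return 0.0;                 /* plenty of room: memory is free */
    return (pressure - 0.5) * 2.0;  /* ramps from 0 toward 1 as we fill up */
}

static double
total_cost(const PathSketch *p, double multiplier)
{
    /* charge a per-MB surcharge scaled by current memory pressure */
    return p->run_cost + multiplier * (p->mem_bytes / (1024.0 * 1024.0));
}

int
main(void)
{
    PathSketch  hash_path = { 100.0, 512.0 * 1024 * 1024 };  /* fast, memory-hungry */
    PathSketch  sort_path = { 140.0,  16.0 * 1024 * 1024 };  /* slower, lean */

    double  idle = mem_cost_multiplier(2.0, 16.0);   /* 2 GB of 16 GB in use */
    double  busy = mem_cost_multiplier(14.0, 16.0);  /* 14 GB of 16 GB in use */

    printf("idle cluster: hash=%.1f sort=%.1f\n",
           total_cost(&hash_path, idle), total_cost(&sort_path, idle));
    printf("busy cluster: hash=%.1f sort=%.1f\n",
           total_cost(&hash_path, busy), total_cost(&sort_path, busy));
    return 0;
}

With these made-up numbers the memory-hungry path wins on an idle cluster
(100 vs. 140) but loses once memory is tight (484 vs. 152), which is exactly
the kind of pressure toward lower-memory plans described above, without a
hard cutoff.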
That's an interesting idea.

> (It doesn't help in situations where you can't accurately predict
> memory usage, like hash tables.)

Not sure what you mean by this part. We already predict how much memory a
hash table will use.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company