Stephen Frost <sfr...@snowman.net> writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> If we were actually trying to support such large allocations,
>> what I'd be inclined to do is introduce a separate call along the lines
>> of MemoryContextAllocLarge() that lacks the safety check.

> This sounds like the right approach to me.  Basically, I'd like to have
> MemoryContextAllocLarge(), on 64bit platforms, and have it be used for
> things like sorts and hash tables.  We'd need to distinguish that usage
> from things which allocate varlena's and the like.

Yes, but ...

>> But before expending time on that, I'd want to see some evidence that
>> it's actually helpful for production situations.  I'm a bit dubious
>> that you're going to gain much here.

> I waited ~26hrs for a rather simple query:

The fact that X is slow does not prove anything about whether Y will
make it faster.  In particular, I see nothing here showing that this
query is bumping up against the 1GB-for-sort-pointers limit, or that,
if it is, any significant gain would result from raising it.

I think the only real way to prove that is to hack the source code to
remove the limit and see what happens.  (You could try using malloc
directly, not palloc at all, to get a non-production-quality but very
localized patch to test with.)

BTW, it sounded like your argument had to do with whether it would use
HashAgg or not -- that is *not* dependent on the per-palloc limit, and
never has been.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers