On Sun, 2015-06-14 at 16:21 -0400, Tom Lane wrote:
> Meh. HashAgg could track its memory usage without loading the entire
> system with a penalty.
When I tried doing it outside of the MemoryContexts, it seemed to get
awkward quickly. I'm open to suggestion if you can point me in the right
direction. Maybe I can peek at the sizes of the chunks holding the state
values and group keys, and combine that with the hash table
size-estimating code? (A rough sketch of what I mean is at the end of
this message.)

> Moreover, this is about fourth or fifth down the
> list of the implementation problems with spilling hash aggregation to
> disk. It would be good to see credible solutions for the bigger issues
> before we buy into adding overhead for a mechanism with no uses in core.

I went through several iterations of a full implementation of the
spill-to-disk HashAgg patch[1]. Tomas Vondra had some constructive review
comments, but all of them seemed solvable. What do you see as a major
unsolved issue?

If I recall correctly, you were concerned about things like array_agg,
where an individual state could get larger than work_mem. That's a valid
concern, but it's not the problem I was trying to solve.

Regards,
    Jeff Davis

[1] http://www.postgresql.org/message-id/1407706010.6623.16.camel@jeff-desktop
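
Very roughly, the kind of accounting I have in mind looks something like
the sketch below. It is untested and only illustrative: the struct and
helper names are made up, and the only existing backend symbols it relies
on are GetMemoryChunkSpace() and work_mem.

/*
 * Untested sketch only: HashAggMemoryUsage, hashagg_count_chunk(),
 * hashagg_memory_used() and hashagg_over_budget() are illustrative
 * names, not real nodeAgg.c code.  Only GetMemoryChunkSpace() and
 * work_mem are existing backend symbols.
 */
#include "postgres.h"

#include "miscadmin.h"			/* work_mem */
#include "utils/memutils.h"		/* GetMemoryChunkSpace() */

typedef struct HashAggMemoryUsage
{
	Size		group_key_bytes;	/* chunks holding grouping key tuples */
	Size		trans_value_bytes;	/* chunks holding by-ref trans values */
	Size		hashtable_bytes;	/* estimate for the hash table proper */
} HashAggMemoryUsage;

/* Charge one palloc'd chunk (a group key tuple or a transition value). */
static void
hashagg_count_chunk(HashAggMemoryUsage *usage, void *chunk, bool is_key)
{
	Size		sz = GetMemoryChunkSpace(chunk);

	if (is_key)
		usage->group_key_bytes += sz;
	else
		usage->trans_value_bytes += sz;
}

/* Total memory we believe the hash aggregation is using. */
static Size
hashagg_memory_used(const HashAggMemoryUsage *usage)
{
	return usage->group_key_bytes +
		usage->trans_value_bytes +
		usage->hashtable_bytes;
}

/* Have we exceeded the budget?  (work_mem is in kilobytes.) */
static bool
hashagg_over_budget(const HashAggMemoryUsage *usage)
{
	return hashagg_memory_used(usage) > (Size) work_mem * 1024L;
}

The counting would hook into the places where a new group's key tuple is
copied into the hash table and where a by-ref transition value is
(re)allocated; hashtable_bytes could be refreshed from the same kind of
per-entry estimate the existing size-estimating code already makes; and
the spill decision would just compare the total against the work_mem
budget whenever a new group is about to be entered.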