On Mon, Jun 29, 2020 at 8:29 AM Bruce Momjian <br...@momjian.us> wrote:
> Is this something we want to codify for all node types,
> i.e., choose a non-spill node type if we need a lot more than work_mem,
> but then let work_mem be a soft limit if we do choose it, e.g., allow
> 50% over work_mem in the executor for misestimation before spill?  My
> point is, do we want to use a lower work_mem for planning and a higher
> one in the executor before spilling?

Andres said something about doing that with hash aggregate, which I
can see an argument for, but I don't think that it would make sense
with most other nodes. In particular, sorts still perform almost as
well with only a fraction of the "optimal" memory.
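
For what it's worth, the reason sorts degrade so gracefully is that a
smaller memory budget just means producing more (smaller) sorted runs;
the final merge is still a single streaming pass over the data. A toy
sketch of that shape in Python (nothing to do with tuplesort's actual
code; external_sort and mem_budget are names I made up for
illustration):

import heapq
from itertools import islice

def external_sort(rows, mem_budget):
    # Toy sketch, not PostgreSQL code.  Cut the input into runs of at
    # most mem_budget rows, sort each run in memory, then do a single
    # streaming k-way merge.  Halving mem_budget roughly doubles the
    # number of runs, but every row is still written to and read from
    # a run once, which is why the total cost changes so little.
    it = iter(rows)
    runs = []
    while True:
        run = sorted(islice(it, mem_budget))
        if not run:
            break
        runs.append(run)            # stands in for a spilled tape
    return list(heapq.merge(*runs))

print(external_sort([5, 3, 8, 1, 9, 2, 7], mem_budget=2))
# [1, 2, 3, 5, 7, 8, 9]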

> My second thought is from an earlier report that spilling is very
> expensive, but smaller work_mem doesn't seem to hurt much.

It's not really about the spilling itself IMV. It's the inability to
do hash aggregation in a single pass.
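
As a toy illustration (Python, not how nodeAgg.c actually batches;
hashagg_in_memory, hashagg_with_spill, and max_groups are made-up
names, and real hash aggregation can recurse into further batches):
when the hash table fits, every input row is touched exactly once,
but once groups have to be spilled, the spilled rows must be written
out and then read and aggregated again.

from collections import defaultdict

def hashagg_in_memory(rows):
    # Single pass: one in-memory hash table, each row touched once.
    acc = defaultdict(int)
    for key, val in rows:
        acc[key] += val
    return dict(acc)

def hashagg_with_spill(rows, max_groups, npartitions=4):
    # Toy stand-in for spilling: once the table is "full", rows for
    # unseen keys go to per-partition spill lists and must be read
    # and aggregated again in a later pass.
    acc = defaultdict(int)
    spill = [[] for _ in range(npartitions)]
    for key, val in rows:
        if key in acc or len(acc) < max_groups:
            acc[key] += val
        else:
            spill[hash(key) % npartitions].append((key, val))
    result = dict(acc)
    for batch in spill:                 # the extra passes
        result.update(hashagg_in_memory(batch))
    return result

rows = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
assert hashagg_in_memory(rows) == hashagg_with_spill(rows, max_groups=1)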

You can think of hashing (say for hash join or hash aggregate) as a
strategy that consists of a logical division followed by a physical
combination. Sorting (or sort-merge join, or group aggregate), in
contrast, consists of a physical division followed by a logical
combination. As a consequence, it can be a huge win to do everything
in memory in the case of hash aggregate. Sort-based aggregation, on
the other hand, can sometimes be slightly faster with an external
sort, due to CPU caching effects, and because an on-the-fly merge in
tuplesort can output the first tuple before the input is fully sorted.
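
The on-the-fly merge point is easy to see with any lazy merge (a
trivial Python stand-in, not tuplesort itself): once the sorted runs
exist, the merge can return its first row after comparing only the
run heads, long before the rest of the input has been consumed.

import heapq

# Toy sketch: two already-sorted runs, as tuplesort would have after
# run generation (the "physical division" step).
run_a = iter([1, 4, 7])
run_b = iter([2, 3, 9])

merged = heapq.merge(run_a, run_b)   # lazy: no merging done yet
print(next(merged))                  # 1 -- the first "tuple" comes out
                                     # before most of the input has
                                     # even been looked at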

> Would we
> achieve better overall performance by giving a few nodes a lot of memory
> (and not spill those), and other nodes very little, rather than having
> them all be the same size, and all spill?

If the nodes that we give more memory to use it for a hash table, then yes.

-- 
Peter Geoghegan

