-----Original Message-----
From: David Rowley <dgrowle...@gmail.com> 
Sent: Thursday, July 22, 2021 12:18
To: Peter Geoghegan <p...@bowt.ie>
Cc: Tom Lane <t...@sss.pgh.pa.us>; Jeff Davis <pg...@j-davis.com>; 
l...@laurent-hasson.com; Justin Pryzby <pry...@telsasoft.com>; 
pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3

On Fri, 23 Jul 2021 at 04:14, Peter Geoghegan <p...@bowt.ie> wrote:
>
> On Thu, Jul 22, 2021 at 8:45 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
> > That is ... weird.  Maybe you have found a bug in the spill-to-disk 
> > logic; it's quite new after all.  Can you extract a self-contained 
> > test case that behaves this way?
>
> I wonder if this has something to do with the way that the input data 
> is clustered. I recall noticing that that could significantly alter 
> the behavior of HashAggs as of Postgres 13.

Isn't it more likely to be reaching the group limit rather than the memory 
limit?

    if (input_groups * hashentrysize < hash_mem * 1024L)
    {
        if (num_partitions != NULL)
            *num_partitions = 0;
        *mem_limit = hash_mem * 1024L;
        *ngroups_limit = *mem_limit / hashentrysize;
        return;
    }

There are 55 aggregates on a varchar(255), so I think hashentrysize is pretty big. 
If it were 255 * 55 bytes, then only 765591 groups would fit in the 10GB of memory.

David



--------------------------------------------------------------------------------

Hello,

So, FYI... The query I shared is actually one of our simpler use cases 😊 We do 
have a similar pivot query over 600 columns to create a large flat table for 
analysis on an even larger table. It takes about 15 min to run on V11 with heavy 
CPU usage and no particular memory usage spike that I can detect via 
Task Manager. We have been pushing PG hard to simplify the workflows of our 
analysts and data scientists downstream.

Thank you,
Laurent.
