2010/10/1 Tom Lane <t...@sss.pgh.pa.us>:
> Hitoshi Harada <umi.tan...@gmail.com> writes:
>> 2010/10/1 Tom Lane <t...@sss.pgh.pa.us>:
>>> If this patch tries to force the entire sort to happen in memory,
>>> it is not committable.
>
>> What about array_agg()? Doesn't it exceed memory even if the huge data come 
>> in?
>
> Yeah, but for array_agg the user should be expecting a result of
> approximately the size of the whole input, so if he overruns memory he
> hasn't got a lot of room to complain.  There is no reason for a user to
> expect that median or percentile will fall over on large input, and
> every reason to expect them to be more robust than that.

So this is a problem particular to *median* itself, not with the general
idea of a tuplesort that is guaranteed to stay in memory.

If this approach is not committable, one alternative is to implement
median as a window function rather than as an aggregate. The big problem
there is that it's impossible to have both an aggregate and a window
function with the same name and input types; AFAIK they are ambiguous at
the parser stage. So we would need median() as the aggregate and
something like median_w() OVER () for the window case, which feels like
a worse idea to me.

Another way is to modify nodeWindowAgg somehow, but I cannot work out
how. We would need to call some kind of destructor at the end of each
partition, and that is outside the current aggregate infrastructure.
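
To make that idea concrete, here is a purely hypothetical sketch of what
such a hook might look like. AggRegisterPartitionDestructor and the
callback type are invented names for illustration only; nothing like this
exists in nodeWindowAgg today, and only tuplesort_end() is a real API:

#include "postgres.h"
#include "fmgr.h"
#include "utils/tuplesort.h"

/*
 * HYPOTHETICAL: a callback that nodeWindowAgg would invoke when it
 * discards an aggregate's transition value at the end of a partition.
 */
typedef void (*AggPartitionDestructor) (Datum transValue);

/* HYPOTHETICAL registration hook, called from the transition function. */
extern void AggRegisterPartitionDestructor(FunctionCallInfo fcinfo,
                                           AggPartitionDestructor destructor);

/*
 * What median's destructor would do: release the tuplesort (and any temp
 * files it spilled to) held inside the transition state.
 */
void
median_state_destructor(Datum transValue)
{
    Tuplesortstate *sortstate = (Tuplesortstate *) DatumGetPointer(transValue);

    if (sortstate != NULL)
        tuplesort_end(sortstate);
}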

The bottom line may be to throw an error when median is called as a
window aggregate, although personally I would like to see median usable
as a window aggregate, which would be quite nice.
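
For illustration, the check in the transition function could look roughly
like this, assuming we use AggCheckCallContext() (new in 9.0) to detect
the window-aggregate case; median_transfn, the error text, and the actual
accumulation into a tuplesort are only placeholders:

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(median_transfn);

/*
 * Sketch of a transition function for a sort-based median aggregate.
 * Only the window-context check is shown; accumulating the input value
 * into the transition state is omitted.
 */
Datum
median_transfn(PG_FUNCTION_ARGS)
{
    /*
     * Refuse to run as a window aggregate: the per-partition sort state
     * could not be cleaned up when the partition ends.
     */
    if (AggCheckCallContext(fcinfo, NULL) == AGG_CONTEXT_WINDOW)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("median cannot be used as a window aggregate")));

    /* ... feed the input value into the sort state here ... */
    PG_RETURN_DATUM(PG_GETARG_DATUM(0));
}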

Any other suggestions?

Regards,


-- 
Hitoshi Harada
