On 02/04/2012 18:06, Tom Lane wrote:
> Ronan Dunklau <rdunk...@gmail.com> writes:
>> I'm trying to define a "weighted mean" aggregate using postgresql create
>> aggregate feature.
>
>> I've been able to quickly write the required pgsql code to get it
>> working, but after testing it on a sample 10000 rows table, it seems to
>> be approximately 6 to 10 times slower than pure sql.
>
> It might help to use a two-element array for the transition state,
> instead of a custom composite type.

It does not change much.
Altering the transition state in place instead of building a new one on
every call does help, though, and when the state is altered in place the
composite-type version seems to be faster than the array one. But it is
still much slower than the hand-written sql version.

>
>> My initial implementation was in pl/pgsql, and did not mark the
>> functions as immutable. I did so after a suggestion from an irc user,
>> but it did not change anything performance wise.
>
> Those suggestions would possibly help for a function that's meant to be
> inlined into larger SQL expressions, but they won't do much for an
> aggregate support function.  I'm not real sure, but I think plpgsql
> might be faster in this context.
>
> Another thing to think about is whether you really need type numeric
> here.  float8 would be a lot faster ... though you might have roundoff
> issues.

The "hand-written" sql using only built-in functions performs really
well with numerics. Why do you suggest that it could be the bottleneck ?
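
For comparison, the hand-written query is essentially this (table and
column names are only illustrative):

-- Plain SQL weighted mean over an assumed table t(value numeric, weight numeric).
SELECT sum(value * weight) / nullif(sum(weight), 0) AS weighted_mean
FROM t;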

I solved the problem by writing a C extension for it:

http://pgxn.org/dist/weighted_mean/1.0.0/
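
Usage looks roughly like this, assuming the PGXN distribution installs
as an extension named weighted_mean exposing an aggregate of the same
name; see the page above for the exact signature:

CREATE EXTENSION weighted_mean;
-- Aggregate over (value, weight), same shape as the plpgsql sketch above.
SELECT weighted_mean(value, weight) FROM t;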

Regards,

-- 
Ronan Dunklau

