2010/1/24 Tom Lane <t...@sss.pgh.pa.us>:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> On Fri, 2010-01-22 at 11:14 -0800, David E. Wheeler wrote:
>>> No performance issues
>
>> ISTM that this class of function is inherently dangerous
>> performance-wise.
>
>> * It looks incredibly easy to construct enormous lists. We should test
>> the explosion limit of this to see how it is handled. Perhaps we need
>> some parameter limits to control that, depending upon results.
>
>> * The optimizer doesn't consider whether the result type of an aggregate
>> gets bigger as the aggregate processes more rows. If we're adding this
>> function we should give some thought in that area also, or at least a
>> comment to note that it can and will cause the optimizer problems in
>> complex queries.
>
> We have that problem already with array_agg(), and I don't recall many
> complaints about it. It might be worth worrying about at some point,
> but I don't think it's reasonable to insist that it be fixed before
> any more such aggregates are created.
>
> I agree that testing-to-failure would be a good idea just to be sure it
> fails cleanly.
postgres=# \timing
Timing is on.
postgres=# select pg_size_pretty(length(string_agg('012345678901234567890'::text,','))) from generate_series(1,10000000) g(i);
 pg_size_pretty
----------------
 210 MB
(1 row)

Time: 5831,218 ms
postgres=# select pg_size_pretty(length(string_agg('012345678901234567890'::text,','))) from generate_series(1,50000000) g(i);
ERROR:  out of memory
DETAIL:  Cannot enlarge string buffer containing 1073741812 bytes by 21 more bytes.
postgres=#

I think 210 MB is more than is necessary :)

Regards

Pavel Stehule
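P.S. The numbers line up with the ~1 GB palloc limit (MaxAllocSize). As a
rough check, assuming each aggregated row contributes its 21 data bytes
plus a 1-byte separator and ignoring any other overhead:

select pg_size_pretty(22::bigint * 10000000);  -- 210 MB, under the limit
select pg_size_pretty(22::bigint * 50000000);  -- ~1049 MB, over the ~1 GB limit

so the second query is expected to fail, and it does fail cleanly once the
string buffer reaches 1073741812 bytes and cannot grow by another 21.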