Thom Brown wrote:
> It's a shame I can't optimise it, though, as the real case that runs
> is with a limit of 4000, which takes a long time to complete.
Perhaps you should post the real case.
-Kevin
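
Capturing the real case is mostly a matter of running it under EXPLAIN
ANALYZE at the real limit. A sketch of what that might look like; the
query body is an assumption, since only the table name and the limit of
4000 appear in the thread:

    -- Hypothetical reconstruction of "the real case"; the SELECT list and
    -- GROUP BY column are guesses based on identifiers quoted elsewhere
    -- in the thread. EXPLAIN (ANALYZE, BUFFERS) is available as of 9.0.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT "binaryID", count(*)
    FROM parts_2576
    GROUP BY "binaryID"
    LIMIT 4000;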
On 28 May 2010 19:54, Tom Lane wrote:
> Thom Brown writes:
>> I get this:
>
>> Limit  (cost=0.00..316895.11 rows=400 width=211) (actual
>> time=3.880..1368.936 rows=400 loops=1)
>>   ->  GroupAggregate  (cost=0.00..41843621.95 rows=52817 width=211)
>> (actual time=3.872..1367.048 rows=400 loops=1)
>>         ->  Index Scan using "binaryID_25
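
A note on those numbers: the Limit node's cost estimate is essentially
the GroupAggregate's total cost scaled by the fraction of groups
fetched, 41843621.95 * (400 / 52817) ~= 316895, so the planner expects a
limit of 4000 to cost roughly ten times as much. A plan of this shape
(Limit over GroupAggregate over an index scan) typically comes from
grouping on an indexed column, so groups stream out in index order with
no sort step. A sketch of a query that could produce it; the names are
assumptions based only on identifiers quoted in the thread:

    -- Grouping on an indexed column lets the planner feed GroupAggregate
    -- from an index scan; with a LIMIT, its low startup cost beats a
    -- hash aggregate over the whole table.
    EXPLAIN ANALYZE
    SELECT "binaryID", count(*)
    FROM parts_2576
    GROUP BY "binaryID"
    LIMIT 400;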
I'm using PostgreSQL 9.0 beta 1. I've got the following table definition:
# \d parts_2576
Table "public.parts_2576"
 Column | Type | Modifiers
--------+------+-----------
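
The \d output is cut off here. The full index definitions can be pulled
straight from the catalogs; this sketch assumes nothing beyond the table
name:

    -- Show every index on parts_2576 with its full definition (the
    -- quoted plan truncates the index name at "binaryID_25").
    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename = 'parts_2576';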