On 27 Jan 2011, at 15:04, Michael Kemanetzis wrote:

> Hello, I'm experiencing a strange behavior running a simple select query on a
> table that has about 12 million rows. Specifically, changing the "LIMIT"
> value seems to change the execution plan but the result in one of the cases
> is unjustifiably slow, as if it ignores all indexes.
>
> The table structure, the queries used and the results are here:
> http://pastebin.com/fn36BuKs
>
> Is there anything I can do to improve the speed of this query?
What does explain analyse say about query B?

According to the query plan there are about 30k rows with veh_id = 3. From the amount of disk I/O you describe, it would appear that the rows corresponding to that ID are scattered all over the table. I expect that clustering the table on the veh_id index will help in that case (a rough sketch of what I mean is below).

It does seem a bit strange that the planner is choosing an index scan for 30k records; I'd expect a sequential scan to be more efficient for that many rows. That seems to be another indication that your records are very fragmented with regard to veh_id. That, or you are running out of memory (the setting to look at is work_mem, I think). Did you change any settings from the defaults?

BTW, 12M records isn't considered a large table by many Postgres users. It's certainly not small though; I suppose "average" fits the bill.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.
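P.S. Here's roughly what I had in mind. I'm guessing at the table and index names (I'll call them "vehicle_data" and "vehicle_data_veh_id_idx"; substitute the real ones from your pastebin), and the SELECT is only a stand-in for your actual query B:

    -- 1. Get the actual plan and timings, not just the estimate:
    EXPLAIN ANALYSE
    SELECT * FROM vehicle_data WHERE veh_id = 3 LIMIT 100;

    -- 2. Physically rewrite the table in veh_id order so the rows for
    --    one vehicle end up close together on disk. Note that CLUSTER
    --    takes an exclusive lock and rewrites the whole table, so do
    --    this in a quiet moment:
    CLUSTER vehicle_data USING vehicle_data_veh_id_idx;
    ANALYSE vehicle_data;

    -- 3. Check whether work_mem was changed from the (small) default,
    --    and try a bigger value for the current session only:
    SHOW work_mem;
    SET work_mem = '32MB';

After the CLUSTER, re-run the EXPLAIN ANALYSE and compare the timings; if the index scan is still slow, post that output here.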