Dmitry Tkach <[EMAIL PROTECTED]> writes:
Well... It *does* match (almost) all the rows (there are about a million rows before that key, and the remaining 79 million after)...
Also, I have another copy (not an exact copy, but identical schema and similar content... about half the size) of the original database... I tried my query on it, and it works right too.
So there must be something wrong with that particular database, I suppose...
Hmm. Take a look at the pg_stats statistics for the id columns in each case. Could the ones for the misbehaving tables be out of whack somehow? I'm wondering for example if the planner discounted the >= condition because it thought it would match all the rows.
The stats do look in sync with that:
for a:
stavalues1 | {1000488,33495482,69111011,99286820,129611281,204441828,331968789,508451171,782660252,869480434,989787700}
for b:
stavalues1 | {1008692,54892364,110119463,192551141,300490851,389609207,465139533,570442801,706876547,849087358,989851076}
(The key in the criteria was 7901288 - somewhere in the first bucket)
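In case it helps, something along these lines is how the same numbers can be pulled out through the pg_stats view (I'm writing the example with tables named a and b and a column named id, which is just how I'm illustrating it; stavalues1 in pg_statistic is what pg_stats shows as histogram_bounds when slot 1 holds the histogram):

   -- histogram bounds for the id column in each table
   SELECT tablename, attname, histogram_bounds
   FROM pg_stats
   WHERE tablename IN ('a', 'b')
     AND attname = 'id';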
*But* isn't my 'limit' clause supposed to affect that decision? I mean, even though the filter isn't very selective, it should still speed up getting the *first* match...
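The difference should at least be visible in EXPLAIN; here is a rough sketch of the comparison I have in mind (the join shape and names are only illustrative, not my actual query):

   -- plan chosen when the planner knows only the first row is needed
   EXPLAIN
   SELECT *
   FROM a JOIN b ON a.id = b.id   -- illustrative join, not the real query
   WHERE a.id >= 7901288
   LIMIT 1;

   -- same query without the LIMIT, for comparison
   EXPLAIN
   SELECT *
   FROM a JOIN b ON a.id = b.id
   WHERE a.id >= 7901288;

With a LIMIT, the planner charges each plan its startup cost plus only the fraction of its total cost corresponding to the rows actually fetched, so a fast-starting plan can win even when the filter matches most of the table.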
Thanks!
Dima
P.S. I also tried to look at the stats of that other database I mentioned... The stats for b look similar:
stavalues1 | {1028104,25100079,50685614,78032989,105221902,135832793,199827486,611968165,807597786,884897604,969971779}
But the stats for a are just *not there at all* (is that even possible?)
Could that be the reason it works on that database (because it uses the default stats instead of the real thing)?
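If that's what is going on, something like this should confirm the missing entry and put it back (again assuming the table is a and the column is id, as in the examples above):

   -- check whether any statistics exist for the column
   SELECT count(*)
   FROM pg_stats
   WHERE tablename = 'a'
     AND attname = 'id';

   -- if that returns 0, regenerate the statistics
   ANALYZE a;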