I've got an interesting question, and I can probably see both sides of the debate, but...
Say you have a fairly large table, several million records, with a key that has a fairly large number of duplicate rows. It is a user favorites table; each user has a number of entries. My problem is: if you select by the user name, it does an index scan. If you select the whole table ordered by the user name, it does a sequential scan plus a sort, not an index scan. Arguably that may be the faster query overall, but at the cost of many more resources and a long delay before any results are returned. Is this the best behavior? Does anyone have any opinions?

cdinfo=# select count(*) from favorites ;
  count
---------
 4626568
(1 row)

cdinfo=# explain select * from favorites where scene_name_full = 'someone' ;
NOTICE:  QUERY PLAN:

Index Scan using fav_snf on favorites  (cost=0.00..258.12 rows=64 width=73)

EXPLAIN
cdinfo=# explain select * from favorites order by scene_name_full ;
NOTICE:  QUERY PLAN:

Sort  (cost=1782827.92..1782827.92 rows=4724736 width=73)
  ->  Seq Scan on favorites  (cost=0.00..113548.36 rows=4724736 width=73)

EXPLAIN
cdinfo=# set enable_seqscan=FALSE ;
SET VARIABLE
cdinfo=# explain select * from favorites order by scene_name_full ;
NOTICE:  QUERY PLAN:

Index Scan using fav_snf on favorites  (cost=0.00..18682771.23 rows=4724736 width=73)

EXPLAIN
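
For anyone who wants to poke at this, here is a rough sketch of the setup. Only the scene_name_full column and the fav_snf index are visible above; the rest of the column list is a guess:

-- hypothetical reconstruction of the table; only scene_name_full and
-- the fav_snf index are confirmed by the plans above
CREATE TABLE favorites (
    scene_name_full  varchar(64),   -- user name, many duplicate values
    item_id          integer        -- placeholder for whatever is being favorited
);
CREATE INDEX fav_snf ON favorites (scene_name_full);

-- the two plans in question
EXPLAIN SELECT * FROM favorites WHERE scene_name_full = 'someone';
EXPLAIN SELECT * FROM favorites ORDER BY scene_name_full;

Incidentally, if the real concern is the delay before the first rows come back, adding a LIMIT (or fetching through a cursor) tells the planner that only a fraction of the rows is wanted, and it may then prefer the index scan on its own:

EXPLAIN SELECT * FROM favorites ORDER BY scene_name_full LIMIT 100;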