The following bug has been logged on the website:

Bug reference:      7665
Logged by:          David Popiashvili
Email address:      dato0...@hotmail.com
PostgreSQL version: 9.2.1
Operating system:   Windows 8 x64

Description:
The problem concerns the LIMIT keyword and how it affects the query planner. I have a database with 6 tables and ~450 million rows distributed among them. When I run the following query:

    SELECT *
    FROM "Payments" AS p
    INNER JOIN "PaymentOrders" AS po ON po."Id" = p."PaymentOrderId"
    INNER JOIN "Users" AS u ON u."Id" = po."UserId"
    INNER JOIN "Roles" AS r ON u."RoleId" = r."Id"
    WHERE r."Name" = 'Moses'
    LIMIT 1000

it completes in several milliseconds. But if I change the WHERE clause to r."Name" = 'SomeNonExistentName', the query takes a very, very long time to complete. You can see the EXPLAIN ANALYZE details here: http://explain.depesz.com/s/7e7

From the discussion with other community members on StackOverflow (http://stackoverflow.com/questions/13407555/postgresql-query-taking-too-long/13415984), we think that because the query contains LIMIT, PostgreSQL assumes it will find rows matching the WHERE clause soon enough, and therefore chooses a sequential scan over the tables, which is wrong here. As I already mentioned, when r."Name" matches nothing, the query takes far too long: it scans several hundred million rows. The cure is simply to remove the LIMIT keyword from the query, which yields the following query plan: http://explain.depesz.com/s/0wOq. However, removing LIMIT is not always an option.

Here is the full postgresql.conf file I'm running: https://dl.dropbox.com/u/3055964/postgresql.conf

I've been recommended to let you know about the issue. In case of questions, don't hesitate to contact me.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
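[Editor's note: for readers who hit the same behavior, a commonly cited workaround, not part of the original report, is to keep the LIMIT but hide it from the planner's row-count assumption. In PostgreSQL 9.2 a CTE (WITH clause) acts as an optimization fence, so the inner query is planned as if no LIMIT were present; the table and column names below are taken from the report's query.]

```sql
-- Sketch of the CTE-fence workaround (assumes the schema from the report).
-- The inner query is planned without knowledge of the outer LIMIT,
-- so the planner picks the index-driven plan it would use without LIMIT.
WITH matches AS (
    SELECT p.*
    FROM "Payments" AS p
    INNER JOIN "PaymentOrders" AS po ON po."Id" = p."PaymentOrderId"
    INNER JOIN "Users" AS u ON u."Id" = po."UserId"
    INNER JOIN "Roles" AS r ON u."RoleId" = r."Id"
    WHERE r."Name" = 'SomeNonExistentName'
)
SELECT * FROM matches
LIMIT 1000;
```

A similar effect is often achieved with an `OFFSET 0` subquery; either way this is a planner hint by side effect, not a documented guarantee, and later PostgreSQL releases changed CTE inlining behavior.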