> -----Original Message-----
> From: Craig Ringer [mailto:cr...@postnewspapers.com.au]
> Sent: Monday, June 06, 2011 5:08 PM
> To: mark
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] not exits slow compared to not in. (nested loops
> killing me)
>
> On 06/07/2011 04:38 AM, mark wrote:
Craig Ringer writes:
> On 06/07/2011 04:38 AM, mark wrote:
>> NOT EXISTS (with 64MB of work_mem)
>> http://explain.depesz.com/s/EuX
>> Hash Anti Join (cost=443572.19..790776.84 rows=1 width=1560)
>>   (actual time=16337.711..50358.487 rows=2196299 loops=1)
> Note the estimated vs actual rows. Either your stats are completely
> ridiculous, or the planner is confused.
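When the estimate (rows=1) is that far off from the actual count (2,196,299), the usual first steps are a fresh ANALYZE and, if that is not enough, a larger statistics target on the anti-join column. A minimal sketch, with stand-in table and column names since the original query isn't shown here:

  -- Re-gather statistics so the planner sees the current data distribution.
  ANALYZE big_table;

  -- If the estimate is still badly off, sample the anti-join column more
  -- heavily and re-analyze.
  ALTER TABLE big_table ALTER COLUMN ref_id SET STATISTICS 500;
  ANALYZE big_table;

  -- Then compare estimated vs actual rows again.
  EXPLAIN ANALYZE
  SELECT *
  FROM   big_table b
  WHERE  NOT EXISTS (SELECT 1 FROM other_table o WHERE o.ref_id = b.ref_id);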
Josh Berkus writes:
> Just got this simple case off IRC today:
> [ hashed versus non-hashed subplan ]
> I'm at a bit of a loss as to what's happening here.
Possibly work_mem is smaller in the second installation?
(If I'm counting on my fingers right, you'd need a setting of at least a
couple MB for the planner to switch to the hashed subplan here.)
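That is cheap to verify per session before touching postgresql.conf; a sketch using the query from the original post below (the 64MB figure is just an example):

  SET work_mem = '64MB';

  -- If work_mem was the problem, EXPLAIN should now show a "hashed SubPlan"
  -- instead of re-executing the subquery for every outer row.
  EXPLAIN ANALYZE
  SELECT email
  FROM   u_contact
  WHERE  id NOT IN (SELECT contact_id FROM u_user);

  RESET work_mem;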
Thanks for the suggestion, maintenance_work_mem is set to the default of
16MB on the host that was taking over an hour as well as on the host that
was taking less than 10 minutes. I tried setting it to 1GB on the faster
test server and it reduced the time from around 6-7 minutes to about 3:30.
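If the box has the memory to spare, it can also be raised only for the session that runs the DDL rather than in postgresql.conf; a minimal sketch (1GB is simply the value tried above, not a recommendation):

  SET maintenance_work_mem = '1GB';
  -- ... run the ALTER TABLE ... ADD CONSTRAINT / CREATE INDEX here ...
  RESET maintenance_work_mem;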
All,
Just got this simple case off IRC today:
PostgreSQL 8.4.4
This plan completes in 100ms:
old_prod=# explain analyze select email from u_contact where id not in
(select contact_id from u_user);
QUERY PLAN
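For comparison, a sketch of the NOT EXISTS form of the same query; the planner can plan this as an anti-join regardless of work_mem, assuming contact_id is never NULL (the two forms differ when NULLs are involved):

  explain analyze
  select email
  from   u_contact c
  where  not exists (select 1
                     from   u_user u
                     where  u.contact_id = c.id);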
Hi all,
I am trying to speed up a query on a DB I inherited and I am falling
flat on my face.
I changed a query from NOT IN to use NOT EXISTS and my query time went
from 19000ms to several hours (~5000000 ms). This shocked me so much
I pretty much had to post. This seems like a corner case of
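The two forms are only equivalent when the subquery column cannot produce NULLs, which is also why the planner has to treat them differently; a tiny illustration with throwaway tables:

  CREATE TEMP TABLE a (id int);
  CREATE TEMP TABLE b (a_id int);
  INSERT INTO a VALUES (1), (2);
  INSERT INTO b VALUES (1), (NULL);

  -- NOT IN returns nothing as soon as the subquery yields a NULL,
  -- so it cannot blindly be turned into an anti-join.
  SELECT id FROM a WHERE id NOT IN (SELECT a_id FROM b);     -- 0 rows

  -- NOT EXISTS ignores the NULL and behaves as an anti-join.
  SELECT id FROM a
  WHERE  NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);   -- 1 row: id = 2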
Mike Broers writes:
> I am in the process of implementing cascade on delete constraints
> retroactively on rather large tables so I can cleanly remove deprecated
> data. The problem is recreating some foreign key constraints on tables of
> 55 million rows+ was taking much longer than the maintena
I originally posted this on admin, but it was suggested to post it to
performance so here goes -
I am in the process of implementing cascade on delete constraints
retroactively on rather large tables so I can cleanly remove deprecated
data. The problem is recreating some foreign key constraints on tables of
55 million rows+
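The recreate step itself is just a drop-and-add of the constraint; a sketch with made-up table, column, and constraint names, wrapped in one transaction so the FK is never missing, with maintenance_work_mem raised locally for the validation scan:

  BEGIN;

  SET LOCAL maintenance_work_mem = '1GB';

  ALTER TABLE child_table
      DROP CONSTRAINT child_table_parent_id_fkey;

  ALTER TABLE child_table
      ADD CONSTRAINT child_table_parent_id_fkey
      FOREIGN KEY (parent_id) REFERENCES parent_table (id)
      ON DELETE CASCADE;

  COMMIT;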
First off, this is posted to the wrong list -- this list is for
discussion of development of the PostgreSQL product. There is a
list for performance questions where this belongs:
pgsql-performance@postgresql.org. I'm moving this to the
performance list with a blind copy to the -hackers list so people know
where the discussion went.
On 06.06.2011 12:43, Heikki Linnakangas wrote:
Also, even when safe, it's not clear that the transformation is always a
win. The left-hand expression could be expensive, in which case having
to evaluate it multiple times could hurt performance.
Sorry, hit "send" too early.
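The "expensive left-hand expression" concern is easy to picture with a hand-rolled rewrite: in the NOT EXISTS form the expression moves into the correlated subquery's join condition, where, depending on the plan, it may be evaluated more than once per outer row. A throwaway sketch (slow_func stands in for an expensive expression):

  CREATE TEMP TABLE t (x int);
  CREATE TEMP TABLE s (y int);
  CREATE FUNCTION pg_temp.slow_func(int) RETURNS int
      LANGUAGE sql COST 10000 AS 'SELECT $1';

  -- NOT IN form: the expression is computed once per row of t to probe
  -- the subquery result.
  EXPLAIN SELECT * FROM t
  WHERE  pg_temp.slow_func(t.x) NOT IN (SELECT y FROM s);

  -- Hand-rewritten NOT EXISTS form: the expression is now part of the
  -- join condition.
  EXPLAIN SELECT * FROM t
  WHERE  NOT EXISTS (SELECT 1 FROM s WHERE s.y = pg_temp.slow_func(t.x));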
On 15.03.2011 14:30, Chetan Suttraway wrote:
On Sun, Feb 27, 2011 at 2:43 AM, Josh Berkus wrote:
On 2/25/11 5:31 AM, Sam Wong wrote:
I found that "LIKE", "= ANY (...)", "LIKE .. OR LIKE .." against a text
field used the index correctly, but not "LIKE ANY (...)". Would that be a
bug?
No, it's not a bug.
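A concrete illustration (the table and index are made up; the behaviour is as described in the quoted report):

  CREATE TEMP TABLE docs (title text);
  CREATE INDEX ON docs (title text_pattern_ops);

  -- On a populated table these forms can use the index for prefix matches:
  SELECT * FROM docs WHERE title LIKE 'foo%';
  SELECT * FROM docs WHERE title LIKE 'foo%' OR title LIKE 'bar%';
  SELECT * FROM docs WHERE title = ANY (ARRAY['foo', 'bar']);

  -- LIKE ANY is not expanded into index conditions, so it ends up as a
  -- filter over a full scan:
  SELECT * FROM docs WHERE title LIKE ANY (ARRAY['foo%', 'bar%']);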
On Thu, Feb 10, 2011 at 7:32 PM, Craig James wrote:
> On 2/10/11 9:21 AM, Kevin Grittner wrote:
>>
>> Shaun Thomas wrote:
>>
>>> how difficult would it be to add that syntax to the JOIN
>>> statement, for example?
>>
>> Something like this syntax?:
>>
>> JOIN WITH (correlation_factor=0.3)
>>