Is there a workaround?
To us this is pretty bad news; we receive updates from several partners and 
constantly update the counts, as in the example I sent you...
Obviously we can serialize the updates, but that would be a pretty sad thing 
to do in a database.
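If we had to, I suppose we could wrap each partner's update in an advisory 
lock so only one of them runs at a time; a rough sketch (the lock key and 
table/column names below are placeholders, not our real schema):

    BEGIN;
    -- take a cluster-wide advisory lock so these updates run one at a
    -- time instead of stepping on the same rows
    SELECT pg_advisory_lock(12345);
    UPDATE counts c
       SET total = s.total
      FROM (SELECT partner_id, sum(delta) AS total
              FROM partner_feed
             GROUP BY partner_id) s
     WHERE c.partner_id = s.partner_id;
    SELECT pg_advisory_unlock(12345);
    COMMIT;
    -- note: advisory locks are session-level, so it's the unlock (or the
    -- session ending), not the COMMIT, that releases the lock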
Realistically, when will we see this fixed (I understand it has pretty low 
priority...)?

Thanks a bunch for your time,

Dan Boeriu
Senior Architect - Roost.com
P: (415) 742 8056
Roost.com - 2008 Inman Award Winner for Most Innovative New Technology

-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Thu 7/30/2009 2:34 PM
To: Dan Boeriu
Cc: Robert Haas; Craig Ringer; PostgreSQL bugs
Subject: Re: [BUGS] BUG #4945: Parallel update(s) gone wild 
 
"Dan Boeriu" <dan.boe...@roost.com> writes:
> Attached is the reproducible test case - I was able to reproduce the problem 
> on 32 and 64 bit 8.3.6 and 8.4.0 RedHat 5.3 kernel 2.6.18-128.1.16.el5 #1 SMP

I looked at this a bit.  It's the same issue discussed at
http://archives.postgresql.org/pgsql-bugs/2008-09/msg00045.php
namely, that the second update finds itself trying to update a large
number of tuples that were already updated since its snapshot was taken.
That means it has to re-verify that the updated versions of those tuples
meet its WHERE qualification.  That's done by a function EvalPlanQual
that's pretty darn inefficient for complex queries like this one.
It's essentially redoing the join (and recomputing the whole sub-SELECT)
for each row that needs to be updated.
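
To see the shape of the problem (names here are invented, not taken from
your test case), imagine two sessions each running

    UPDATE counts c
       SET total = s.total
      FROM (SELECT partner_id, sum(clicks) AS total
              FROM raw_events
             GROUP BY partner_id) s
     WHERE c.partner_id = s.partner_id;

against overlapping rows.  Whichever UPDATE starts second blocks until the
first commits, and then EvalPlanQual re-evaluates the FROM/WHERE, sub-SELECT
included, once for every row the first one already touched.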

Someday I'd like us to redesign that mechanism, but don't hold
your breath ...

                        regards, tom lane
