On 2012-12-12 18:52:33 -0500, Tom Lane wrote:
> Andres Freund <and...@2ndquadrant.com> writes:
> > On 2012-12-12 12:13:44 +0100, Andres Freund wrote:
> >> This morning I wondered whether we couldn't protect against that by
> >> acquiring share locks on the catalog rows pg_dump reads; that would
> >> result in "could not serialize access due to concurrent update" type
> >> errors, which would be easily discernible/translatable.
> >> While pretty damn ugly, that should take care of most of those
> >> issues, shouldn't it?
>
> How would it fix anything?  The problem is with DDL that's committed and
> gone before pg_dump ever gets to the table's pg_class row.  Once it
> does, and takes AccessShareLock on the relation, it's safe.  Adding a
> SELECT FOR SHARE step just adds more time before we can get that lock.

Getting a FOR SHARE lock ought to error out with a serialization failure
if the row was updated since our snapshot was taken, given that pg_dump
uses repeatable read/serializable. That obviously doesn't fix the
situation, but making it detectable in a safe way seems good enough
to me.
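
Roughly how that failure mode looks on an ordinary table (the table
"t" and the two sessions are made up for illustration; IIRC FOR SHARE
is currently rejected on system catalogs, so doing this on pg_class et
al. would need that restriction relaxed):

    -- setup: a hypothetical demo table
    CREATE TABLE t (id int PRIMARY KEY, v int);
    INSERT INTO t VALUES (1, 0);

    -- session 1: take a repeatable-read snapshot, like pg_dump does
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM t;   -- forces the snapshot to be taken

    -- session 2: stand-in for concurrent DDL; commits immediately
    UPDATE t SET v = v + 1 WHERE id = 1;

    -- session 1: now lock the row we are about to read
    SELECT * FROM t WHERE id = 1 FOR SHARE;
    -- ERROR:  could not serialize access due to concurrent update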

> Also, locking the pg_class row doesn't provide protection against DDL
> that doesn't modify the relation's pg_class row, of which there is
> plenty.

Well, that's why I thought of pg_class, pg_attribute and pg_type. Maybe
that list needs to be extended a bit, but I think just those three
should detect most dangerous situations.
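
As a sketch of why that catches most DDL: almost any ALTER creates a
new row version in one of those catalogs, which is exactly what the
FOR SHARE would then trip over. E.g. (table name and fillfactor value
made up for illustration):

    CREATE TABLE demo (id int);
    SELECT xmin FROM pg_class WHERE oid = 'demo'::regclass;
    ALTER TABLE demo SET (fillfactor = 90);
    SELECT xmin FROM pg_class WHERE oid = 'demo'::regclass;
    -- xmin differs: the pg_class row was updated by the ALTER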

Greetings,

Andres Freund

--
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

