On 2013-05-06 13:07:17 -0400, Tom Lane wrote:
> I'm afraid that this is institutionalizing a design deficiency in
> pg_dump; namely that it takes its snapshot before acquiring locks.
I have suggested this before, but if pg_dump used SELECT FOR SHARE in the
queries it uses to build DDL, it would detect most, if not all, concurrent
modifications to most database objects, including tables. Sure, it would
error out, but that's far better than a silently corrupt dump:

S1: =# CREATE TABLE testdump();
S2: =# BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
S2: =# SELECT count(*) FROM pg_class; -- acquire snapshot
S1: =# ALTER TABLE testdump ADD COLUMN a text;
S2: =# SELECT * FROM pg_class cls
    -#   JOIN pg_attribute att ON (cls.oid = att.attrelid)
    -#   WHERE cls.oid = 'testdump'::regclass FOR UPDATE;
ERROR: could not serialize access due to concurrent update

The serialization failure could be caught and translated into an error
message explaining that concurrent DDL prevented pg_dump from working
correctly. I don't immediately see a case where that would prevent valid
backups from working.

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
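
A minimal sketch of how that translation could look on the client side,
using libpq (this is not pg_dump's actual code; the query, error wording,
and connection handling are made up for illustration): run one of the
catalog queries with FOR SHARE inside the REPEATABLE READ transaction and
map SQLSTATE 40001 (serialization_failure) to a message about concurrent
DDL.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        /* connection parameters come from the environment (PGHOST etc.) */
        PGconn     *conn = PQconnectdb("");
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            exit(1);
        }

        /* same isolation level pg_dump already uses for its snapshot */
        res = PQexec(conn, "BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ");
        PQclear(res);

        /* one of the catalog queries, with FOR SHARE added */
        res = PQexec(conn,
                     "SELECT * FROM pg_class cls "
                     "JOIN pg_attribute att ON (cls.oid = att.attrelid) "
                     "WHERE cls.oid = 'testdump'::regclass FOR SHARE");

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

            /* 40001 = serialization_failure, here caused by concurrent DDL */
            if (sqlstate && strcmp(sqlstate, "40001") == 0)
                fprintf(stderr, "dump aborted: concurrent DDL modified the "
                        "object while it was being dumped\n");
            else
                fprintf(stderr, "query failed: %s", PQresultErrorMessage(res));

            PQclear(res);
            PQfinish(conn);
            exit(1);
        }

        /* ... build the DDL from the returned rows as usual ... */
        PQclear(res);
        PQfinish(conn);
        return 0;
    }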