Stephen Frost <sfr...@snowman.net> writes:
> * Robert Haas (robertmh...@gmail.com) wrote:
>> But in Rushabh's example, he's not doing that.  He's trying to do a
>> full-database dump of a database that contains one object which the
>> dump user has rights to access.  Previously, that worked.  Now, it
>> fails with an error about a system catalog.  How is that not broken?

> As I mentioned up-thread, the optimization to skip tables that are not
> "interesting" has been improved in the patch set posted this morning to
> skip over tables whose ACLs haven't been changed from the defaults.
> With that patch, we will skip over catalog tables whose ACLs are still
> at their defaults, and Rushabh's command will work as a non-superuser,
> so long as no ACLs on tables in pg_catalog have been changed.

> However, if any of the ACLs have been changed on tables in pg_catalog,
> we'll attempt to lock those tables and include those ACLs.  That will
> still work in many cases, since you only need SELECT privilege to lock
> a table in ACCESS SHARE mode; but if the permissions on pg_authid have
> been changed, the same failure will occur.
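
(For concreteness, pg_dump takes its per-table locks by issuing
something like

    LOCK TABLE pg_catalog.pg_authid IN ACCESS SHARE MODE;

and since ACCESS SHARE mode requires SELECT privilege on the target,
a non-superuser dump will fail with a permission-denied error as soon
as pg_authid shows up in the set of tables to be locked.)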

I think this is a bad idea, not only because of the permissions-failure
issue, but because the more things pg_dump locks, the greater its risk
of deadlock failures against other sessions.
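
To illustrate the hazard with two hypothetical tables a and b: if
pg_dump locks a and then b while a concurrent transaction touches them
in the opposite order, neither session can proceed:

    -- session 1 (pg_dump)                 -- session 2
    LOCK TABLE a IN ACCESS SHARE MODE;
                                           BEGIN;
                                           ALTER TABLE b ... ;  -- ACCESS EXCLUSIVE on b
                                           ALTER TABLE a ... ;  -- blocks on session 1
    LOCK TABLE b IN ACCESS SHARE MODE;     -- blocks on session 2: deadlock

Every additional table pg_dump locks widens the set of sessions it can
deadlock against.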

Why is it that we need to lock a table at all if we're just going to dump
its ACL?  I understand the failure modes that motivate locking when we're
going to dump data or schema, but the ACL is not really subject to that
kind of problem: we are going to see a unitary, unchanging view of
pg_class.relacl in our snapshot, and we aren't relying on any server-side
logic to interpret that AFAIR.
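
As a sketch of the lock-free approach I have in mind (assuming we run
it under the repeatable-read snapshot pg_dump already establishes):

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT c.oid, c.relname, c.relacl
      FROM pg_catalog.pg_class c
     WHERE c.relacl IS NOT NULL;
    COMMIT;

No lock is taken, yet the snapshot guarantees that all the relacl
values we read are mutually consistent, no matter what GRANT/REVOKE
traffic is happening concurrently.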

                        regards, tom lane

