Simon Riggs <si...@2ndquadrant.com> writes:
> Recent work on parallel query has opened my eyes to exactly how
> frequently we request locks on various catalog tables. (Attached file
> is a lock analysis on a representative Pg server.)
> Given these are catalog tables, we aren't doing much to them that
> requires a strong lock. Specifically, only CLUSTER and VACUUM FULL
> touch those tables like that. When we do that, pretty much everything
> else hangs, cos you can't get much done while fundamental tables are
> locked.

So don't do that --- I'm not aware that either operation is ever
considered recommended on catalogs.

> So my proposal is that we invent a "big catalog lock". The benefit of
> this is that we greatly reduce lock request traffic, as well as being
> able to control exactly when such requests occur. (Fine grained
> locking isn't always helpful).

> Currently, SearchCatCache() requests locks on individual catalog
> tables. Alternatively, we could request an AccessShareLock on a "big
> catalog lock" that must be accessed first before a strong
> relation-specific lock is requested. We just need to change the lockid
> used for each cache.

I doubt that this can ever be safe, because it will effectively assume
that all operations on catalog tables are done by code that knows that
it is accessing a catalog. What about manual DML, or even DDL, on a
catalog? Miss even one place that can modify a table, and you have a
problem.

More to the point, how would using a big lock not make the contention
situation *worse* rather than better? At least if you decide you need
to cluster pg_statistic, you aren't blocking sessions that don't need
to touch pg_statistic --- and furthermore, they aren't blocking you.
I think the proposal would render it completely impossible to ever get
a strong lock on a catalog table in a busy system, not even a
little-used catalog. In fact, since we can assume that a transaction
trying to do "CLUSTER pg_class" will have touched at least one syscache
during startup, this proposal would absolutely guarantee that would
fail (even in a completely idle system), because it would already hold
the BigLock, and that would have to be seen as existing use of the
table.
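The contention argument can be sketched with a toy model (Python here for brevity; the names BigLock, s1..s3, and the helper functions are illustrative, not PostgreSQL source identifiers). The one real rule it encodes is that AccessExclusiveLock conflicts with every other lock mode, including AccessShareLock:

```python
ACCESS_SHARE = "AccessShareLock"
ACCESS_EXCLUSIVE = "AccessExclusiveLock"

def conflicts(held, requested):
    # Simplified version of PostgreSQL's lock conflict table:
    # AccessExclusiveLock conflicts with every mode, including AccessShareLock.
    return ACCESS_EXCLUSIVE in (held, requested)

def blockers(sessions, rel, mode):
    """Other sessions whose held lock on `rel` conflicts with a request for `mode`."""
    return sorted(s for s, locks in sessions.items()
                  if rel in locks and conflicts(locks[rel], mode))

# Per-table locking: each session holds AccessShareLock only on the
# catalogs its syscache lookups actually touched.
per_table = {
    "s1": {"pg_class": ACCESS_SHARE},
    "s2": {"pg_attribute": ACCESS_SHARE},
    "s3": {"pg_statistic": ACCESS_SHARE},
}

# A CLUSTER of pg_statistic waits only on the session using pg_statistic.
print(blockers(per_table, "pg_statistic", ACCESS_EXCLUSIVE))  # ['s3']

# Big-lock scheme: every syscache access instead takes AccessShareLock
# on one shared BigLock.
big_lock = {s: {"BigLock": ACCESS_SHARE} for s in per_table}

# The same CLUSTER now needs AccessExclusiveLock on BigLock and waits on
# every session that touched *any* catalog.
print(blockers(big_lock, "BigLock", ACCESS_EXCLUSIVE))  # ['s1', 's2', 's3']
```

Under the big-lock scheme the strong-lock request conflicts with every concurrent catalog reader, not just readers of the one table being clustered, which is the "completely impossible in a busy system" outcome described above.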
			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers