"Joshua D. Drake" <[EMAIL PROTECTED]> writes: > We ran into a problem with a customer this weekend. They had >128,000 > tables and we were trying to run a pg_dump. When we reached > max_locks_per_transaction, the dump just hung waiting to lock the next > table.
> Would it make sense to have some sort of timeout for that?

I don't think you have diagnosed this correctly.  Running out of lock
table slots generates an "out of shared memory" error, with a HINT that
you might want to increase max_locks_per_transaction.  If you can prove
otherwise, please supply a test case.

			regards, tom lane
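
As a rough illustration of what such a test case might look like (a
sketch only, assuming a server new enough to support DO blocks and
format(); none of this is from the original report), one can mimic what
pg_dump does by taking an ACCESS SHARE lock on every user table inside
a single transaction and watching for the error and HINT described
above:

    BEGIN;
    DO $$
    DECLARE
        rel regclass;
    BEGIN
        FOR rel IN
            SELECT c.oid::regclass
            FROM pg_class c
            JOIN pg_namespace n ON n.oid = c.relnamespace
            WHERE c.relkind = 'r'
              AND n.nspname NOT IN ('pg_catalog', 'information_schema')
        LOOP
            -- With enough tables relative to the lock table size, this
            -- should eventually fail with:
            --   ERROR:  out of shared memory
            --   HINT:  You might need to increase max_locks_per_transaction.
            EXECUTE format('LOCK TABLE %s IN ACCESS SHARE MODE', rel);
        END LOOP;
    END;
    $$;
    COMMIT;

If the session instead blocks indefinitely rather than erroring out,
that would be the behavior worth reporting with a reproducible case.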