On Mar 17, 2008, at 9:55 AM, Tom Lane wrote:

"Kynn Jones" <[EMAIL PROTECTED]> writes:
I'm leaning towards the re-design option, primarily because I don't really understand the consequences of cranking up max_locks_per_transaction. E.g., why is its default value 2^6, instead of, say, 2^15? In fact, why is there a ceiling on the number of locks at all?

Because the size of the lock table in shared memory has to be set at
postmaster start.
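
For what it's worth, the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries, and pg_dump takes an AccessShareLock on every table it dumps, which is why dumping tens of thousands of tables can exhaust it. If you do go the brute-force route, it's a one-line postgresql.conf change plus a postmaster restart. A rough sketch (the 1024 below is purely illustrative, not a recommendation):

    -- To raise the ceiling, edit postgresql.conf and restart the postmaster, e.g.
    --   max_locks_per_transaction = 1024    # default is 64
    -- To see how many lock slots the current settings reserve:
    SELECT current_setting('max_locks_per_transaction')::int
         * (current_setting('max_connections')::int
            + current_setting('max_prepared_transactions')::int) AS lock_table_slots;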

There are people running DBs with a couple hundred thousand tables,
but I don't know what sorts of performance problems they face when
they try to run pg_dump.  I think most SQL experts would suggest
a redesign: if you have lots of essentially identical tables the
standard advice is to fold them all into one table with one more
key column.

That's me! Our dumps currently take about 36 hours, but what's more alarming is that a vanilla restore takes about 4 days. And, yes, a redesign is currently in the works :) However, for Kynn's case, I doubt he'll have much trouble with 35,000 tables as long as that number stays fairly static and his design doesn't rely on it growing, which is what ours currently does.
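
To make Tom's suggestion concrete, here's a minimal sketch of the folding, with made-up names: instead of one table per dataset (say measurements_0001, measurements_0002, ...), you keep a single table and add one more key column identifying the dataset:

    CREATE TABLE measurements (
        dataset_id  integer NOT NULL,   -- the extra key column
        probe_id    integer NOT NULL,
        value       double precision,
        PRIMARY KEY (dataset_id, probe_id)
    );

    -- A query that used to read from measurements_0001 becomes:
    SELECT probe_id, value FROM measurements WHERE dataset_id = 1;

With that layout, pg_dump locks one table instead of 35,000, and the schema no longer grows with the data.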

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



