On 07/28/2016 04:58 PM, Joe Conway wrote:
On 07/28/2016 03:16 PM, Bruce Momjian wrote:

Not really true. I ran into two separate cases on older (pre-9.3, I
believe) Postgres where, with hundreds of thousands of tables (in the
case I remember well, about 500k tables), the schema dump from the
old cluster basically never finished (ok, it was killed after about a
week). I had to find the patch that fixed a good bit of the slowness and
backport it to the older version so we could successfully run pg_upgrade
(in something like 14 hours instead of 7+ days).

Correct. I don't know if it is still true, but definitely pre-9.3, if you had lots and lots of tables you were looking at very long times just to start a dump. The thing is, although 500k tables is very rare, 10k tables isn't nearly as rare, and that would still take entirely too long.
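
If anyone wants to measure this on their own version, here is a rough
sketch. The table count, table name prefix, and database name below are
arbitrary placeholders, and the DO block needs 9.0 or later:

    -- create a pile of small, empty tables in a scratch database
    DO $$
    BEGIN
      FOR i IN 1..100000 LOOP
        EXECUTE format('CREATE TABLE dump_test_%s (id int)', i);
      END LOOP;
    END $$;

    -- then time a schema-only dump of that database from the shell
    $ time pg_dump --schema-only -d scratchdb > /dev/null

On the affected versions the dump time blows up as the table count
climbs; with the fix in place it should come back down to something
sane.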

Sincerely,

jD

--
Command Prompt, Inc.                  http://the.postgres.company/
                        +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.

