Hello!
I am upgrading my databases to version 9.1.
Today I ran into a new challenge.
The database itself is not very big (60 GB), but it has ~164,000 tables in
1,260 schemas.

I have tried pg_upgrade, pg_dumpall, and pg_dump.
They all run for a very, very long time, longer than I have patience for.
One pg_dump run went for almost a day and then failed with "out of memory"
(a colleague ran that test, though, so I cannot vouch for the details).

I also tried a schema-only dump (pg_dump -s), but that run lasted more than
17 hours without finishing, and I ran out of patience.

If I am not mistaken, all of these programs follow roughly the same
algorithm: at the start they lock all the tables and indexes in all the
schemas, and only then proceed.

So I do not see a way to significantly speed up the process.

The only idea I have come up with is to dump each schema separately.
But would that be the right approach?
Can anyone tell me what to do?
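What I have in mind is something like the following (a dry-run sketch; the
database name "mydb" and the schema names are placeholders, and in a real run
the schema list would come from the catalog and the leading echo would be
removed):

```shell
#!/bin/sh
# Dry-run sketch of a per-schema dump.  "mydb" and the schema names
# below are placeholders; a real run would list schemas with:
#   psql -At -d mydb -c "SELECT nspname FROM pg_namespace
#     WHERE nspname NOT LIKE 'pg\\_%' AND nspname <> 'information_schema'"
DB=mydb
SCHEMAS="schema_a schema_b"

for schema in $SCHEMAS; do
    # -Fc: custom format, so each schema can be restored with pg_restore;
    # remove the leading "echo" to actually run the dumps.
    echo pg_dump -Fc -n "$schema" -f "$schema.dump" "$DB"
done
```

Each pg_dump invocation would then only have to lock the tables of one
schema at a time, instead of all 164,000 at once.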


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
