"Kevin Grittner" <kevin.gritt...@wicourts.gov> writes:
> Tom Lane <t...@sss.pgh.pa.us> wrote:
>> It might be better to try a test case with lighter-weight objects,
>> say 5 million simple functions.

> Said dump ran in about 45 minutes with no obvious stalls or
> problems.  The 2.2 GB database dumped to a 1.1 GB text file, which
> was a little bit of a surprise.
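(For anyone wanting to reproduce a test along these lines: a generated script of trivial CREATE FUNCTION statements can be fed to psql and then timed through pg_dump. This is only a sketch; the function name prefix and bodies are invented, and the count is parameterized so you can scale up to the 5 million used above.)

```python
# Sketch: emit a SQL script that creates many trivial SQL functions,
# roughly matching the "5 million simple functions" test case.
# "testfunc_" and the SELECT bodies are made-up illustration names.

def write_script(path, n):
    with open(path, "w") as f:
        for i in range(n):
            f.write(
                f"CREATE FUNCTION testfunc_{i}() RETURNS integer\n"
                f"  LANGUAGE sql AS 'SELECT {i}';\n"
            )

# Small count for demonstration; bump n to 5_000_000 for the real test,
# then load with "psql -f make_functions.sql" and time "pg_dump".
write_script("make_functions.sql", n=1000)
```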
Did you happen to notice anything about pg_dump's memory consumption?  For an all-DDL case like this, I'd sort of expect the memory usage to be comparable to the output file size.

Anyway, this seems to suggest that we don't have any huge problem with large numbers of archive TOC objects, so the next step probably is to look at how big a code change it would be to switch over to TOC-per-blob.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers