On Thu, 03 Jul 2014 10:04:12 -0700
David Wall <d.w...@computer.org> wrote:

> I'm running PG 9.3.4 on CentOS 6.4 and noted that backing up my
> database takes much longer than restoring it.
> 
> That seems counter-intuitive to me because reading from a database
> should generally be faster than writing to it.
> 
> I have a database that pg_database_size reports as 18GB, and the
> resulting dump is about 13GB in 27 files (split creates them as 512MB
> each).
> 
> A pg_dump backup -- with most of the data stored as large objects -- 
> takes about 5 hours.
> 
> But restoring that dump takes about 2 hours.  So it's taking 2.5
> times longer to back it up than to restore it.
> 
> My backup script runs vacuumlo, then vacuum, then analyze, then
> pg_dump 
> --format=c --oids $DB
> 
> I actually push the pg_dump output through gzip, gpg and split into
> 512MB files, but I figure those shouldn't matter too much since I have
> to run cat, gpg and gunzip before pg_restore.  In fact, my restore
> should have been at a disadvantage because I used '-v', which streamed
> a line for each large object OID to my ssh terminal over the Internet,
> and postgresql.conf had 'ddl' logging on (which I suspect I can turn
> off in future restores to speed things up a bit).
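
For concreteness, a minimal sketch of the backup and restore pipeline as I
read it; the database name, gpg recipient and file prefix are placeholders,
not taken from the original message:

  #!/bin/sh
  DB=mydb                                  # placeholder database name
  vacuumlo "$DB"                           # remove orphaned large objects
  vacuumdb --analyze "$DB"                 # the vacuum and analyze steps (one vacuumdb call)
  pg_dump --format=c --oids "$DB" \
    | gzip \
    | gpg --encrypt --recipient backups@example.com \
    | split -b 512M - "$DB".dump.

  # corresponding restore: reassemble, decrypt, decompress, then pg_restore
  cat "$DB".dump.* | gpg --decrypt | gunzip | pg_restore -v -d "$DB"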
> 
> Is there something that might be wrong about my configuration that
> the backup is slower than the restore?

No, there's nothing wrong. All compressed data stored in the database
(TOAST, large objects, etc.) is transparently decompressed as pg_dump
reads it, and then you compress it all over again with gzip. I don't know
why pg_dump doesn't dump the compressed data directly.
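
If most of those 18GB live in large objects, a rough way to see how much
data goes through that decompress-and-recompress round trip is to check the
size of pg_largeobject (the database name here is again a placeholder):

  psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));"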

> 
> Thanks,
> David


---   ---
Eduardo Morras <emorr...@yahoo.es>


