Thanks. What is the recommended command/options for a backup, and how do
I restore from it?

I found the command below online. Let me know if this is better, and how
to restore from it.
Thank you

pg_dump -Fc '<Db-Name>' | xz -3 > dump.xz
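
My guess for the restore, untested ( assuming the target database
already exists; '<Db-Name>' is a placeholder ), would be:

xz -dc dump.xz | pg_restore -d '<Db-Name>'

Is that correct?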


On Fri, Oct 16, 2015 at 4:05 AM, Francisco Olarte <fola...@peoplecall.com>
wrote:

> On Fri, Oct 16, 2015 at 8:27 AM, Guillaume Lelarge
> <guilla...@lelarge.info> wrote:
> > 2015-10-15 23:05 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> >> On 10/15/2015 01:35 PM, anj patnaik wrote:
> ...
> >>> ./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump
> >>> Are there any other options for large tables to run faster and occupy
> >>> less disk space?
> >> Yes, do not double compress. -Fc already compresses the file.
> > Right. But I'd say "use custom format but do not compress with
> > pg_dump". Use the -Z0 option to disable compression, and use an
> > external multi-threaded tool such as pigz or pbzip2 to get faster
> > and better compression.
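> >
> > For instance, something along these lines ( just a sketch; mydb is
> > a placeholder database name, and -p sets pigz's thread count ):
> >
> > pg_dump -Fc -Z0 mydb | pigz -p 4 > dump.fc.gz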
>
> Actually I would not recommend that, unless you are making a long-term
> or offsite copy. Doing it means you need to decompress the dump before
> restoring or even testing it ( e.g., via pg_restore > /dev/null ).
>
> And if you are pressed for disk space, that approach may corner you
> into a situation where you do NOT have enough disk space for an
> uncompressed dump. Given you are normally nervous enough when
> restoring, for normal operations I think the built-in compression is
> better.
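>
> For instance ( mydb being a placeholder database name ), the built-in
> route is simply:
>
> pg_dump -Fc mydb > db.dump
> pg_restore -d mydb db.dump
>
> with no separate decompression step in between.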
>
> Also, I'm not current with the compressor -Fc uses; I think it is
> still gzip, which is not that bad and is normally quite fast. ( In
> fact I do not use pbzip2, but I did some tests about a year ago and
> found bzip2 was beaten by xz quite easily. That means on every level
> of bzip2, one of the levels of xz beat it in BOTH size & time; that
> was for my data, YMMV. )
>
>
> Francisco Olarte.
>
