Hi Peter
Thanks for the info, and thanks to the entire forum for the input. I fired
up a pg_dump last night, piping it through gzip and splitting the output
into 1 TB chunks. Will let you all know how it goes.
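
For anyone following along, the pipeline was roughly this (database name
and file prefix are made up here, and the exact size flag for split may
vary by platform; this is GNU split):

    pg_dump mydb | gzip | split -b 1T - mydb.sql.gz.part_

If it works out, the restore should just be the chunks concatenated back
together:

    cat mydb.sql.gz.part_* | gunzip | psql mydb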



On Sat, 16 May 2020, 18:12 Peter J. Holzer, <hjp-pg...@hjp.at> wrote:

> On 2020-05-15 14:02:46 +0100, Rory Campbell-Lange wrote:
> > On 15/05/20, Suhail Bamzena (suhailsa...@gmail.com) wrote:
> > > I have very recently inherited an 18 TB DB that is running version 9.2.
> > > Apparently this database has never been backed up
> [...]
> > A very simple solution could be just to dump the database daily with
> > pg_dump, if you have the space and machine capacity to do it. Depending
> > on what you are storing, you can achieve good compression with this, and
> > it is a great way of having a simple file from which to restore a
> > database.
> >
> > Our ~200GB cluster compresses down to under 10GB of pg_dump files,
> > although 18TB is a different order of magnitude.
>
> I love pg_dump (especially the -Fd format), but for a database of that
> size it might be too slow. Ours is about 1 TB, and «pg_dump --compress=5
> -Fd» takes a bit over 2 hours. Extrapolating to 18 TB, that would be
> 40 hours ...
>
> And restoring the database takes even longer, because the dump contains
> only the table data, so all the indexes have to be rebuilt from scratch.
>
> Still - for a first backup, just firing off pg_dump might be the way to
> go. Better to have a backup in two days than still none after two weeks
> because you are still evaluating the fancier alternatives.
>
>         hp
>
> --
>    _  | Peter J. Holzer    | Story must make more sense than reality.
> |_|_) |                    |
> | |   | h...@hjp.at         |    -- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |       challenge!"
>
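
(And if the plain-format dump turns out too slow, I'll try your -Fd
suggestion next. A rough sketch of what I have in mind, with made-up
paths, and assuming a 9.3+ pg_dump client so parallel jobs can be used
against the 9.2 server:

    pg_dump -Fd --compress=5 -j 8 -f /backups/mydb.dir mydb

with a matching parallel restore:

    pg_restore -j 8 -d mydb /backups/mydb.dir
)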
