2017-12-20 21:11 GMT+01:00 Robert Haas <robertmh...@gmail.com>:

> On Tue, Dec 19, 2017 at 5:37 PM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
> > On 12/18/2017 11:18 AM, Anastasia Lubennikova wrote:
> >> 1. Use file modification time as a marker that the file has changed.
> >> 2. Compute file checksums and compare them.
> >> 3. LSN-based mechanisms. Backup pages with LSN >= last backup LSN.
> >> 4. Scan all WAL files in the archive since the previous backup and
> >> collect information about changed pages.
> >> 5. Track page changes on the fly. (ptrack)
> >
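[To make option 3 concrete: it amounts to filtering data files by the pd_lsn stored in each page header. A minimal sketch, assuming the default 8 kB BLCKSZ and a little-endian server; the function name and file path handling are illustrative, not an actual backup-tool API:]

```python
import struct

BLCKSZ = 8192  # PostgreSQL default page size (compile-time option)

def changed_pages(relfile_path, last_backup_lsn):
    """Yield (block_no, page_bytes) for pages whose pd_lsn is at or
    past the LSN of the previous backup (option 3 from the list).

    pd_lsn is the first 8 bytes of the page header, stored as two
    32-bit words: xlogid (high half) and xrecoff (low half), in the
    server's native byte order (little-endian assumed here).
    """
    with open(relfile_path, "rb") as f:
        blkno = 0
        while True:
            page = f.read(BLCKSZ)
            if not page:
                break
            xlogid, xrecoff = struct.unpack_from("<II", page, 0)
            page_lsn = (xlogid << 32) | xrecoff
            if page_lsn >= last_backup_lsn:
                yield blkno, page
            blkno += 1
```

[Note this still has to reread every data file to look at the headers, which is exactly the concern raised below.]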
> > I share the opinion that options 1 and 2 are not particularly
> > attractive, due to either unreliability, or not really saving that much
> > CPU and I/O.
> >
> > I'm not quite sure about 3, because it doesn't really explain how would
> > it be done - it seems to assume we'd have to reread the files. I'll get
> > back to this.
> >
> > Option 4 has some very interesting features. Firstly, relies on WAL and
> > so should not require any new code (and it could, in theory, support
> > even older PostgreSQL releases, for example). Secondly, this can be
> > offloaded to a different machine. And it does even support additional
> > workflows - e.g. "given these two full backups and the WAL, generate an
> > incremental backup between them".
> >
> > So I'm somewhat hesitant to proclaim option 5 as the clear winner, here.
>
> I agree.  I think (4) is better.
>

Could it depend on the load? For smaller, intensively updated databases option
5 may be optimal, while for large, less frequently updated databases option 4
may be better.
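For illustration, option 4 could be prototyped on top of pg_waldump output
rather than new server code: run pg_waldump over the WAL between the two
backup points and collect the block references it prints. A rough sketch of
the parsing half; the exact output format varies across versions, so the
regex here is an assumption:

```python
import re

def parse_block_refs(waldump_output):
    """Extract (tablespace, database, relfilenode, block) tuples from
    pg_waldump text output (a sketch of option 4: derive the changed-page
    set from the WAL archive).  Block references look roughly like:

        blkref #0: rel 1663/16384/16385 fork main blk 7

    The 'fork ...' part is optional in some output lines.
    """
    pattern = re.compile(r"rel (\d+)/(\d+)/(\d+)(?: fork \w+)? blk (\d+)")
    return {tuple(int(g) for g in m.groups())
            for m in pattern.finditer(waldump_output)}
```

The resulting set is exactly what an incremental backup needs to decide which
pages to copy, and since pg_waldump only needs the WAL files, this part can
run on a machine other than the primary.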

Regards

Pavel


> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
