On Tue, Apr 24, 2018 at 10:50 AM, David Gauthier <davegauthie...@gmail.com>
wrote:

> Typically, I would think of doing a weekly full backup and daily
> incremental backups, and turning on journaling to capture what goes on
> since the last backup.
>

This is essentially the concept behind the streaming replication built into
postgres, except that you archive the WAL stream instead of applying it. If
you have atomic file system snapshots, you can implement the strategy like
this: mark the DB for a binary base backup, snapshot the file system, and
copy that snapshot off to another system (locally or off-site); meanwhile,
accumulate the log files just as you would for streaming replication. Once
the copy is done, release the file system snapshot and keep archiving the
logs, much as you would ship them to a remote system for being applied. You
just don't apply them until you need to do a recovery.
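
Concretely, with 2018-era postgres and an LVM-backed data directory, a
rough sketch looks like this (the volume names, the "backuphost" target,
and the rsync paths are placeholders, not a drop-in script):

    # postgresql.conf: archive every WAL segment as it fills
    wal_level = replica
    archive_mode = on
    archive_command = 'rsync -a %p backuphost:/wal_archive/%f'

    -- in psql: mark the cluster for a binary base backup
    SELECT pg_start_backup('weekly_base', true);

    # take the atomic snapshot, then copy it off-box
    lvcreate --snapshot --size 10G --name pgsnap /dev/vg0/pgdata
    mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
    rsync -a /mnt/pgsnap/ backuphost:/base_backup/
    umount /mnt/pgsnap && lvremove -f /dev/vg0/pgsnap

    -- release the backup mark once the copy completes
    SELECT pg_stop_backup();

To recover, restore the base copy, point restore_command in recovery.conf
at the WAL archive, and let postgres replay up to the failure.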

Or just set up streaming replication to a hot standby, because that's the
right thing to do. For over a decade I ran twin servers with slony1
replication; the cost of the duplicate hardware was nothing compared to the
cost of downtime.
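
The built-in streaming version takes only a few lines these days (a minimal
sketch against 2018-era postgres; the "primary"/"standby" host names and
the "repl" role are assumptions for illustration):

    # on the primary: postgresql.conf
    wal_level = replica
    max_wal_senders = 3
    # pg_hba.conf:  host  replication  repl  <standby-ip>/32  md5

    # on the standby: clone the primary, then start postgres
    pg_basebackup -h primary -U repl -D /var/lib/postgresql/data -R -X stream
    # -R writes recovery.conf with standby_mode = 'on' and primary_conninfo;
    # set hot_standby = on so the standby accepts read-only queries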
