I think there's a more useful question here, which is: why do you want to do
this? If it is just about conditional backups, surely the cost of backup
storage is low enough, even in S3 or the like, that a duplicate backup is
an afterthought from a cost perspective. Before you start jumping through
hoops to make your backups conditional, I'd first do some analysis and
figure out what the real cost of the thing you're trying to avoid actually
is, since my guess is that you are deep into premature optimization
<http://wiki.c2.com/?PrematureOptimization> here: either the cost of the
duplicate backup isn't consequential, or the frequency of duplicate
backups is effectively zero.

If you decide there truly is value in conditionally backing up the db, you
could always run some kind of checksum on the dump and skip storing it
when it matches the previous backup's checksum. Sure, that still means
dumping a db that doesn't need to be dumped, but if your write transaction
rate is so low that backups end up being duplicates on a regular basis,
then surely you can afford the cost of a pg_dump without any significant
impact on performance?
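
To make the checksum idea concrete, here is a rough, untested sketch in
Python of what I mean; the database name, file paths, and the "ship it"
step are placeholders you'd fill in. It assumes a plain-format dump, which
should be byte-identical when nothing has changed, whereas the custom
format (as far as I recall) embeds a creation timestamp in its header and
so would never hash the same twice:

import hashlib
import subprocess
from pathlib import Path

DUMP_PATH = Path("/backups/mydb.sql")          # placeholder paths
CHECKSUM_PATH = Path("/backups/mydb.sha256")

def take_backup() -> bool:
    """Dump the db; keep the dump only if it differs from the last one."""
    with DUMP_PATH.open("wb") as out:
        subprocess.run(["pg_dump", "--format=plain", "mydb"],
                       stdout=out, check=True)

    new_sum = hashlib.sha256(DUMP_PATH.read_bytes()).hexdigest()
    old_sum = CHECKSUM_PATH.read_text().strip() if CHECKSUM_PATH.exists() else ""

    if new_sum == old_sum:
        DUMP_PATH.unlink()          # identical to the previous backup: drop it
        return False

    CHECKSUM_PATH.write_text(new_sum)
    # ... copy DUMP_PATH to S3 or wherever your backups live ...
    return True

if __name__ == "__main__":
    print("stored new backup" if take_backup() else "skipped duplicate backup")

Note that you still pay for the pg_dump itself on every run; all this
saves is the storage and transfer of the duplicate copy, which is exactly
why I'd measure that cost before bothering.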

On Mon, Dec 11, 2017 at 10:49 AM, Andreas Kretschmer <
andr...@a-kretschmer.de> wrote:

>
>
> Am 11.12.2017 um 18:26 schrieb Andreas Kretschmer:
>
>> it's just a rough idea...
>>
>
> ... and not perfect, because you can't capture ddl in this way.
>
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
