On 12/12/2017 10:14, marcelo wrote:
Hi Sam
You are right, and here is the reason behind my question: the server
where Postgres will be installed is not on 24/7. It is turned on in the
morning and shut down at the end of the day. The idea is that, as part
of the shutdown process, a local backup is made; the next day, that
backup is copied to the cloud.
To avoid lengthening the shutdown process, we are trying to limit
pg_dump to the databases that have actually changed, not so much in
their schema as in their data.
Of course, adding a trigger for every table and every CUD operation in
every database is not an option.
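
For the archives: one trigger-free way to approximate "dump only what
changed" is to snapshot the per-database transaction counters from
pg_stat_database at each shutdown and compare them with the previous
snapshot. The following is only a minimal sketch, assuming Python with
psycopg2 and local peer authentication; the paths are made up, and the
counters are a heuristic (they can be reset), so it errs on the side of
dumping when in doubt:

#!/usr/bin/env python3
"""Rough sketch: dump only the databases whose pg_stat_database
transaction counters changed since the last run.  The counters are a
heuristic (they can be reset, and they lag slightly behind commits)."""

import json
import subprocess
from pathlib import Path

import psycopg2  # assumed available; connection relies on local peer auth

STATE_FILE = Path("/var/backups/pg_dump_state.json")  # hypothetical path
DUMP_DIR = Path("/var/backups/dumps")                  # hypothetical path


def read_counters():
    """Return {datname: xact_commit + xact_rollback} for user databases."""
    conn = psycopg2.connect(dbname="postgres")
    try:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT datname, xact_commit + xact_rollback
                  FROM pg_stat_database
                 WHERE datname IS NOT NULL
                   AND datname NOT IN ('template0', 'template1')
            """)
            return dict(cur.fetchall())
    finally:
        conn.close()


def main():
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current = read_counters()
    DUMP_DIR.mkdir(parents=True, exist_ok=True)

    for db, counter in sorted(current.items()):
        if previous.get(db) == counter:
            print(f"skipping {db}: no transactions since the last backup")
            continue
        outfile = DUMP_DIR / f"{db}.dump"
        print(f"dumping {db} -> {outfile}")
        subprocess.run(
            ["pg_dump", "--format=custom", "--file", str(outfile), db],
            check=True,
        )

    # Re-read the counters after dumping: pg_dump's own read-only
    # transactions also increment xact_commit, so storing the pre-dump
    # values would make every dumped database look "changed" next time.
    # (The script's own connections have the same effect on the
    # maintenance database, so 'postgres' will usually be dumped anyway.)
    STATE_FILE.write_text(json.dumps(read_counters()))


if __name__ == "__main__":
    main()

Since the server is idle at shutdown, re-reading the counters after the
dumps is unlikely to hide real changes; on a busy system that window
would matter.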
On 11/12/17 23:23, Sam Gendler wrote:
I think there's a more useful question, which is: why do you want to
do this? If it is just about conditional backups, surely the cost of
backup storage is low enough, even in S3 or the like, that a duplicate
backup is an afterthought from a cost perspective? Before you start
jumping through hoops to make your backups conditional, I'd first do
some analysis and figure out what the real cost of the thing you're
trying to avoid actually is, since my guess is that you are deep into a
premature optimization <http://wiki.c2.com/?PrematureOptimization>
here: either the cost of a duplicate backup isn't consequential, or the
frequency of duplicate backups is effectively zero. If you decide that
there truly is value in conditionally backing up the db, it would
always be possible to run some kind of checksum on the backup and skip
storing it if it matches the previous backup's checksum. Sure, that
would still mean dumping a db that doesn't need to be dumped, but if
your write transaction rate is so low that backups end up being
duplicates on a regular basis, then surely you can afford the cost of a
pg_dump without any significant impact on performance?
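
A minimal sketch of that checksum-and-skip idea (file names are
hypothetical). One caveat: as far as I know, pg_dump's custom format
records a creation timestamp in the archive header, so identical data
is more likely to produce byte-identical output with the plain format:

#!/usr/bin/env python3
"""Skip uploading a dump whose checksum matches the previous one."""

import hashlib
from pathlib import Path

DUMP = Path("/var/backups/dumps/mydb.sql")           # hypothetical plain-format dump
CHECKSUM_FILE = Path("/var/backups/dumps/mydb.sha256")


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


def needs_upload() -> bool:
    new_sum = sha256_of(DUMP)
    old_sum = CHECKSUM_FILE.read_text().strip() if CHECKSUM_FILE.exists() else None
    if new_sum == old_sum:
        return False  # identical to the previous dump: nothing to upload
    CHECKSUM_FILE.write_text(new_sum)
    return True


if __name__ == "__main__":
    print("upload" if needs_upload() else "skip")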
On Mon, Dec 11, 2017 at 10:49 AM, Andreas Kretschmer
<andr...@a-kretschmer.de> wrote:
On 11.12.2017 at 18:26, Andreas Kretschmer wrote:
it's just a rough idea...
... and not perfect, because you can't capture DDL this way.
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
Hi, there are plenty of options for secure and optimized backup
solutions for your scenario.
Since you want to back up to the cloud, why not use pgBarman with
differential backups plus log (WAL) shipping?
It fits your scenario well: the backups add no downtime, and you can
shut down your database any time you want.
In my experience, differential backups (rsync with hard links) give
excellent performance and reduced storage (unchanged data is not stored
twice), and they work very well; Barman also provides automatic
rotation of old backups (you define the retention rules from a set of
options).
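
As an aside on how the differential part keeps storage down: rsync can
hard-link unchanged files against the previous backup via --link-dest,
which is essentially what Barman's reuse_backup = link setting does for
you. A rough illustration with hypothetical paths; note that rsync
alone against a running cluster's data directory is not a consistent
backup by itself, which is exactly the coordination (backup mode, WAL
archiving) that Barman handles:

#!/usr/bin/env python3
"""Illustration of hard-link differential copies with rsync --link-dest.
Paths are hypothetical; Barman does the equivalent internally."""

import subprocess
from datetime import datetime
from pathlib import Path

SOURCE = Path("/var/lib/postgresql/10/main")  # hypothetical data directory
BACKUP_ROOT = Path("/backups/base")           # hypothetical backup location


def take_differential_backup():
    BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
    previous = sorted(p for p in BACKUP_ROOT.iterdir() if p.is_dir())
    target = BACKUP_ROOT / datetime.now().strftime("%Y%m%dT%H%M%S")

    cmd = ["rsync", "-a", "--delete"]
    if previous:
        # Unchanged files become hard links to the newest previous backup,
        # so they take (almost) no additional disk space.
        cmd.append(f"--link-dest={previous[-1]}")
    cmd += [f"{SOURCE}/", str(target)]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    take_differential_backup()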
I've been using pgBarman since 1.4, and I'm very satisfied with it.
Just my 2c,
Edson Richter