On Sun, 2007-09-23 at 17:46 -0400, Matt McCutchen wrote:
> On 9/23/07, WebTent <[EMAIL PROTECTED]> wrote:
> Not necessarily. Depending on how pg_dump works, it could be that
> small changes to the database are resulting in unnecessarily large
> changes to the dump. Make sure you are using the uncompressed format
> because most compression algorithms defeat the delta-transfer
> algorithm almost completely. Then you might take a look at two
> consecutive dumps and check whether records common to both appear in
> the same order in each dump. (If pg_dump is dumping the records in a
> different order each time, that would also defeat the delta-transfer
> algorithm because no block of several consecutive records could be
> matched.)
Yeah, I understand what you're saying. What blows me away is that I have cwRsync keeping two MSSQL backups in sync, and that works great: Microsoft SQL Server's backup produces a compressed binary file, yet rsync still matches the data very well, and each transfer takes a fraction of the time of the initial one. But this larger compressed pgsql backup behaves completely differently. I think I've already tried a plain-text backup, but I'll run one tonight, and if it still doesn't match any better, I'll see if I can find out why on the PostgreSQL list this week. Thanks again for the insight!

-- Robert
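
P.S. For the archives, here's roughly what I plan to try tonight. This is a sketch only; "mydb", "backuphost", and the paths are placeholders for my actual setup:

  # Uncompressed plain-text dump ("plain" is pg_dump's default format,
  # spelled out here for clarity):
  pg_dump --format=plain mydb > /backups/mydb.sql

  # Push it and watch the stats. rsync's -z compresses on the wire only,
  # so it saves bandwidth without defeating the delta-transfer matching:
  rsync -avz --stats /backups/mydb.sql backuphost:/backups/

  # To see whether consecutive dumps keep records in the same order,
  # diff yesterday's dump against today's; a diff nearly the size of
  # the dump after only a few data changes means the order is unstable:
  diff /backups/mydb.sql.yesterday /backups/mydb.sql | wc -l

And Matt's point about compression is easy to see in miniature: change one byte near the start of a file, gzip the before and after versions, and nearly every compressed byte from the change onward differs, leaving rsync nothing to match:

  seq 1 100000 > a.txt
  sed 's/^5$/five/' a.txt > b.txt    # change one early record
  gzip -c a.txt > a.gz
  gzip -c b.txt > b.gz
  cmp -l a.gz b.gz | wc -l           # counts byte positions that differ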