On Mon, Nov 18, 2013 at 07:02:21PM +0530, Mayuresh wrote:
> Like keeping a local encrypted replica of each file and syncing this
> replica with remote?
Has anyone tried duplicity? It fits the bill. It creates encrypted chunks
and still minimizes network traffic for incremental backups, much the way
rsync does. Importantly, it does not expect the server to support any
special protocol: it can work over scp, rsync, ftp, Amazon S3 and perhaps
more, and of course with a local filesystem as the target.

I might use it a little differently: create the chunks locally and rsync
alternate ones to different cloud storage services if I want additional
security.

Because of its incremental storage style, it can restore a previous
snapshot by timestamp, and it can also restore a backup in parts
(individual files or directories).

The only drawback (rather, an apparent bug) I find is this:

1. Create a backup. Say you now realize you accidentally left a large
   file in the backed-up directory which you did not want to back up.

2. Delete the large file and take an incremental backup.

3. Use the CLI options (see the remove* commands, with --extra-clean,
   --force and whatever else goes with them) to eliminate the previous
   snapshot.

Even after those extra chunks are supposedly deleted, I do not see the
space used by the backup go down.

Barring this issue I find it a good solution. Web searches show
discussions of this problem, and some people claim that additional
options like --extra-clean --force worked for them, but nothing has
really worked for me.

If anyone gets a chance to try the above scenario (rough commands are in
the P.S. below) and succeeds in trimming the backup, please update the
thread.

Mayuresh.
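P.S. For anyone willing to try it, here is roughly what I mean. The
source and target paths (and the file:// URL) are only examples, and the
remove*/cleanup lines are the variants I have tried; adjust them for your
own setup and duplicity version:

  # 1. full backup; duplicity asks for a GnuPG passphrase unless
  #    PASSPHRASE is set in the environment
  $ duplicity full /home/me/data file:///mnt/backup/data

  #    ... notice a large unwanted file got included; remove it from
  #    /home/me/data ...

  # 2. incremental backup records the deletion
  $ duplicity incremental /home/me/data file:///mnt/backup/data

  # 3. try to drop the older snapshot that still contains the file
  $ duplicity remove-all-but-n-full 1 --force file:///mnt/backup/data
  $ duplicity cleanup --extra-clean --force file:///mnt/backup/data

  # the target never gets any smaller for me:
  $ du -sh /mnt/backup/data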