I have rsync, ssh, and cron glued together with Python into a push-based synchronization system. From a single location, I push content out to various offices, and I log stdout/stderr on the master server to make sure everything is running smoothly.
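For context, the master-side glue boils down to something like this (a simplified sketch; the function name, log path, and logging setup are illustrative, and the real script builds the rsync arguments from per-office config):

    import logging
    import subprocess

    logging.basicConfig(filename='/var/log/gsync/push.log',
                        format='%(asctime)s %(message)s',
                        level=logging.INFO)

    def push(ssh_target, remote_cmd):
        # Run the rsync as a remote ssh command and capture its
        # stdout/stderr on the master so it all lands in one log.
        proc = subprocess.Popen(['ssh', ssh_target, remote_cmd],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        for line in proc.stdout:
            logging.info(line.rstrip())
        return proc.wait()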
I would now like some of our "regional hubs" to take on some of the load (bandwidth-wise), while still retaining my centralized master for command execution and logging. So far, I simply send the desired rsync command as a remote ssh command. It looks like this:

    ssh [EMAIL PROTECTED] "rsync -ritzOK --delete-after --timeout=600 \
        --progress --stats --verbose --bwlimit='350' \
        --exclude-from='/home/gsync/.gsync/config/office-filters/sr' \
        --filter='. /home/gsync/.gsync/config/general-filters/junk' \
        --filter=':- gsync-filter' \
        '/s/cad/Revit/' [EMAIL PROTECTED]:'\"/s/cad/Revit\"'"

The idea is that I can push to our regional offices, and then have them push to their nearby peers using their own bandwidth, while still collecting all the output at the master.

The only problem I'm running into is this: if I kill the ssh session on the master, the remote rsync process continues to run on the intermediary server, pushing to the final destination. This can lead to multiple rsync jobs running from the intermediary to the same destination server, flooding it. I'm having difficulty figuring out what the appropriate mojo is to ensure that the rsync jobs die with their ssh connections.

Hope this is clear. Any advice?

Thanks,
Jimmie
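P.S. To make the failure mode concrete, the kill path is roughly this (again a sketch with illustrative names; gsync@hub stands in for the hub's address, and remote_cmd for the full rsync invocation quoted above). terminate() reliably takes down the local ssh, but the rsync it started on the hub just keeps going:

    import subprocess

    # Stands in for the full rsync invocation quoted above.
    remote_cmd = "rsync -ritzOK --delete-after ... '/s/cad/Revit/' ..."

    proc = subprocess.Popen(['ssh', 'gsync@hub', remote_cmd])
    try:
        proc.wait()
    except KeyboardInterrupt:
        proc.terminate()  # the local ssh dies here...
        proc.wait()       # ...but the hub's rsync carries on

Is the issue that, without a pty, nothing on the hub actually receives a signal when the connection drops?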