Try wrapping your rsync in a script that has some sort of lock file mechanism:
    [ -f /var/tmp/rsync_is_running ] && exit 0
    touch /var/tmp/rsync_is_running
    rsync -avz --delete rsyncsrv1::ftp/ /export/ftp/
    rm -f /var/tmp/rsync_is_running

or something like that. (A slightly more careful variant is sketched below, after the quoted message.)

CAD/SysAdmin Manager wrote:
>
> Hi,
>
> I have set up a cron job to rsync files between two servers every hour.
>
> 30 * * * * rsync -avz --delete rsyncsrv1::ftp/ /export/ftp/
>
> This works fine most of the time, but if I have a very large file that
> needs to be transferred (many tens of MBs), I run into a problem. Since the
> link between my two servers is slow, it can take more than an hour to
> complete the rsync transfer. So at the end of the hour, when the next rsync
> job is started by cron, the big file transfer gets aborted and a new
> transfer is started. Due to this the big file transfer never gets
> completed.
>
> I have tried to increase the time between successive cron jobs, but that
> is only a temporary fix until I run into an even bigger file which causes
> the new settings to fail. Is there a way I can control this behaviour, and
> avoid this looping?
>
> Thanks & regards,
>
> --
> Derric Lewis
> CAD/System Administrator
> Virtual IP Group, Inc.

--
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html
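
One thing to watch out for: if rsync dies partway through and the lock file never gets removed, the cron job will silently stop running until you delete it by hand. A trap can clean the lock up even on failure. A minimal sketch, assuming the wrapper is saved as /usr/local/bin/rsync_ftp.sh (the path and filename are just examples):

    #!/bin/sh
    LOCK=/var/tmp/rsync_is_running

    # Bail out quietly if an earlier run is still in progress
    [ -f "$LOCK" ] && exit 0

    touch "$LOCK"
    # Remove the lock even if rsync is interrupted or fails
    trap 'rm -f "$LOCK"' EXIT INT TERM

    rsync -avz --delete rsyncsrv1::ftp/ /export/ftp/

Then point cron at the wrapper instead of calling rsync directly:

    30 * * * * /usr/local/bin/rsync_ftp.sh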