I just ran this again and got this error:

leelab/NCBI_Data_old/GenBank/htg
write failed on leelab/NCBI_Data_old/GenBank/htg : Error 0
rsync error: error in file IO (code 11) at receiver.c(243)
Received signal 16. (no core)
rsync error: received SIGUSR1 or SIGINT (code 20) at rsync.c(229)

The command I am running is:

/usr/local/bin/rsync -auv --delete --rsh=/usr/bin/ssh lpgfs104:/share/group/* /share/group/

Two workarounds I am planning to try are sketched below, after my earlier (quoted) message.

> An update on this problem... I get the error below (and the error I
> reported previously) when running rsync 2.5.2 compiled from source. I saw
> different behavior when I used the rsync 2.5.2 binary compiled on Solaris
> 2.5.1 by Dave Dykstra. That binary complained of "Value too large for
> defined data type" whenever it encountered a large file (over 2 GB), but
> did not exit. My impression is that the Solaris 2.5.1 binary does not
> even try to support files over 2 GB, whereas the binary compiled on
> Solaris 7 or 8 *thinks* it can support large files but fails, exiting as
> soon as it encounters one.
>
> So the problem remains: rsync dies when it encounters a large file. One
> person suggested using --exclude, but that matches only against file
> names, not file sizes. (I can't do "--exclude=size>2GB", for example.)
>
> Questions I still have:
>
> - Is rsync supposed to support files >2GB on Solaris 7 and Solaris 8?
>
> - If so, what is causing the errors I am seeing? Is there something I can
>   do at compile time?
>
> - If not, is there a way for rsync to skip large files gracefully so that
>   at least the rsync process completes?
>
> leelab/NCBI_Data_old/GenBank/htg
> write failed on leelab/NCBI_Data_old/GenBank/htg : Error 0
> rsync error: error in file IO (code 11) at receiver.c(243)
>
> Received signal 16. (no core)
> rsync: connection unexpectedly closed (23123514 bytes read so far)
> rsync error: error in rsync protocol data stream (code 12) at io.c(140)
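On the compile-time question above: one thing I plan to try is rebuilding
rsync 2.5.2 with the large-file flags that Solaris itself reports. This is
only a sketch, and it assumes rsync's configure script honors CFLAGS from
the environment (standard autoconf behavior, but I have not verified it
for 2.5.2):

    # Ask Solaris for its large-file compile flags
    # (typically -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64):
    LFS_CFLAGS=`getconf LFS_CFLAGS`
    env CFLAGS="-O $LFS_CFLAGS" ./configure
    make

If a binary built this way still dies at the 2 GB boundary, that would at
least rule out a missing _FILE_OFFSET_BITS definition as the cause.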
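And on skipping large files gracefully: since --exclude matches only
names, I could build the name list myself from file sizes and feed it to
rsync with --exclude-from. A rough, untested sketch (the /tmp/big-files.txt
path is just a placeholder; "-size +2147483647c" means "more than 2^31 - 1
bytes" and assumes find supports the "c" (bytes) suffix):

    # From the receiving host, list the sender's files over 2 GB and
    # strip the source prefix so the patterns are relative to the
    # transfer root:
    /usr/bin/ssh lpgfs104 'find /share/group -type f -size +2147483647c' \
        | sed 's|^/share/group/||' > /tmp/big-files.txt

    # Run the same rsync, excluding those files:
    /usr/local/bin/rsync -auv --delete --rsh=/usr/bin/ssh \
        --exclude-from=/tmp/big-files.txt \
        lpgfs104:/share/group/* /share/group/

If I understand the man page correctly, excluded files are also protected
from --delete on the receiving side unless --delete-excluded is given, so
this should not remove copies of the big files that are already here.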