rsync folks,
Henri Shustak wrote:
LBackup always starts a new backup snapshot with an empty directory. I
have been looking at extending the --link-dest option to scan beyond just
the previous successful backup to failed and older backups.
However, there are all kinds of edge cases which are
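(For reference, stock rsync already accepts multiple --link-dest directories
and checks them in order, so a scan over older snapshots can be sketched like
this; the snapshot paths below are placeholders:)
# hard-link unchanged files against the two most recent snapshots
rsync -a \
    --link-dest=/backups/backup.1 \
    --link-dest=/backups/backup.2 \
    /data/ /backups/backup.0/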
rsync doesn't do that.
Why not use a range GET with an HTTP server and a wget client, or just ssh:
ssh remotehost 'dd if=file bs=500 count=1' > file ?
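A fuller sketch of both approaches, assuming the file is also reachable over
HTTP (host and path names here are placeholders):
# first 500 bytes over ssh: dd reads one 500-byte block from the remote file
ssh remotehost 'dd if=/path/to/myfile bs=500 count=1 2>/dev/null' > myfile.part
# the same thing as an HTTP range GET, if a web server exports the file
wget --header='Range: bytes=0-499' -O myfile.part http://remotehost/path/to/myfile
# or: curl -r 0-499 -o myfile.part http://remotehost/path/to/myfile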
/kc
On Wed, Apr 15, 2015 at 11:02:36 AM, Hongyi Zhao said:
>Hi all,
>
>Suppose I have a file on the remote rsync server:
>
>rsync://
On Wed, 15 Apr 2015 14:39:10 +0200, Heiko Schlittermann wrote:
> Because you didn't tell.
>
> … 2>
>
> Note the missing space between the file descriptor and the redirection
> operator.
Then why will the following one work, i.e., with a space between the
redirection operator and the lo
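(To spell out the parsing difference in a POSIX shell, reusing the command
from the thread:)
# no space: "2>aaa" redirects file descriptor 2 (stderr) to the file aaa
rsync -c ftp.cn.debian.org::debian/ 2>aaa
# space before ">": "2" is handed to rsync as an argument, and ">aaa"
# redirects stdout (fd 1), so the error messages still reach the terminal
rsync -c ftp.cn.debian.org::debian/ 2 >aaa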
Hi all,
See the following commands:
werner@debian:~$ rsync -c ftp.cn.debian.org::debian/ 2 >aaa
rsync: The server is configured to refuse --checksum (-c)
rsync error: requested action not supported (code 4) at clientserver.c
(849) [sender=3.0.9]
rsync: read error: Connection reset by peer (104)
r
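(That refusal comes from the daemon side; a minimal rsyncd.conf sketch of a
module configured this way, with placeholder paths:)
# /etc/rsyncd.conf -- "refuse options" makes the daemon reject client flags
[debian]
    path = /srv/mirror/debian      # placeholder path
    read only = yes
    refuse options = checksum      # rejects -c / --checksum with error code 4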
Hi all,
Suppose I have a file on a remote rsync server:
rsync://path/to/myfile
I want to retrieve only part of that file, based on a byte range, to my
local host; say 0-499, meaning only the first 500 bytes of the file are
transferred.
Is this possible with the rsync client?
Regards
--
.: Hong
On Wed, 15 Apr 2015 02:48:13 -0400, Kevin Korb wrote:
> Technically no, practically kinda...
> Deleting files only works when you are syncing a directory. If you
> specify every file to copy, then you aren't actually syncing anything,
> and there is nothing for --delete to do. So, --delete will on
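(The distinction in command form, with placeholder paths:)
# syncing a directory: rsync compares dest/ against src/, so
# --delete can remove files that no longer exist in src/
rsync -a --delete src/ dest/
# naming individual files: nothing is compared at the directory level,
# so --delete has nothing to act on
rsync -a --delete src/a.txt src/b.txt dest/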
80 million calls isn't 'that bad' since it completes in 5 hours, yes? I suppose
I don't mind. I should throw more RAM in the box and figure out how to tune
metadata caching so it's preferred over file data. Then it'd be quicker.
Either way, it's working for me now, and in fact, if the backup server
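(On Linux, one knob for that is vm.vfs_cache_pressure; lowering it biases the
kernel toward keeping dentry/inode caches over file data. A sketch, assuming
a Linux backup server:)
# lower values make the kernel retain metadata (dentry/inode) caches
# in preference to reclaiming them for file data; the default is 100
sysctl vm.vfs_cache_pressure=50
# make it persistent across reboots
echo 'vm.vfs_cache_pressure = 50' >> /etc/sysctl.conf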