FILE NAME WITH SPACES -- these are 4 different space-separated parameters fed
from the shell to the program
"FILE NAME WITH SPACES" -- this is one parameter fed from the shell to the
program
FOLDER-NAME/ -- this is one parameter and means the contents of the
directory FOLDER-NAME/
FOLDER-NAME -- this is one parameter and means the directory FOLDER-NAME
itself, so the directory is recreated at the destination
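To make the trailing-slash distinction concrete, here is a small
illustrative pair of commands (FOLDER-NAME and dest/ are placeholder names):

  # trailing slash: copy the contents of FOLDER-NAME into dest/
  rsync -av FOLDER-NAME/ dest/
  # no trailing slash: create dest/FOLDER-NAME and copy the contents into it
  rsync -av FOLDER-NAME dest/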
The rsync man page says that
"If you need to transfer a filename that contains whitespace,
you'll need to either escape the whitespace in a way that the remote
shell will understand, or use wildcards in place of the spaces."
I am regularly doing backups with rsync and notice that file names w
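As a hedged illustration of the two approaches the man page mentions
(the host, path and file name here are made up):

  # escape the whitespace so the remote shell still sees a single argument
  rsync -av 'host:/some/path/file\ with\ spaces.txt' /local/dir/
  # or use wildcards in place of the spaces
  rsync -av 'host:/some/path/file*with*spaces.txt' /local/dir/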
Use cp -au, and repeat it if it crashes or if you have to stop (it
will skip files already copied); then run rsync to update the directories'
modification dates and catch anything that changed during the copy.
Denis
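A minimal sketch of that two-step approach, assuming /mnt/src and /mnt/dst
are the two NFS mount points (both paths are placeholders):

  # first pass: archive copy that skips files already present and up to date
  cp -au /mnt/src/. /mnt/dst/
  # final pass: let rsync line up timestamps/metadata and pick up late changes
  # (add -H to preserve hard links and --delete to drop removed files if needed)
  rsync -a /mnt/src/ /mnt/dst/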
On Aug 11, 2009 5:37am, Ming Gao wrote:
It's almost the same. I once tested this on about 7 GB of data: I rsync'ed it
to another directory, and running the same command line again took less than
a minute.
I think your problem is with reading the correct size of folders that
contain hard links. To check this with the du command, try:
du -sh /home/backup/*
As far as I know, du will only report the "real" disk usage when both
hard links fall within the scope of the same du invocation. Otherwise,
running du twice, once on each directory, counts the shared files twice.
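A small demonstration of that behaviour (the file and directory names are
made up):

  mkdir -p a b
  dd if=/dev/zero of=a/big bs=1024k count=100   # 100 MB file
  ln a/big b/big                                # hard link the same file into b/
  du -sh a b    # one invocation: the shared data is counted only once
  du -sh a      # separate invocations: each one
  du -sh b      # reports the full 100 MB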
Maybe I should take up an afternoon coffee habit. I did some reading
on du, and found out that it only disregards a file with multiple hard
links if it has seen it before. Running du -hcd1 on /home/backup
produced the expected results.
[r...@arthur /home/backup]# du -hcd1 .
2.0K    ./hom
And now df is reporting proper usage of 5.4 GiB (which is what I
expected). Maybe I just wasn't being patient enough and there's some
weird df lag or something. Anyway, it seems to be working OK, but if
anyone has any pointers on doing this more efficiently, I'd be more
than happy to hear them.
Hourly I have an rsync job back up /home to /home/backup. I have 24
directories (one for each hour):
home.0
...
home.23
Here is the script I am running via cron:
#!/usr/local/bin/bash
# current hour and previous hour; %k pads single digits with a space, so strip it
dest=`date +%k | sed 's/ //g'`
linkdir=`date -v-1H +%k | sed 's/ //g'`
# clear the immutable flag (FreeBSD) so the old snapshot can be removed
chflags -R noschg /home/backup
rm -rf
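For reference, a minimal sketch of one common way such a rotation is
finished, using rsync's --link-dest so unchanged files become hard links
against the previous hour's snapshot (this is illustrative, not the poster's
actual script; the home.N layout follows the one above):

  #!/usr/local/bin/bash
  dest=`date +%k | sed 's/ //g'`
  linkdir=`date -v-1H +%k | sed 's/ //g'`
  # drop last cycle's snapshot for this hour slot, then rebuild it
  chflags -R noschg /home/backup/home.$dest
  rm -rf /home/backup/home.$dest
  # exclude the backup tree itself since it lives under /home
  rsync -a --delete --exclude=/backup/ \
        --link-dest=/home/backup/home.$linkdir \
        /home/ /home/backup/home.$dest/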
Ming Gao wrote:
> The first question is whether there is any risk with such a big number of
> files. Should I divide them into groups and rsync them in parallel or
> serially? If so, how many groups would be best?
For that amount of data, you ought to use something simple and recursive,
like cp -rp.
2009/8/11 Brett Worth :
> Ming Gao wrote:
>> And is there anything else I could do to reduce the risk?
>
> Once you get all the files copied over by whatever means, then a final rsync
> would be good to get all the metadata lined up. Based on your file count I'd
> strongly recommend you break up th
Ming Gao wrote:
> I need to migrate 40T of data and 180M files from one storage device to
> another; both source and destination will be NFS mounts on a local SUSE
> Linux box.
Is there any way you could get local access to the write end of the transfer
so that you don't have to do this all
2009/8/11 Ming Gao :
> It's almost the same. I once tested this on about 7 GB of data: I rsync'ed
> it to another directory, and running the same command line again took less
> than a minute.
Did you test it on the two NFS shares or something else?
Also, if you have enough memory, part of the data mig
It's almost the same. I once tested this on about 7 GB of data: I rsync'ed it
to another directory, and running the same command line again took less than
a minute.
The reason I use rsync is that the data will change while I run rsync the
first time. Then I need to run rsync a second time
On Tue, 2009-08-11 10:58:15 +0200, Michal Suchanek wrote:
> 2009/8/11 Jan-Benedict Glaw :
> > On Tue, 2009-08-11 16:14:33 +0800, Ming Gao wrote:
> > > I need to migrate 40T of data and 180M files from one storage device to
> > > another; both source and destination will be NFS and mounted
2009/8/11 Jan-Benedict Glaw :
> On Tue, 2009-08-11 16:14:33 +0800, Ming Gao wrote:
>> I need to migrate 40T of data and 180M files from one storage device to
>> another; both source and destination will be NFS mounts on a local SUSE
>> Linux box.
>>
>> The first question is whether there is
On Tue, 2009-08-11 16:14:33 +0800, Ming Gao wrote:
> I need to migrate 40T of data and 180M files from one storage device to
> another; both source and destination will be NFS mounts on a local SUSE
> Linux box.
>
> The first question is whether there is any risk with such a big number of
>
Hi.
Tue, 11 Aug 2009 16:14:33 +0800, gaomingcn wrote:
> The second question is about memory.
> How much memory should I install in the Linux box? The rsync FAQ
> (http://rsync.samba.org/FAQ.html#4) says one file will use 100 bytes
> to store relevant information, so 180M files will use about
> 18GB.
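For reference, the arithmetic behind that estimate, using the FAQ's rough
100-bytes-per-file figure:

  180,000,000 files x 100 bytes/file = 18,000,000,000 bytes, i.e. roughly
  18 GB of RAM just for rsync's in-memory file list (rsync 3.x with
  incremental recursion should need considerably less, since it does not
  hold the whole list at once).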
Hi,
I need to migrate 40T of data and 180M files from one storage device to
another; both source and destination will be NFS mounts on a local SUSE
Linux box.
The first question is whether there is any risk with such a big number of
files. Should I divide them into groups and rsync them in p
I have found a workaround for this problem: use short module
names, under 15 bytes.
I placed the following sequence of module declarations in
/etc/rsyncd.conf
[123456789abcdef0]
path = /dev/null
[123456789abcdef]
path = /dev/null
[123456789abcde]
path = /dev/null
[123456789abcd]
path = /dev/null
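As a quick way to check which of those modules the daemon actually serves
(hostname is a placeholder), one can ask it for its module list:

  # ask the rsync daemon to list the modules it exposes
  rsync rsync://hostname/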