--- On Tue, 14/4/09, Frank Bonnet <[email protected]> wrote:

> From: Frank Bonnet <[email protected]>
> Subject: Re: massive copy
> To: [email protected]
> Cc: "Debian User List" <[email protected]>
> Date: Tuesday, 14 April, 2009, 10:02 AM
> Glyn Astill wrote:
> > --- On Tue, 14/4/09, Frank Bonnet
> <[email protected]> wrote:
> > 
> >> From: Frank Bonnet <[email protected]>
> >> Subject: massive copy
> >> To: "Debian User List"
> <[email protected]>
> >> Date: Tuesday, 14 April, 2009, 9:14 AM
> >> Hello
> >>
> >> I have to copy around 250 Gb from a server to a
> Netapp NFS
> >> server
> >> and I wonder what would be faster ?
> >>
> >> first solution
> >>
> >> cp -pr * /mnt/nfs/dir/
> >>
> >> second solution ( 26 cp processes running in parallel )
> >>
> >>
> >> for i in a b c d e f g h i j k l m n o p q r s t u v w x y z
> >> do
> >> cp -pr $i* /mnt/nfs/dir/ &
> >> done
> >>
> > 
> > Perhaps you could try some sort of tar pipe if
> > you've got a nice CPU?
> > 
> > tar cf - * | (cd /mnt/nfs/dir/ ; tar xf - )
> > 
> 
> Yes, the machine has nice CPUs and a lot of RAM.
> Do you think it will be faster using tar rather than cp?
> 

I'd like to think it would help. If the files are quite compressible, perhaps
you could add a 'z' in there too...
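For reference, here's a minimal sketch of that tar pipe with the 'z' flag added. The src/ and dst/ directories below are placeholders standing in for your source directory and the NFS mount (/mnt/nfs/dir/); note the gzip only compresses the stream in flight, so it helps mainly if the files compress well and the CPU isn't the bottleneck:

```shell
#!/bin/sh
set -e

# Placeholder directories standing in for the real source dir and NFS mount.
mkdir -p src dst
echo "hello" > src/a.txt
echo "world" > src/b.txt

# The tar pipe: pack on one side, unpack on the other.
# 'z' gzips the stream between the two tars; 'p' preserves permissions
# on extract, matching what cp -p does.
(cd src && tar czf - .) | (cd dst && tar xzpf -)

# Sanity check that the copy is identical.
diff -r src dst
```

With the real paths that would be something like `tar czf - * | (cd /mnt/nfs/dir/ && tar xzpf -)`, run from the source directory.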
