On Sat, October 3, 2009 20:50, Jeff Haferman wrote:
> And why does an rsync take so much
> longer on these directories when directories that contain hundreds of
> gigabytes transfer much faster?
Rsync has to exchange information about each file between client
and server, as part of the protocol.
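A cheap way to see that per-file cost in isolation is a dry run with statistics; the paths below are placeholders, and this assumes a reasonably recent rsync that supports `--stats`:

```shell
# --dry-run builds and exchanges the file list without copying any
# data, so its wall time is roughly the per-file protocol overhead
# alone.  /path/to/src and /path/to/dst are placeholders.
time rsync -a --dry-run --stats /path/to/src/ /path/to/dst/ \
  | grep -E 'Number of files|File list'
```

If the dry run alone takes minutes, the bottleneck is walking and stat()ing the tree, not moving bytes.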
That section doesn't actually prescribe one size, so what size did you
choose and how exactly did you set it?
You haven't told us, and nobody has asked you, about the basic
system config. For starters: what CPU, memory, and storage? What other
stuff is this machine doing?
Also we do rea
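For reference, a minimal snapshot of those basics on a Linux box might look like the following (a Solaris/ZFS host would use psrinfo, prtconf, and zpool status instead):

```shell
# CPU model, total memory, the storage behind the current directory,
# and the load average (a hint at what else the machine is doing).
grep -m1 'model name' /proc/cpuinfo
grep MemTotal /proc/meminfo
df -h .
uptime
```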
Rob Logan wrote:
>> Directory "1" takes between 5-10 minutes for the same command to
>> return (it has about 50,000 files).
> That said, directories with 50K files list quite quickly here.
a directory with 52,705 files lists in half a second here
36 % time \ls -1 > /dev/null
0.41u 0.07s 0:00.50 96.0%
perh
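That half-second figure is easy to reproduce; a sketch using a scratch directory and 50,000 empty files (the directory name is arbitrary):

```shell
# Create 50,000 empty files and time a bare listing.
# \ls bypasses any ls alias (e.g. one adding --color or -F, which
# would force a stat of every entry and inflate the timing).
dir=$(mktemp -d)
cd "$dir"
seq 1 50000 | xargs touch   # xargs batches the names under ARG_MAX
time \ls -1 > /dev/null
```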
On Sat, 3 Oct 2009, Jeff Haferman wrote:
When I go into directory "0", it takes about a minute for an "ls -1 |
wc" to return (it has about 12,000 files). Directory "1" takes
between 5-10 minutes for the same command to return (it has about 50,000
files).
This seems kind of slow. In the
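One thing worth ruling out before blaming the filesystem: ls sorts its output by default, and common aliases (--color, -F) stat every entry. A sketch, with /big/dir standing in for the slow directory:

```shell
# Sorted listing (what a plain ls does):
time ls -1 /big/dir > /dev/null
# Unsorted, no sort pass; note -f implies -a, so dotfiles are
# included and the counts will differ slightly:
time ls -f /big/dir > /dev/null
# Counting entries without ls at all (the count includes the
# directory itself and any dotfiles):
find /big/dir -maxdepth 1 | wc -l
```

If `ls -f` is fast while `ls -1` is slow, the time is going into userland sorting and per-entry stats, not the filesystem.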
On Sat, Oct 3, 2009 at 6:50 PM, Jeff Haferman wrote:
>
> A user has 5 directories, each has tens of thousands of files, the
> largest directory has over a million files. The files themselves are
> not very large, here is an "ls -lh" on the directories:
> [these are all ZFS-based]
>
> [r...@cluster]# ls -lh
> total 341M
> drwxr-xr-x+ 2 someone cluster 13K Sep 14 19
+--
| On 2009-10-03 18:50:58, Jeff Haferman wrote:
|
| I did an rsync of this directory structure to another filesystem
| [lustre-based, FWIW] and it took about 24 hours to complete. We have
| done rsyncs on other directo
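For a first-time copy like this, where there is no existing destination tree to delta against, rsync's per-file exchange buys nothing; a plain tar pipe (paths are placeholders) avoids it entirely:

```shell
# Stream the whole tree in one pass, with no per-file round trips.
# /path/to/src and /path/to/dst are placeholders; for a remote copy,
# put ssh in the middle:  tar cf - . | ssh host 'cd /dst && tar xf -'
( cd /path/to/src && tar cf - . ) | ( cd /path/to/dst && tar xf - )
```

Later incremental syncs can then go back to rsync, which will only have the changed files to transfer.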