If none of the previously suggested ideas for making that number lower
help, I'd take a look at a different fs or check the ext2 source. At
least I've seen some doc mentioning that ext2 has pretty serious
performance impacts once you have >20,000 files in one dir.
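
(If I understand the ext2 code right, directory entries sit in a plain
unindexed list, so every name lookup - which is what each lstat() does -
scans on average half the directory. With 500,000 files that's on the
order of 500,000 * 250,000 = 1.25e11 entry comparisons for one full
pass, i.e. the scan time grows quadratically with the file count.)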
At work we had a similar problem (~40k directories with unique names,
so fitting them into a hierarchy was a pain) - we just used our concept
of a virtual server and stacked several virtuals on one physical box.
But yes, that's just another way of getting away from lots of files in
one dir.
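
If restructuring the data is an option, the usual trick is to hash each
filename into a fixed set of subdirectories so no single directory grows
past a few thousand entries. A rough sketch (untested, helper names are
mine):

#include <stdio.h>

#define BUCKETS 256

/* Trivial djb2-style string hash; anything with a decent spread works. */
static unsigned bucket_for(const char *name)
{
    unsigned h = 5381;
    while (*name)
        h = h * 33 + (unsigned char)*name++;
    return h % BUCKETS;
}

/* Build "base/xx/name" instead of "base/name", where xx is the bucket
 * in hex.  The 256 bucket directories ("00" .. "ff") get created once
 * up front with mkdir(). */
static void hashed_path(char *out, size_t outlen,
                        const char *base, const char *name)
{
    snprintf(out, outlen, "%s/%02x/%s", base, bucket_for(name), name);
}

With 500,000 files and 256 buckets each directory holds ~2,000 entries,
so every ext2 name lookup scans a list that's 256 times shorter.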
v
On Tue, 24 Apr 2001, Min Yuan wrote:
> Hello,
>
> We have a directory on Redhat 6.2 with 500,000 files. In our code we
> open and read the directory, and for each entry we use lstat() to
> check for some information. The whole scanning takes more than eight
> hours, which is terribly long.
>
> Is there any way we could reduce this length of time? If the answer is
> NO, is there any official documentation about it, and where can we
> find it?
>
> Thank you!
>
> Min Yuan
> VytalNet, Inc.
> (905)844-4453 Ext. 241
>
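
For reference, the loop being described presumably looks something like
this (my reconstruction, not their actual code). Note that each lstat()
re-resolves the name against the directory, which is where the quadratic
cost comes from:

#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

int scan(const char *dir)
{
    char path[4096];
    struct dirent *de;
    struct stat st;
    DIR *d = opendir(dir);

    if (!d)
        return -1;
    while ((de = readdir(d)) != NULL) {
        snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
        /* Each lstat() does a fresh linear name lookup in the
         * directory on ext2 - O(n) per call, O(n^2) overall. */
        if (lstat(path, &st) == 0) {
            /* ... check st.st_mtime, st.st_size, etc. ... */
        }
    }
    closedir(d);
    return 0;
}

One cheap mitigation is to chdir() into the directory first and call
lstat(de->d_name, &st) with a relative path - that skips re-walking the
path prefix on every call, though the in-directory scan stays linear.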
_______________________________________________
Redhat-devel-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/redhat-devel-list