Just an anecdotal answer from me...

You want as few files as possible; I wouldn't go beyond a few hundred files in 
a directory.
I'm seeing roughly 1 s per 1,000 files when I run "ls".

But that is in a fairly idle directory. When files were actively being written 
to and read from those directories, just running "ls" on them was extremely 
slow (on the order of minutes).

I have a single-MDS setup with the CephFS metadata pool on SSDs: MDS cache at 
20 GB, 6 million inodes, approaching 10k requests/s.

$ for i in `ls --color=none | head -50 | tail -10`; do echo; echo -n "file count in dir: "; time ls $i | wc -l; done

file count in dir: 4354

real    0m4.129s
user    0m0.029s
sys     0m0.179s

file count in dir: 3064

real    0m2.847s
user    0m0.027s
sys     0m0.127s

file count in dir: 1770

real    0m1.658s
user    0m0.026s
sys     0m0.075s
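For what it's worth, a plain "ls" sorts its output (and, with color enabled, stats every entry), which adds work on top of the readdir itself. A minimal sketch of a cheaper count using "ls -f" to skip sorting — the directory name and file count here are made up for illustration:

```shell
#!/bin/sh
# Build a throwaway directory with a known number of files so the
# count can be checked (500 is an arbitrary example size).
dir=$(mktemp -d)
for n in $(seq 1 500); do : > "$dir/file$n"; done

# 'ls -f' disables sorting and implies -a, so '.' and '..' are listed
# too; subtract 2 to get the actual entry count.
count=$(( $(ls -f "$dir" | wc -l) - 2 ))
echo "file count in dir: $count"

rm -rf "$dir"
```

This only avoids client-side sorting/stat overhead; it won't help with the MDS-side contention described above when the directory is being actively written to.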
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io