Yes, I've seen this problem quite frequently lately, running MDS v13.2.10.
It seems to depend on client behavior - a lot of xlock contention on some
directory, although it's hard to pin down which client is doing what. The
only remedy was to fail over the MDS.
1k - 4k clients
2M r
Just an anecdotal answer from me...
You want as few files as possible. I wouldn't go beyond a few hundred files in
a dir.
Seeing ~1s for each 1,000 files when I "ls".
But this is in a pretty idle directory. When there were files actively being
written to and read in those dirs, just doing "ls" was noticeably slower.
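One thing worth mentioning: a plain "ls" on most distros also stats every
entry (for sorting and colouring), and under CephFS each stat can block
waiting on capabilities held by the clients writing to the directory. If you
only need the file names, something like this skips most of that work (the
path is just an example):

    # Unsorted listing, no per-entry stat - one readdir pass over the dir
    ls -f /mnt/cephfs/bigdir

    # Equivalent with GNU find: print names only, don't descend
    find /mnt/cephfs/bigdir -maxdepth 1 -printf '%f\n'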
For those who responded to me directly with some helpful tips, thank you!
I thought I'd answer my own question here, since it might be useful to others.
I actually did not find useful examples, but maybe I was not looking for the
right things...
First off, s3cmd kept giving me HTTP 405 errors.
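Several things can cause a 405 from RGW, so this is only a guess at the usual
culprit: s3cmd defaulting to virtual-host-style bucket addressing against an
endpoint that doesn't have rgw_dns_name configured. Forcing path-style
requests in ~/.s3cfg looks like this (the endpoint below is a placeholder):

    # ~/.s3cfg - point s3cmd at the RGW endpoint
    host_base = rgw.example.com:8080
    # The same value with no %(bucket)s prefix forces path-style requests
    host_bucket = rgw.example.com:8080
    use_https = False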
Thanks for the heads up!
Hoping to try the upgrade from Mimic to Nautilus in the next couple of
months... (fingers crossed).