More important than how far you can push those settings is probably the
ability to actually split your subfolders ahead of time. I've been using
variants of this [1] script I created a while back to take care of that.

To answer your question, we do run with much larger settings than yours:
128/-16 (split multiple / merge threshold). The negative merge threshold
prevents subfolder merging entirely while still letting the absolute value
be used when calculating the split point.
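
For reference, filestore splits a subfolder once it holds more than
filestore_split_multiple * abs(filestore_merge_threshold) * 16 files,
which is where your 22400 (20 * 70 * 16) comes from. Your proposed 80/30
would raise that to 38400 (30 * 80 * 16), and our 128/-16 works out to
32768 (128 * 16 * 16). In ceph.conf our settings look like:

    [osd]
    filestore merge threshold = -16
    filestore split multiple = 128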

Take a look at the script. It stops the OSDs, temporarily sets aggressive
subfolder settings, splits the subfolders offline to match, restores your
original settings, and starts the OSDs again. I do this about once a month
for our use case of steadily growing data.
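
Roughly, the per-OSD steps boil down to the following (a simplified sketch
rather than the gist verbatim; $OSD_ID, $POOL_NAME, and the data path are
placeholders, and the aggressive split/merge values are assumed to already
be set in ceph.conf before running it):

    # keep the cluster from rebalancing while the OSD is down
    ceph osd set noout
    systemctl stop ceph-osd@$OSD_ID

    # split subfolders offline using the filestore split/merge
    # settings currently in ceph.conf
    ceph-objectstore-tool \
        --data-path /var/lib/ceph/osd/ceph-$OSD_ID \
        --op apply-layout-settings \
        --pool $POOL_NAME

    # restore the normal settings in ceph.conf, then bring it back
    systemctl start ceph-osd@$OSD_ID
    ceph osd unset noout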


[1] https://gist.github.com/drakonstein/cb76c7696e65522ab0e699b7ea1ab1c4

On Wed, Aug 22, 2018, 7:37 AM Rafael Lopez <rafael.lo...@monash.edu> wrote:

> Hi all,
>
> For those still using filestore and running clusters with a large number
> of objects, I am seeking some thoughts on increasing the filestore split
> settings. Currently we have:
>
> filestore merge threshold = 70
> filestore split multiple = 20
>
> Has anyone gone higher than this?
>
> We are hitting the threshold of 22400 files per dir on osds for a
> particular pool and experiencing slow reqs, and other osd badness as a
> result. I am wondering if we can simply increase these values to raise the
> files-per-dir threshold and delay splitting without major consequences, e.g.
> to 80/30. This would probably buy us enough time to move to bluestore.
>
> --
> *Rafael Lopez*
> Research Devops Engineer
> Monash University eResearch Centre