On 18-02-02 09:55 AM, Jakub Jaszewski wrote:
Hi,
So I have changed merge & split settings to
filestore_merge_threshold = 40
filestore_split_multiple = 8

and restarted all OSDs, host by host.

Let me ask a question: although the pool default.rgw.buckets.data, which
was affected prior to the above change, now has higher write bandwidth,
the writes are very erratic. Writes to the other pools (both EC and
replicated) are erratic too, whereas before the change writes to the
replicated pools were much more stable.
Reads from pools look fine and stable.

Is this the result of the mentioned change? Is the PG directory structure being updated, or ...?
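For reference, those two values together set the point at which a PG
subdirectory splits. A minimal sketch of the commonly cited arithmetic,
filestore_split_multiple * abs(filestore_merge_threshold) * 16 files per
subdirectory (an assumption taken from the usual tuning write-ups, not
verified against the filestore source):

  # Split point implied by the new settings, assuming the commonly
  # cited formula; merging is governed separately by
  # filestore_merge_threshold (a negative value disables merging).
  filestore_merge_threshold = 40
  filestore_split_multiple = 8

  split_point = filestore_split_multiple * abs(filestore_merge_threshold) * 16
  print("a PG subdirectory splits above ~%d files" % split_point)  # 5120
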
The HUGE problem with filestore is that it can't handle large numbers of
small objects well. Sure, if the number only grows slowly (the case with
RBD images) then it's probably not that noticeable, but with 31 million
objects that come and go at a random pace you're going to hit frequent
problems with filestore collections splitting and merging. Pre-Luminous,
splitting happened on all OSDs hosting a particular collection at once;
in Luminous there's "filestore split rand factor", which according to the
docs:
Description:  A random factor added to the split threshold to avoid
              too many filestore splits occurring at once. See
              ``filestore split multiple`` for details.
              This can only be changed for an existing osd offline,
              via ceph-objectstore-tool's apply-layout-settings command.

You may want to try the above as well.
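
If you go that route, the rough shape of the offline step is sketched
below, run per OSD while its daemon is stopped. The
/var/lib/ceph/osd/ceph-<id> data path and the exact ceph-objectstore-tool
flags are assumptions to check against your version before running
anything:

  # Hypothetical helper: apply the new directory layout settings to one
  # stopped OSD for one pool via ceph-objectstore-tool. Flag names and
  # the data path are assumed, not verified against your release.
  import subprocess

  def apply_layout(osd_id, pool):
      data_path = "/var/lib/ceph/osd/ceph-%d" % osd_id  # assumed default path
      subprocess.check_call([
          "ceph-objectstore-tool",
          "--data-path", data_path,
          "--op", "apply-layout-settings",
          "--pool", pool,
      ])

  apply_layout(0, "default.rgw.buckets.data")  # example OSD id and pool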

--
Piotr Dałek
piotr.da...@corp.ovh.com
https://www.ovh.com/us/
