hi igor -

On 5/19/20 3:23 PM, Igor Fedotov wrote:

> slow_used_bytes is non-zero, hence you have a spillover.

you are absolutely right, we do have spillovers on a large number of
osds. ceph tell osd.* compact is running right now.
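
in case it helps anyone else hitting this, here is roughly how we
spotted it (osd.0 is just an example id; the counters live in the
bluefs section of perf dump):

  # recent releases flag spillover as a health warning
  ceph health detail | grep -i spillover

  # per-osd view: non-zero slow_used_bytes means db data
  # has spilled onto the slow device
  ceph daemon osd.0 perf dump | grep -E '"db_used_bytes"|"slow_used_bytes"'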

> Additionally, your DB volume size selection isn't perfect. For optimal
> space usage, RocksDB/BlueFS require DB volume sizes to be aligned with
> the following sequence (a somewhat simplified view):
> 
> 3-6GB, 30-60GB, 300+GB. This has been discussed on this mailing list
> multiple times.
> 
> Using a DB volume size outside these ranges (15 GB in your case) wastes
> space on the one hand and causes early spillovers on the other.
> 
> Hence this is worth adjusting in the long term too.

yes, adding additional nvmes to the cluster is on our to-do list.
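
for future sizing, if i understand the level math correctly (assuming
the rocksdb defaults ceph uses, max_bytes_for_level_base = 256 MB and
max_bytes_for_level_multiplier = 10), those sizes fall out of the
cumulative level sizes:

  L1 ~ 256 MB
  L2 ~ 2.5 GB   (L1 * 10)
  L3 ~ 25 GB    (L2 * 10)
  L4 ~ 250 GB   (L3 * 10)

a level only stays on the fast device if it fits there completely, so
the useful db volume sizes are roughly 256 MB + 2.5 GB ~ 3 GB, then
+ 25 GB ~ 30 GB, then + 250 GB ~ 300 GB. our 15 GB volumes cover
L1+L2 but not L3, which matches the spillover we see.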

thank you,
thoralf.
