I don't recall any additional tuning that needs to be applied to the new DB volume. And I assume the hardware is pretty much the same...

Do you still have any significant amount of data spilled over for these updated OSDs? If not, I don't have any valid explanation for the phenomenon.
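
Something like the following should show that (osd.N is just a placeholder for one of the converted OSDs):

ceph health detail | grep -i spillover

# run on the node hosting the OSD; a non-zero slow_used_bytes means
# some DB data has spilled over to the main (slow) device
ceph daemon osd.N perf dump | grep -E 'db_used_bytes|slow_used_bytes'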


You might want to try "ceph osd bench" to compare OSDs under pretty much the same load. Any difference observed?
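
E.g. something along these lines, run against one original and one converted OSD (osd.X / osd.Y are placeholders, and the numbers are just an illustration):

# default: writes 1 GiB in 4 MiB blocks
ceph tell osd.X bench
ceph tell osd.Y bench

# smaller (4 KiB) writes stress the DB/WAL path more; 12288000 bytes total
# should keep the run within the default osd bench limits
ceph tell osd.X bench 12288000 4096
ceph tell osd.Y bench 12288000 4096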


On 4/23/2020 8:35 AM, Stefan Priebe - Profihost AG wrote:
Hello,

is there anything else needed besides running:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} bluefs-bdev-new-db --dev-target /dev/vgroup/lvdb-1

I did so some weeks ago, and currently I'm seeing that all OSDs originally deployed with --block-db show 10-20% I/O waits, while all those that got converted using ceph-bluestore-tool show 80-100% I/O waits.

Also, is there some tuning available to use more of the SSD? The SSD (block-db) is only saturated at 0-2%.

Greets,
Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
