Hi,

the problem comes from older Ceph releases. In our case, HDD IOPS were benchmarked in the range of 250 to 4000, which clearly makes no sense. At OSD startup, the benchmark is skipped if a value is already present in the Ceph config, so these initial benchmark values were never corrected. To reset them, all osd.N osd_mclock_max_capacity_iops_hdd values should be removed and the OSDs restarted. There is a safety mechanism (osd_mclock_iops_capacity_threshold_hdd) which prevents the values from being overestimated.
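
For reference, a minimal sketch of the reset for a single OSD (osd.0 is just a placeholder id, and the orchestrator restart assumes a cephadm-managed cluster; repeat for every affected OSD):

  # list any per-OSD overrides left behind by the old startup benchmark
  ceph config dump | grep osd_mclock_max_capacity_iops_hdd

  # remove the stored value so the OSD re-runs the benchmark on startup
  ceph config rm osd.0 osd_mclock_max_capacity_iops_hdd

  # restart the OSD (use systemctl restart ceph-osd@0 on non-cephadm clusters)
  ceph orch daemon restart osd.0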

Best,
Andrej

On 19. 09. 24 11:33, Daniel Schreiber wrote:
Hi Denis,

we observed the same behaviour here. The cause was that the number of IOPS discovered at OSD startup was way too high. In our setup, RocksDB is on flash.

When I set osd_mclock_max_capacity_iops_hdd to a value that the HDDs could actually handle, the situation was resolved and clients got their fair share of IO.
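
For example, something along these lines (osd.12 and the value 350 are placeholders; pick a figure that matches what your HDDs really deliver):

  # pin the capacity for a single OSD
  ceph config set osd.12 osd_mclock_max_capacity_iops_hdd 350

  # or apply it to every HDD-class OSD via a device-class mask
  ceph config set osd/class:hdd osd_mclock_max_capacity_iops_hdd 350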

Hope this helps,

Daniel



--
_____________________________________________________________
   prof. dr. Andrej Filipcic,   E-mail: andrej.filip...@ijs.si
   Department of Experimental High Energy Physics - F9
   Jozef Stefan Institute, Jamova 39, P.o.Box 3000
   SI-1001 Ljubljana, Slovenia
   Tel.: +386-1-477-3674    Fax: +386-1-477-3166
-------------------------------------------------------------
