Hi Chris,
Thanks for your feedback. We may go this way, as I have the feeling it makes
no sense to have a value specific to an individual drive, independently of
its model. We were also discussing upgrading to 19.2.1, so it's good to know
that your experience has been positive (we are currently running 18.2.2).
Apart from this occasional problem with the IOPS value, we too have had a
very good experience with the mclock balanced profile.
Cheers,
Michel
On 31/03/2025 at 12:01, Chris Palmer wrote:
I use the following method, and haven't ever noticed it being
interfered with:
* set osd_mclock_skip_benchmark to true
* remove any iops values that may have been added
* run fio benchmarks on each model of disk
* run a script that, for each OSD, interrogates the drive model of the
block device using smartctl and sets the iops value from my benchmark
results (a rough sketch of what such a script could look like is below)
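(For illustration only, not the actual script: a rough sketch of what such a
script could look like. The model-to-IOPS table is a placeholder for your own
fio results, and the use of the "hostname"/"devices" fields of 'ceph osd
metadata' and the smartctl output parsing are assumptions to adapt:)

#!/usr/bin/env python3
# Rough sketch only, not the actual script: for each OSD local to this
# host, look up the drive model with smartctl and set the mclock IOPS
# value from a table of benchmark results.
# Assumptions: MODEL_IOPS values are placeholders, the "hostname" and
# "devices" fields of 'ceph osd metadata' are used to find the block
# device, and only the _hdd option is set (SSDs would use
# osd_mclock_max_capacity_iops_ssd).
import json
import socket
import subprocess

MODEL_IOPS = {
    # model string as reported by smartctl -> IOPS measured with fio
    "EXAMPLE HDD MODEL": 450,      # placeholder value
    "EXAMPLE SSD MODEL": 25000,    # placeholder value
}

def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

def drive_model(dev):
    """Return the model string printed by 'smartctl -i /dev/<dev>'."""
    out = subprocess.run(["smartctl", "-i", f"/dev/{dev}"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith(("Device Model:", "Model Number:")):
            return line.split(":", 1)[1].strip()
    return None

host = socket.gethostname()
for osd_id in json.loads(ceph("osd", "ls", "--format", "json")):
    meta = json.loads(ceph("osd", "metadata", str(osd_id)))
    if meta.get("hostname") != host:
        continue                       # only handle OSDs on this host
    dev = meta.get("devices", "").split(",")[0]
    model = drive_model(dev) if dev else None
    iops = MODEL_IOPS.get(model)
    if iops:
        ceph("config", "set", f"osd.{osd_id}",
             "osd_mclock_max_capacity_iops_hdd", str(iops))
        print(f"osd.{osd_id} ({model} on {dev}): set {iops} IOPS")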
Running with the mclock profile set to balanced works really well for
us (on squid 19.2.1, which performs better for us than reef did).
Chris
On 31/03/2025 09:07, Michel Jouvin wrote:
Hi,
Looking at our configuration during a recovery operation (misplaced
objects after adding PGs to a pool) that I found a bit slow, I saw that
~30 OSDs out of 500 had an entry for osd_mclock_max_capacity_iops_hdd in
the configuration DB (as did almost all the SSDs), despite all HDDs and
SSDs being the same models (and the server hardware being the same).

As this value can have a significant impact on recovery time/speed with
mclock, I was wondering whether it is periodically refreshed, in case one
measurement was wrong (perturbed by external factors). The documentation
is not very clear about this and seems to say that the benchmark used to
estimate the IOPS capacity is run only when the OSD is activated (which
may make sense, as that is before there is any other activity on it)?
I'm also wondering why we observe values like 350 on some OSDs when the
hardware is the same on all servers?
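(For reference, the overrides can be listed with something like the snippet
below. This is a rough sketch; the "section"/"name"/"value" JSON field names
returned by 'ceph config dump --format json' are assumed here and should be
checked on your version:)

#!/usr/bin/env python3
# Rough sketch: list which sections of the configuration DB carry an
# osd_mclock_max_capacity_iops_* override.  The "section", "name" and
# "value" field names of 'ceph config dump --format json' are assumed.
import json
import subprocess

dump = subprocess.run(["ceph", "config", "dump", "--format", "json"],
                      check=True, capture_output=True, text=True).stdout
for entry in json.loads(dump):
    if entry["name"].startswith("osd_mclock_max_capacity_iops"):
        print(entry["section"], entry["name"], entry["value"])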
Currently I tend to periodically remove all the
osd_mclock_max_capacity_iops_xxx entries from the DB, but it's probably
not the right approach...
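(The periodic cleanup is just a 'ceph config rm' of each entry found as
above. Again a rough sketch, with the same assumptions about the JSON
fields of 'ceph config dump':)

#!/usr/bin/env python3
# Rough sketch of the periodic cleanup: remove every
# osd_mclock_max_capacity_iops_* override from the configuration DB.
# Same assumption as above about the JSON fields of 'ceph config dump'.
import json
import subprocess

dump = subprocess.run(["ceph", "config", "dump", "--format", "json"],
                      check=True, capture_output=True, text=True).stdout
for entry in json.loads(dump):
    if entry["name"].startswith("osd_mclock_max_capacity_iops"):
        subprocess.run(["ceph", "config", "rm",
                        entry["section"], entry["name"]], check=True)
        print("removed", entry["name"], "from", entry["section"])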
Cheers,
Michel
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io