Anthony,

Good point. I had this in mind originally, but what I have seen since we moved to mclock a few years ago (we have two other Ceph clusters) is that the way it is currently handled leads to underperforming clusters because of too low a value for the IOPS capacity. And in my experience so far it has never been correlated with drives that start to misbehave: after removing the value set for an OSD, it is not set again, which should happen if it were related to the drive.
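For reference, here is a sketch of how such an override can be inspected and cleared with the standard `ceph config` commands (osd.0 is just an example id, and `osd_mclock_max_capacity_iops_hdd` would be `_ssd` on flash OSDs):

```shell
# Show the mclock IOPS capacity currently in effect for one OSD
# (osd.0 is an example id)
ceph config show osd.0 osd_mclock_max_capacity_iops_hdd

# List any per-OSD capacity overrides stored in the config database
ceph config dump | grep osd_mclock_max_capacity_iops

# Remove the stored value for one OSD; if the measured capacity
# were really drive-specific, a new value should reappear after
# the OSD's next startup benchmark
ceph config rm osd.0 osd_mclock_max_capacity_iops_hdd
```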

What would be great would be the possibility of an "advisory mode" where you could see the value computed by Ceph without it being enforced in mclock IO scheduling...

Best regards.

Michel
Sent from my mobile
On 31 March 2025 at 18:14:49, "Anthony D'Atri" <anthony.da...@gmail.com> wrote:


Thanks for your feedback. We may go this way, as I have the feeling it makes no sense to have a value specific to a drive, independently of its model.

I disagree. I think this could - once stabilized - grow into detection of latent or subclinical drive issues. That, unlike the SMART pass/fail attribute, could actually be useful.

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
