smartctl can very much read SAS drives, so I would look into that chain first.
Are they behind a RAID controller that is masking the SMART commands?
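
If it helps, here is a rough Python sketch of what I mean by checking that
chain (the device names and the megaraid slot number are just examples, adjust
the -d type for whatever controller you actually have):

#!/usr/bin/env python3
# Rough sketch: probe a drive directly, and also via the RAID controller's
# SMART pass-through (the megaraid device type is only an example).
import subprocess

def smart_health(dev, dev_type=None):
    # smartctl -H prints the overall health assessment; -d selects how to
    # talk to the device ("scsi" for a plain SAS drive, "megaraid,N" for a
    # drive sitting behind an LSI/Broadcom controller, etc.).
    cmd = ["smartctl", "-H", dev]
    if dev_type:
        cmd[1:1] = ["-d", dev_type]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout

# Plain SAS drive the OS can see directly:
print(smart_health("/dev/sdb", "scsi")[1])

# Same idea for a drive masked by a RAID controller (slot 4 is made up):
print(smart_health("/dev/sda", "megaraid,4")[1])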

As for monitoring, we run the smartd service to keep an eye on the drives. More
often than not I notice weird things in Ceph long before SMART throws an actual
error: bouncing drives, oddly high latency on our "Max OSD Apply Latency"
graph. Every few months I throw a SMART long test at the whole cluster and a
few days later go back and rake through the results. Anything that shows a
failure gets immediately removed from Ceph by me, regardless of whether SMART
says it's fine or not. At least 90% of the drives we RMA come back "PASSED" on
overall SMART health but have failures in the read self-test. Never had
pushback from WDC or Seagate on it.
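
In case it is useful, a minimal sketch of that long-test-and-rake routine (the
drive list is a placeholder, and the self-test log wording differs a bit
between ATA and SAS/SCSI drives, so the check is deliberately crude):

#!/usr/bin/env python3
# Minimal sketch: kick off extended self-tests, then come back days later
# and flag any drive whose self-test log mentions a failure.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # placeholder, build per host

def start_long_test(dev):
    # Starts an extended (long) SMART self-test running in the background.
    subprocess.run(["smartctl", "-t", "long", dev], check=False)

def rake_results(dev):
    # Reads the self-test log; ATA and SAS drives word the results a bit
    # differently, so just look for anything that mentions a failure.
    log = subprocess.run(["smartctl", "-l", "selftest", dev],
                         capture_output=True, text=True).stdout
    if "fail" in log.lower():
        print(f"{dev}: self-test failure, pull it from the cluster")
    else:
        print(f"{dev}: looks clean")

for d in DRIVES:
    start_long_test(d)
# ...a few days later, run rake_results(d) for each drive instead.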

-paul

________________________________________
From: Marc <m...@f1-outsourcing.eu>
Sent: Thursday, October 13, 2022 4:44 PM
To: ceph-users
Subject: [ceph-users] monitoring drives

I was wondering what the best practice is for monitoring drives. I am
transitioning from SATA to SAS drives, which expose less smartctl information,
not even power-on hours.

E.g. does Ceph register somewhere when an OSD has been created?

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
