I have direct experience with SATA SSDs used for RBD with an active public 
cloud (QEMU/KVM) workload.  Drives rated ~ 1 DWPD after 3+ years of service 
consistently reported <10% of lifetime used.
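As a rough back-of-envelope of what that implies (the capacity and 5-year warranty window below are assumptions for illustration, not the actual fleet's specs):

```python
# Back-of-envelope: what does <10% lifetime used after 3 years imply?
# Assumed figures: 1 DWPD rating, 1.92 TB drive, 5-year warranty window.
capacity_tb = 1.92
dwpd = 1.0
warranty_years = 5

rated_tbw = dwpd * capacity_tb * 365 * warranty_years   # total rated writes, TB
used_fraction = 0.10                                    # <=10% reported via SMART
service_years = 3

# Average sustained write rate implied by the SMART counter:
avg_tb_per_day = used_fraction * rated_tbw / (service_years * 365)
effective_dwpd = avg_tb_per_day / capacity_tb
print(f"rated TBW: {rated_tbw:.0f} TB")
print(f"average writes: {avg_tb_per_day:.2f} TB/day (~{effective_dwpd:.2f} DWPD)")
```

i.e. the workload is burning rated endurance at roughly a sixth of the drive's rated rate, which is why the counters barely move.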

SMART lifetime counters are often (always?) based on rated PE cycles, which I 
would expect to be more or less linear over the drive’s lifetime.

There’s a lot of FUD around endurance.  One sees very few if any actual cases 
of production drives running out, especially when they are legit “enterprise” 
class models.  Some drives report lifetime USED; some report lifetime 
REMAINING.  Sometimes the smartmontools drivedb.h entries get the polarity 
wrong.  Trending lifetime and reallocated blocks used vs. remaining over time 
can be very illuminating, especially when certain models exhibit (ahem) 
firmware deficiencies.
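A sketch of normalizing that polarity when trending (the attribute names below are illustrative examples; the real names vary by vendor and by drivedb entry, so check your model's output):

```python
# Sketch: normalize SSD wear reporting to "percent of lifetime used",
# since some drives report lifetime USED and others lifetime REMAINING.
# These attribute-name sets are examples only; populate them from what
# `smartctl -A` actually shows for your models.
USED_ATTRS = {"Percent_Lifetime_Used"}                        # counts up
REMAINING_ATTRS = {"Percent_Lifetime_Remain", "Wear_Leveling_Count"}  # counts down

def lifetime_used(attr_name: str, normalized_value: int) -> int:
    """Return percent of rated PE cycles consumed (0-100)."""
    if attr_name in USED_ATTRS:
        return normalized_value            # already "used"
    if attr_name in REMAINING_ATTRS:
        return 100 - normalized_value      # invert "remaining"
    raise ValueError(f"unknown wear attribute: {attr_name}")

print(lifetime_used("Wear_Leveling_Count", 91))   # a down-counting drive at 91
print(lifetime_used("Percent_Lifetime_Used", 9))  # an up-counting drive at 9
```

Feeding the normalized value into whatever you graph with makes mixed fleets (and polarity bugs) stand out immediately.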

It is common to depreciate server gear over 5 years (at least in the US).  Mind 
you, depreciation is one thing and CapEx approval for a refresh is quite 
another, but I would expect chassis to suffer more failures and 
replacement-parts availability issues over time than the drives themselves.

ymmocv

— aad

> 
> Yes, we were a little bit concerned about the write endurance of those 
> drives. There are SSDs with much higher DWPD endurance, but we expected that 
> we would not need the higher endurance, so we decided not to pay the extra 
> price.
> 
> Turns out to have been a good guess. (educated guess, but still)
> 
> MJ
> 
> On 22-11-2021 at 16:25, Luke Hall wrote:
>>> 
>>> They seem to work quite nicely, and their wearout (after one year) is still 
>>> at 1% for our use.
>> Thanks, that's really useful to know.
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
