> On Tue, Jan 19, 2016 at 11:11:22PM +0000, James Harper wrote:
> > (it seems that "reply-all" no longer includes luv-main (from ms
> > outlook at least), so I have to include it manually... what's with
> > that?)
> 
> who knows? outlook is weird.
> 
> for list replies, it's better to just reply to the list without CC-ing
> everyone anyway. i don't care much either way (i have procmail and i'm
> not afraid to use it :), but some people really dislike getting dupes.

As long as I remember to replace the To: with luv-main each time I reply, I 
guess it's workable.

> > > Of course a RAID-1 of SSDs will massively outperform the RAID-5 you
> > > have.
> >
> > If you use SSDs for any sort of intensive storage, do keep an eye on
> > the SMART "media wearout" values, and replace them before the counter
> > hits 0 (or 1).
> 
> the only related value i can find on 'smartctl -a' on my 256GB OCZ
> Vertex is:
> 
>   233 Remaining_Lifetime_Perc 0x0000   067   067   000    Old_age   Offline   -   67

233 is reported as "Media Wearout Indicator" on the drives I just checked on a 
BSD box, so I guess it's the same attribute with a different description for 
whatever reason.
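If you want to watch it without eyeballing the whole table, something like this 
works on `smartctl -A` output (the attribute ID and name vary by vendor, so 
adjust the 233 to whatever your drive reports):

```shell
# Extract the normalised value of SMART attribute 233 (percent life
# remaining on the drives discussed here). Feed it 'smartctl -A /dev/sdX'.
wearout() {
  awk '$1 == 233 { print $4 + 0 }'
}

# Demo against the quoted output line:
printf '%s\n' \
  '233 Remaining_Lifetime_Perc 0x0000   067   067   000    Old_age   Offline   -   67' \
  | wearout
# → 67
```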

> 
> I assume that means I've used up about 1/3rd of its expected life.  Not
> bad, considering i've been running it for 500 days total so far:
> 
>     9 Power_On_Hours          0x0000   100   100   000    Old_age   Offline   -   12005
> 
> 12005 hours is 500 days.  or 1.3 years.
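Quick conversion for anyone checking the arithmetic:

```shell
# Convert SMART Power_On_Hours (raw value) to days and years.
awk 'BEGIN { h = 12005; printf "%.0f days, %.2f years\n", h / 24, h / 24 / 365 }'
# → 500 days, 1.37 years
```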

I just checked the server that burned out its disks pretty quickly last time 
(RAID-1 ZFS cache, so both went around the same time), and it has 60% remaining 
after a year or so. As a cache for a fairly large array, it sees a lot of data. 
I don't have the 198 and 199 values you mentioned, so I can't tell. I do have 
"total LBAs" read/written values, but those are ridiculously low, like a few 
hundred MB, so they're probably 32-bit counters that have wrapped a few times.
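To illustrate the wrap effect (512-byte LBAs and the "true" write volume here 
are assumptions, just for the arithmetic):

```shell
# Sketch: a 32-bit Total_LBAs_Written counter wraps every 2^32 LBAs
# (2 TiB at 512 bytes/LBA), so heavy writers report absurdly small values.
awk 'BEGIN {
  bytes_per_lba = 512                     # assumption; some drives use 4096
  real_bytes    = 10 * 1e12               # hypothetical true writes: 10 TB
  real_lbas     = real_bytes / bytes_per_lba
  reported      = real_lbas % (2 ^ 32)    # what a 32-bit counter shows
  printf "true: %.0f GB, reported: %.1f GB\n",
         real_bytes / 1e9, reported * bytes_per_lba / 1e9
}'
# → true: 10000 GB, reported: 1203.9 GB
```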
 
> and that's for an OCZ Vertex, one of the last decent drives OCZ made
> before they started producing crap and went bust (and subsequently got
> bought by Toshiba, who are now producing decent drives again under the
> OCZ brand name).....so relatively old technology compared to modern
> SSDs.

I've seen too many OCZs fail within months of purchase recently, but I don't 
have enough data points to draw conclusions from. Maybe a bad batch or 
something? They were all purchased within a month or so of each other, late 
last year. The failure mode was that the system just couldn't see the disk, 
except very occasionally, and then not for long enough to actually boot from 
it.

> 
> according to
> http://www.anandtech.com/show/8239/update-on-samsung-850-pro-endurance-vnand-die-size
> 
> the 256GB Samsung 850 Pro has an expected lifespan of 70 years with
> 20GB/day writes or 14 years with 100GB/day writes.
> 
> The 512GB model doubles that and the 1TB quadruple it.
> 
> even if you distrust the published specs and regard them as marketing
> dept. lies, and discount them by 50% or even 75%, you're still looking
> at long lives for modern SSDs....more than long enough to last until the
> next upgrade cycle for your servers.
> 

Yep. I just got a 500GB 850 EVO for my laptop and it doesn't have any of the 
wearout indicators that I can see, but I doubt I'll get anywhere near wearing 
it out before it becomes obsolete.
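FWIW the quoted endurance figures are self-consistent: 70 years at 20GB/day and 
14 years at 100GB/day both work out to the same implied write budget:

```shell
# Implied total-bytes-written budget from the quoted Anandtech figures.
awk 'BEGIN {
  budget_gb = 20 * 365 * 70     # GB/day * days/year * years = 511000 GB
  printf "%.0f TB budget; at 100 GB/day that lasts %.1f years\n",
         budget_gb / 1000, budget_gb / 100 / 365
}'
# → 511 TB budget; at 100 GB/day that lasts 14.0 years
```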

> 
> So, yes, keep an eye on the "Remaining_Lifetime_Percentage" or "Wear
> Level Count" or whatever the SMART attribute is called on your
> particular SSD, but there's no need to worry too much about it unless
> you're writing 1TB/day or so (and even then it should last around 3.5
> years).
> 
> > I'm seeing time-to-replacement of about 12 months on high load
> > system where the SSD's are used for a RAID cache (ZFS, Intel RAID
> > controllers, etc).
> 
> 12 months?  how much are you writing to those things each day?
> 

Lots and lots, obviously :)

These ones were the cache on an Intel RAID controller, so they really got hammered. 
It's also possible that they weren't really the right model of SSD for what we 
used them for.

James

_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
