On Jan 24, 2010, at 8:26 PM, Frank Middleton wrote:

> What an entertaining discussion! Hope the following adds to the
> entertainment value :).
>
> Any comments on this Dec. 2005 study on disk failure and error rates?
> http://research.microsoft.com/apps/pubs/default.aspx?id=64599
>
> Seagate says their 1.5TB consumer grade drives are good for 24*365
> operation. http://labs.google.com/papers/disk_failures.pdf implies yes.
> This paper is quite interesting. Power cycles - bad. High temps - not
> so bad...
>
> The specs say an annualized failure rate of 0.34% and a mean time between
> failures of 750,000 hours. 8760/750,000 = 1.17%. Hmmm. So around one
> disk in a hundred will fail each year? What does that mean for a system
> with a simple mirror if one disk in 20 will fail in 5 years?

Unfortunately, this is marketeering, and you need to look at the footnotes
to get the real story:
http://blogs.sun.com/relling/entry/awesome_disk_afr_or_is

> What is the MTTDL of a mirrored pair of consumer grade 1.5TB drives,
> or the probability of a single data loss (say) during a 5 year period,
> perhaps compared to the probability (say) of winning the lottery :-),
> or being hit by a 20 ton meteor, assuming at least one device failure?

MTTDL using model[2] for a Seagate ST31500341AS and 7x24x365 operation:

    MTBF = 700,000 hours
    UER  = 1 error per 1e14 bits read
    size = 1.5 TB max = 2,930,277,168 512-byte sectors

    Precon_fail = 2,930,277,168 sectors * 512 bytes/sector * 8 bits/byte
                  * 1e-14 errors/bit
                = 0.12

    MTTDL = MTBF / (2 * Precon_fail)
          = 700,000 / (2 * 0.12) = 2,916,666 hours

This is not that great, really. To bring it back to something a bit more
understandable, it is an annualized data loss rate of 0.3%.

reference for the above:
http://www.seagate.com/staticfiles/support/disc/manuals/desktop/Barracuda%207200.11/100507013e.pdf
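If you want to check these numbers yourself, here is a minimal sketch in
Python. The MTBF, UER, and capacity figures are the ones quoted above; the
exponential (constant failure rate) model used to turn an MTBF into an
annual failure probability is the usual assumption, not something from the
datasheet.

import math

HOURS_PER_YEAR = 8760

# AFR implied by the datasheet MTBF, assuming a constant failure rate
# (exponential model): P(fail within t hours) = 1 - exp(-t / MTBF).
mtbf_spec = 750_000.0  # hours, the datasheet figure Frank quotes
afr = 1 - math.exp(-HOURS_PER_YEAR / mtbf_spec)
print(f"AFR implied by MTBF: {afr:.2%}")              # ~1.16%

p_5yr = 1 - math.exp(-5 * HOURS_PER_YEAR / mtbf_spec)
print(f"P(one drive fails in 5 years): {p_5yr:.2%}")  # ~5.7%, about 1 in 18

# MTTDL using model[2] for a 2-way mirror of ST31500341AS drives.
mtbf = 700_000.0            # hours
sectors = 2_930_277_168     # 1.5 TB in 512-byte sectors
bits = sectors * 512 * 8    # ~1.2e13 bits in a full-disk read
uer = 1e-14                 # unrecoverable errors per bit read
precon_fail = bits * uer    # expected unrecoverable errors while resilvering
print(f"Precon_fail: {precon_fail:.2f}")              # ~0.12

mttdl = mtbf / (2 * precon_fail)                      # 2 drives in the mirror
print(f"MTTDL: {mttdl:,.0f} hours")                   # ~2.9 million hours
print(f"Annualized data loss: {HOURS_PER_YEAR / mttdl:.2%}")  # ~0.30%

The implied AFR of about 1.2% is Frank's number, not the quoted 0.34%; that
discrepancy is exactly what the blog entry above digs into.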
> The OP originally asked "Best 1.5TB drives for consumer RAID?". Despite
> the entertainment value of the comments, it isn't clear that this has been
> answered. I suspect the OP was expecting a discussion of WD vs. Seagate
> vs. Hitachi, etc., but the discussion didn't go that way, perhaps because
> they are equally good (or bad) based on the TLER discussion? Has anyone
> actually experienced an extended timeout from one of these drives (from
> any manufacturer) causing a problem?

Extended timeouts lead to manual intervention, not a change in the
probability of data loss. In other words, they affect the MTTR, not the
reliability. For 7x24x365 deployments, MTTR is a concern because it impacts
availability. For home use, perhaps not so much.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss