On reading further into that study, it seems the Wikipedia editor has
drawn a distorted conclusion:
In our data sets, the replacement rates of SATA disks are not worse than
the replacement rates of SCSI or FC disks. This may indicate that
disk-independent factors, such as operating conditions, usage and
environmental factors, affect replacement rates more than component
specific factors. However, the only evidence we have of a bad batch of
disks was found in a collection of SATA disks experiencing high media
error rates. We have too little data on bad batches to estimate the
relative frequency of bad batches by type of disk, although there is
plenty of anecdotal evidence that bad batches are not unique to SATA
disks.
-- the USENIX article
Apparently, the distinction made between "consumer" and "enterprise" is
actually between technology classes, i.e. SCSI/Fibre Channel vs. SATA,
rather than between manufacturers' gradings, e.g. Seagate 7200 desktop
series vs. Western Digital RE3/RE4 enterprise drives.
All SATA drives listed have an MTTF (== MTBF?) of > 1.0 million hours,
which is characteristic of enterprise drives, as Erik Quanstrom pointed
out earlier in this thread. The 7200s have an MTBF of around 0.75
million hours, in contrast to the RE4s' > 1.0 million hours.
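For a rough sense of what those figures mean in service, here is a
back-of-the-envelope conversion from MTBF to an annualized failure
rate. This is my own approximation (AFR ~= hours-per-year / MTBF), not
something taken from the study or the datasheets, and it only holds
while the MTBF dwarfs a year of operation:

    # Rough annualized failure rate (AFR) implied by a datasheet MTBF.
    # Illustrative only; field replacement rates are usually higher.
    HOURS_PER_YEAR = 24 * 365

    def afr(mtbf_hours):
        return HOURS_PER_YEAR / mtbf_hours

    print("0.75M h MTBF (7200s): %.2f%%/yr" % (100 * afr(750e3)))  # ~1.17%
    print("1.0M h MTBF (RE4s):   %.2f%%/yr" % (100 * afr(1e6)))    # ~0.88%

By this crude measure the datasheet gap between the 7200s and the RE4s
is less than a factor of two.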
--On Tuesday, September 22, 2009 00:35 +0100 Eris Discordia
<eris.discor...@gmail.com> wrote:
What I haven't found is a decent, no frills, sata/e-sata enclosure for a
home system.
Depending on where you are, where you can purchase from, and how much
you want to pay, you may be able to get yourself ICY DOCK or Chieftec
enclosures that fit the description. ICY DOCK's 5-bay enclosure seemed
a fine choice to me, although somewhat expensive (slightly over 190
USD, I seem to remember).
--------------------------------------------------------------------------------
Related to the subject of drive reliability:
A common misconception is that "server-grade" drives fail less frequently
than consumer-grade drives. Two different, independent studies, by
Carnegie Mellon University and Google, have shown that failure rates are
largely independent of the supposed "grade" of the drive.
-- <http://en.wikipedia.org/wiki/RAID>
The paragraph cites this as its source:
--
<http://searchstorage.techtarget.com/magazineFeature/0,296894,sid5_gci1259075,00.html>
(full text available only to registered users; registration is free,
which raises the question of why they've decided to pester penniless
readers with questions about their "corporation's" number of employees
and IT expenses)
which has derived its content from this study:
<http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html>
I couldn't find the other study, the one supposedly "independent" of
the first.
--On Monday, September 21, 2009 15:07 -0700 Bakul Shah
<bakul+pl...@bitblocks.com> wrote:
On Mon, 21 Sep 2009 16:30:25 EDT erik quanstrom <quans...@quanstro.net>
wrote:
> > i think the lesson here is don't buy cheap drives; if you
> > have enterprise drives at 1e-15 error rate, the fail rate
> > will be 0.8%. of course if you don't have a raid, the fail
> > rate is 100%.
> >
> > if that's not acceptable, then use raid 6.
>
> Hopefully RAID 6 or ZFS's raidz2 works well enough with cheap
> drives!
don't hope. do the calculations. or simulate it.
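As one illustration of the sort of calculation erik means, his 0.8%
figure can be reproduced as the probability of hitting at least one
unrecoverable read error while reading an entire drive back during a
rebuild. The 1 TB capacity and the 1e-14 "consumer" BER below are my
assumptions, not numbers from the thread:

    import math

    # P(at least one unrecoverable read error) when reading bytes_read
    # bytes from a drive with bit error rate ber, assuming independent
    # bit errors; computed via log1p/expm1 to stay numerically stable.
    def p_ure(ber, bytes_read):
        bits = bytes_read * 8
        return -math.expm1(bits * math.log1p(-ber))

    tb = 1e12
    print("BER 1e-15, 1 TB read: %.2f%%" % (100 * p_ure(1e-15, tb)))  # ~0.80%
    print("BER 1e-14, 1 TB read: %.2f%%" % (100 * p_ure(1e-14, tb)))  # ~7.7%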
The "hopefully" part was due to power supplies, fans, mobos.
I can't get hold of their reliability data (not that I have
tried very hard). Ignoring that, raidz2 (+ venti) is good
enough for my use.
this is a pain in the neck as it's a function of ber,
mtbf, rebuild window and number of drives.
i found that not having a hot spare can increase
your chances of a double failure by an order of
magnitude. the birthday paradox never ceases to
amaze.
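A toy version of that calculation, using an exponential-lifetime model
and rebuild/replacement times that are purely my own illustrative
assumptions (not erik's figures): without a hot spare the degraded
window also includes the time needed to notice the failure and swap in
a replacement by hand, and the chance of a second failure grows
roughly in proportion to that window.

    import math

    # P(another drive fails while the array is degraded), exponential
    # lifetime model: 1 - exp(-(survivors * window) / MTBF).
    def p_second_failure(survivors, window_hours, mtbf_hours):
        return 1.0 - math.exp(-survivors * window_hours / mtbf_hours)

    MTBF = 1e6       # hours per drive
    REBUILD = 12     # hours to resync onto a spare (assumed)
    DELAY = 120      # hours to notice, obtain and fit a disk (assumed)
    survivors = 4    # drives left in a 5-disk set after one failure

    hot_spare = p_second_failure(survivors, REBUILD, MTBF)
    no_spare = p_second_failure(survivors, DELAY + REBUILD, MTBF)
    print("with hot spare:    %.4f%%" % (100 * hot_spare))  # ~0.0048%
    print("without hot spare: %.4f%%" % (100 * no_spare))   # ~0.053%, ~11x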
I plan to replace one disk every 6 to 9 months or so. In a
3+2 raidz2 array disks will be swapped out in 2.5 to 3.75
years in the worst case. What I haven't found is a decent,
no frills, sata/e-sata enclosure for a home system.