I've been having good luck with Samsung "green" 1.5TB drives. I've had one 
DOA, but I currently have ten of them, so that's not bad; in purchases that 
size I've had one bad drive from just about every manufacturer. I've avoided 
WD for RAID because their desktop drives' error-recovery behavior can get 
them kicked out of arrays, though I don't know whether that's still an issue. 
And given Seagate's recent record, I didn't feel confident in their larger 
drives. I was concerned that the 5400RPM spindle speed would be a problem, 
but I can read over 100MB/s from the array, and 95% of my use is over a 
gigabit LAN, so they are more than fast enough for my needs. 

I just set up a new array with them: six drives in raidz2. With drives this 
large, resilver times after a replacement are long enough that I decided the 
extra parity was worth the cost, even for a home server. I need two more 
drives; then I'll migrate the other four from the older array into a second 
6-drive raidz2 and add it to the pool. 
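For reference, that layout can be set up roughly like this (the pool name 
"tank" and the device names are placeholders; substitute your own):

```shell
# Create the pool as a single 6-drive raidz2 vdev
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

# Later, once the other six drives are free, grow the pool by
# adding a second raidz2 vdev; ZFS stripes writes across both vdevs
zpool add tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Confirm the layout
zpool status tank
```

Note that vdevs can be added to a pool but not removed, so it's worth 
double-checking the device list before running zpool add.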

I have decided to treat HDDs as completely untrustworthy, so when I get new 
drives I test them by creating a temporary pool in a mirror config, filling 
the drives with data copied from the primary array, and then running a scrub. 
If the scrub comes back clean and there's nothing new in dmesg, I wait a week 
or so and scrub again. I found a bad SATA hot-swap backplane and a bad drive 
this way. There are probably faster ways, but this works for me.
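The burn-in procedure above, sketched as commands (the pool name "burnin", 
the device names, and the paths are placeholders for your own setup):

```shell
# Build a throwaway mirror pool from the two new drives
zpool create burnin mirror c2t0d0 c2t1d0

# Fill it with real data copied from the primary pool
rsync -a /tank/data/ /burnin/data/

# Scrub and check for read/write/checksum errors
zpool scrub burnin
zpool status -v burnin

# Also check for controller or transport errors
dmesg | tail

# Wait a week or so, scrub once more, then tear the pool
# down before putting the drives into production
zpool scrub burnin
zpool destroy burnin
```

The mirror config means every block exists on both drives, so a scrub 
exercises reads on each disk and flags any checksum mismatches per device.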
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss