Interesting discussion. I know the bias here is generally toward enterprise 
users, so I was wondering whether the same recommendations hold for home 
users, who are generally more price-sensitive. I'm currently running 
OpenSolaris on a system with 12 drives, split into three four-drive raidz1 
vdevs. That made some sense at the time, since I could upgrade four disks at 
a time as new sizes came out. However, with 8 of the disks now being 1.5TB, 
I'm getting concerned about this strategy. While the important data is backed 
up, losing the server data would be very irritating. 

My next thought was to get more drives and run a single raidz3 vdev of 
12x1.5TB. That would give me more space than I need for quite a while (which 
matters, since I can't grow the vdev by adding just a few drives later) and 
triple parity for protection. I'd need a few extra drives to hold the data 
while I rebuild the main array, and afterward those would become cold spares 
that I'd use for backing up critical data from the server, so they would see 
use and scrubs rather than just sitting on a shelf. Access is over a gigE 
network, so I don't need more performance than that. I have read that the 
random-I/O performance of a raidz vdev is approximately that of a single 
device in the vdev, and in this case that is more than fast enough. I'm 
curious what the experts here think of this new plan. I'm pretty sure I know 
what you all think of the old one. :) 
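For what it's worth, here's my back-of-the-envelope capacity math, assuming 
all 12 drives end up being 1.5TB and ignoring ZFS metadata overhead:

```python
DRIVE_TB = 1.5  # assumption: all 12 drives are 1.5 TB

# Current layout: three 4-drive raidz1 vdevs, one parity drive per vdev
current_tb = 3 * (4 - 1) * DRIVE_TB

# Proposed layout: one 12-drive raidz3 vdev, three parity drives total
proposed_tb = (12 - 3) * DRIVE_TB

print(current_tb, proposed_tb)  # 13.5 13.5 -- same usable space
```

Interestingly, the usable space works out the same either way; the raidz3 
layout just trades the per-vdev single parity for pool-wide triple parity.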

Do you recommend swapping spare drives into the array periodically? It seems 
like it wouldn't really be any better than running a scrub over the same 
period, but I've heard of people doing it with hardware RAID controllers.
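If periodic scrubs are the answer, my plan would be to just schedule them 
from cron; a minimal sketch, assuming a pool named "tank" (substitute your 
own pool name):

```shell
# Hypothetical crontab entry (crontab -e): start a scrub every Sunday at 02:00
#   0 2 * * 0 /usr/sbin/zpool scrub tank

# Afterward, check scrub progress and any errors it turned up:
zpool status -v tank
```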
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss