On Feb 6, 2011, at 6:45 PM, Matthew Angelo wrote:

> I require a new high-capacity 8 disk zpool. The disks I will be
> purchasing (Samsung or Hitachi) have an Error Rate (non-recoverable,
> bits read) of 1 in 10^14 and will be 2TB. I'm staying clear of WD
> because they have the new 4096-byte sectors, which don't play nice
> with ZFS at the moment.
>
> My question is, how do I determine which of the following zpool and
> vdev configurations I should run to maximize space whilst mitigating
> rebuild failure risk?
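As an aside, the unrecoverable-read-error (URE) arithmetic implied by those data-sheet numbers is worth writing out. This sketch assumes the worst case for option 2 below: a raidz(7+1) resilver that must read all 7 surviving 2 TB disks in full (i.e. a nearly full pool).

```python
# Odds of hitting an unrecoverable read error while rebuilding a
# single-parity 7+1 set of the drives described above, assuming the
# resilver reads the 7 surviving 2 TB disks in full (worst case).
uer = 1e-14                               # non-recoverable errors per bit read
bits_read = 7 * 2e12 * 8                  # 7 disks x 2 TB x 8 bits/byte
expected_errors = bits_read * uer         # expected UREs per full rebuild
p_clean_rebuild = (1 - uer) ** bits_read  # chance of reading every bit cleanly
print(expected_errors)                    # ~1.1 expected UREs
print(p_clean_rebuild)                    # ~1 chance in 3 of a clean rebuild
```

In other words, at 1 in 10^14, a full-width single-parity rebuild of 2 TB drives expects roughly one unrecoverable read, which is exactly why the parity level matters here.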
The MTTDL[2] model will work:
http://blogs.sun.com/relling/entry/a_story_of_two_mttdl

As described, this model doesn't scale well for N > 3 or 4, but it will
get you in the ballpark. You will also need the MTBF from the data
sheet; if you don't have that info, that's OK, because you are asking
the right question: given a single drive type, what is the best
configuration for preventing data loss? Finally, to calculate the
raidz2 result, you need to know the mean time to recovery (MTTR),
which includes the logistical replacement time and the resilver time.
Basically, the model calculates the probability of a data loss event
during reconstruction. This differs between ZFS and most other LVMs
because ZFS resilvers only live data, and the total data <= disk size.

> 1. 2x RAIDZ(3+1) vdevs
> 2. 1x RAIDZ(7+1) vdev
> 3. 1x RAIDZ2(6+2) vdev
>
> I just want to prove I shouldn't run a plain old RAID5 (RAIDZ) with
> 8x 2TB disks.

Double parity will win over single parity. Intuitively, adding a level
of parity multiplies the numerator by another factor of MTBF, while
adding disks to a set only grows the denominator by a few digits.
Obviously, multiplication is a good thing; division, not so much. In
short, raidz2 is the better choice.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
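[For concreteness, the arithmetic behind the "multiply vs. divide" intuition can be sketched with the simple first-order MTTDL formulas (MTBF^2 / (N*(N-1)*MTTR) for single parity, MTBF^3 / (N*(N-1)*(N-2)*MTTR^2) for double parity). The MTBF and MTTR figures below are placeholders, not data-sheet values; substitute your own.]

```python
# Back-of-the-envelope MTTDL comparison for the three candidate layouts,
# using simple first-order MTTDL formulas. The MTBF and MTTR values are
# assumptions for illustration -- use the drive data sheet and your own
# replacement/resilver times.
HOURS_PER_YEAR = 8766

mtbf = 1_000_000   # drive MTBF in hours (assumed)
mttr = 24 + 12     # hours: logistical replacement + resilver (assumed)

def mttdl_single_parity(n, mtbf, mttr):
    """MTTDL for an n-disk single-parity set (raidz): survives one failure."""
    return mtbf ** 2 / (n * (n - 1) * mttr)

def mttdl_double_parity(n, mtbf, mttr):
    """MTTDL for an n-disk double-parity set (raidz2): survives two failures."""
    return mtbf ** 3 / (n * (n - 1) * (n - 2) * mttr ** 2)

# Option 1: 2x raidz(3+1) -- two independent sets, so the pool MTTDL halves.
opt1 = mttdl_single_parity(4, mtbf, mttr) / 2
# Option 2: 1x raidz(7+1)
opt2 = mttdl_single_parity(8, mtbf, mttr)
# Option 3: 1x raidz2(6+2)
opt3 = mttdl_double_parity(8, mtbf, mttr)

for label, hours in [("2x raidz(3+1)", opt1),
                     ("1x raidz(7+1)", opt2),
                     ("1x raidz2(6+2)", opt3)]:
    print(f"{label}: {hours / HOURS_PER_YEAR:,.0f} years MTTDL")
```

Whatever MTBF and MTTR you plug in, the double-parity option comes out orders of magnitude ahead, because the extra parity contributes another factor of MTBF/MTTR to the result, while the wider stripe only shrinks it polynomially.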