> Hi,

Hi tonmaus :) (btw, isn't that German for Audio Mouse?)
> the corners I am basing my previous idea on you can
> find here:
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAIDZ_Configuration_Requirements_and_Recommendations

Yep, me too :)

> I can confirm some of the recommendations already
> from personal practice. First and foremost this
> sentence: "The recommended number of disks per group
> is between 3 and 9. If you have more disks, use
> multiple groups."
>
> One example:
> I am running 11+1 disks in a single group now. I have
> recently changed the configuration from raidz to
> raidz2, and the scrub performance dropped from
> 500 MB/s to approx. 200 MB/s through the imposition of the
> second parity. I am sure that if I had chosen two
> groups in raidz, the performance would have been even
> better than in the original config, while I could still
> lose two drives in the pool, unless the losses
> occurred within a single group.

Except that that "if" is exactly what effectively brings the pool down to single-parity redundancy. You'd be counting on luck that the second disk wouldn't fail in the same vdev. And we've probably all heard ever so often that the second disk to fail often fails in the same array, right when you've added a replacement disk to the degraded array and started rebuilding. So the odds would actually count against you being lucky. I would've stuck with the larger configuration, like you :)

> The bottom line is that while increasing the number
> of stripes in a group, the performance, especially
> random I/O, will converge towards the performance of
> a single group member.

But in neither case would this apply to my two options. The stripe/device count would be the same for the vdev or vdevs. In option two, the vdev count would be quadrupled, but the device count would be the same. (FYI, the actual raw devices I'm trying to assemble are 2x2TB, 6x1TB and 8x500GB, in both my options resulting in 7 members per RAIDZ-2 vdev.)
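For clarity, the two layouts I'm weighing would look roughly like this. All device and metadevice names below are placeholders, not my actual hardware, and this is only a sketch of the intended shape, not tested commands:

```shell
# Option 1: SVM concats turn the small disks into 2 TB logical members,
# giving a single 7-member RAIDZ-2 vdev:
#   2 x 2TB   -> used whole
#   6 x 1TB   -> 3 two-disk concats  (d10-d12)
#   8 x 500GB -> 2 four-disk concats (d13-d14)
zpool create tank raidz2 \
    c1t0d0 c1t1d0 \
    /dev/md/dsk/d10 /dev/md/dsk/d11 /dev/md/dsk/d12 \
    /dev/md/dsk/d13 /dev/md/dsk/d14

# Option 2: every disk is cut into 500 GB slices (skipping s2, the
# conventional whole-disk slice), giving four 7-member RAIDZ-2 vdevs
# from 28 slices: each 2 TB disk feeds all four vdevs, each 1 TB disk
# feeds two of them, each 500 GB disk feeds one.
zpool create tank \
    raidz2 c1t0d0s0 c1t1d0s0 c2t0d0s0 c2t1d0s0 c2t2d0s0 c3t0d0s0 c3t1d0s0 \
    raidz2 c1t0d0s1 c1t1d0s1 c2t0d0s1 c2t1d0s1 c2t2d0s1 c3t2d0s0 c3t3d0s0 \
    raidz2 c1t0d0s3 c1t1d0s3 c2t3d0s0 c2t4d0s0 c2t5d0s0 c3t4d0s0 c3t5d0s0 \
    raidz2 c1t0d0s4 c1t1d0s4 c2t3d0s1 c2t4d0s1 c2t5d0s1 c3t6d0s0 c3t7d0s0
```

Either way the pool has the same 28 disks' worth of raw capacity; only the member boundaries differ.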
> The only reason why I am sticking with the single
> group configuration myself is that performance is
> "good enough" for what I am doing for now, and that
> "11 is not so far from 9".

This is also why I don't mind deviating a bit from the best practices. Performance is less important to me than effective storage space, which in turn is less important than security through redundancy.

> In your case, there are two other aspects:
> - if you pool small devices as JBODs below a vdev
> member, no superordinate parity will help you when
> you lose a member of the underlying JBOD. The whole
> pool will just be broken, and you will lose a good
> part of your data.

No, that's not correct. The first option, pooling smaller disks into larger logical devices via SVM, would theoretically allow me to lose up to [b]eight[/b] disks while still having a live zpool (in the case where I lose the two logical devices comprised of four 500GB drives each; that would only kill two actual RAIDZ-2 members). Using slices, I'd be able to lose up to [b]five[/b] disks (in the case where I lose one 2TB disk (affecting all four vdevs) and four 500GB disks, one from each vdev). I'd have to be extremely "lucky", though, for any of these scenarios to actually play out ;) But in any case, both of my options are, redundancy-wise, even in the worst case always [b]at least[/b] as robust as distributing the RAIDZ-2 vdevs over similar disks, while potentially being even more robust.

> - If you use slices as vdev members, performance will
> drop dramatically.

And this is what I'm asking about, [b]aside[/b] from the issue of ZFS not being able to utilize the sliced drives' write caches. Because performance is priority 3 for me. But if head thrashing occurs, the slice-n-dice method is clearly not the way ahead for me ;) So I'm still very open to any knowledge on this particular question.
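To double-check the failure-tolerance arithmetic above, here's a quick model I sketched. The disk labels are made up, and the only assumption is the standard one: a RAIDZ-2 vdev survives as long as no more than two of its members are hit, and a concatenated member is lost as soon as any disk backing it is lost:

```python
# Each vdev is a list of members; each member is the set of
# physical disks backing it.  RAIDZ-2 tolerates 2 failed members.
def pool_survives(vdevs, failed_disks):
    return all(
        sum(1 for member in vdev if member & failed_disks) <= 2
        for vdev in vdevs
    )

# Option 1: SVM concats -> one 7-member RAIDZ-2 vdev.
option1 = [[
    {"2T-a"}, {"2T-b"},                                     # 2 TB, used whole
    {"1T-a", "1T-b"}, {"1T-c", "1T-d"}, {"1T-e", "1T-f"},   # 1 TB pairs
    {"500-a", "500-b", "500-c", "500-d"},                   # 500 GB quads
    {"500-e", "500-f", "500-g", "500-h"},
]]

# Losing all eight 500 GB disks kills only two members -> pool lives.
eight = {f"500-{c}" for c in "abcdefgh"}
print(pool_survives(option1, eight))   # True

# Option 2: 500 GB slices -> four 7-member vdevs; each 2 TB disk
# backs one member in every vdev, each 1 TB disk in two vdevs,
# each 500 GB disk in one.
option2 = [
    [{"2T-a"}, {"2T-b"}, {"1T-a"}, {"1T-b"}, {"1T-c"}, {"500-a"}, {"500-b"}],
    [{"2T-a"}, {"2T-b"}, {"1T-d"}, {"1T-e"}, {"1T-f"}, {"500-c"}, {"500-d"}],
    [{"2T-a"}, {"2T-b"}, {"1T-a"}, {"1T-b"}, {"1T-c"}, {"500-e"}, {"500-f"}],
    [{"2T-a"}, {"2T-b"}, {"1T-d"}, {"1T-e"}, {"1T-f"}, {"500-g"}, {"500-h"}],
]

# One 2 TB disk (degrading all four vdevs) plus one 500 GB disk
# per vdev: five disks, every vdev at exactly double-parity loss.
five = {"2T-a", "500-a", "500-c", "500-e", "500-g"}
print(pool_survives(option2, five))                 # True

# Any sixth failure pushes some vdev past double parity.
print(pool_survives(option2, five | {"2T-b"}))      # False
```

So the eight-disk and five-disk figures are the best possible cases, not guarantees; the worst case for both layouts is still the plain RAIDZ-2 minimum of three failures killing the pool.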
> I can't see that raidz2 would be a good choice unless
> on the group layer, and raidz is probably good enough
> with comparably small disks and pool size.
>
> On the other side I am very curious what your
> findings are trying out what you have in mind... :-)

I'm already planning to do a blog post on this once I'm done :) It'll even include pictures of my modded computer case (just drilled 221 air holes in the front the other day ;) ).

Cheers,
Daniel :)
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss