Hello Wee,

Saturday, August 26, 2006, 6:43:05 PM, you wrote:
WYT> Thanks to all who have responded. I spent 2 weekends working through
WYT> the best practices that Jerome recommended -- it's quite a mouthful.

WYT> On 8/17/06, Roch <[EMAIL PROTECTED]> wrote:
>> My general principles are:
>>
>> If you can, to improve your 'Availability' metrics,
>> let ZFS handle one level of redundancy;

WYT> Cool. This is a good way to take advantage of the
WYT> error-detection/correction feature in ZFS. We will definitely take
WYT> this suggestion!

>> For Random Read performance, prefer mirrors over
>> raid-z. If you use raid-z, group together a smallish
>> number of volumes.
>>
>> Set up volumes that correspond to a small number of
>> drives (the smallest you can bear) with a volume
>> interlace in the [1M-4M] range.

WYT> I have a hard time picturing this wrt the 6920 storage pool. The
WYT> internal disks in the 6920 present up to 2 VDs per array (6-7 disks
WYT> each?). The storage pool will be built from a bunch of these VDs and
WYT> may be further partitioned into several volumes, with each volume
WYT> presented to a ZFS host. What should the storage profile look like?
WYT> I can probably do a stripe profile, since I can leave the redundancy
WYT> to ZFS.

IMHO, if you have a VD, make just one partition of it and present that
as a single LUN to ZFS. Do not present several partitions from the same
disks to ZFS as different LUNs.

WYT> To complicate matters, we are likely going to attach all our 3510s to
WYT> the 6920 and use some of these for the ZFS volumes, so further
WYT> restrictions may apply. Are we better off doing a direct attach?

You can attach 3510 JBODs (I guess) directly - but currently there are
restrictions: only one host and no MPxIO. If that's acceptable, it
looks like you'll get better performance than going through a 3510 head
unit.

ps. I did try it with MPxIO and two hosts connected, with several
JBODs - and I did see FC loop logout/login, etc.

-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
                                 http://milek.blogspot.com
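P.S. To make the "mirrors over raid-z" and "one LUN per VD" points
concrete, here is a minimal sketch - the cXtYdZ device names are
hypothetical stand-ins for whatever LUNs the 6920 presents to your host
(check with `format`), so adjust accordingly:

    # Mirrored pool: each pair of whole LUNs is one top-level mirror.
    # Preferred for random-read performance, per Roch's advice above.
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

    # Raid-z alternative: keep each raid-z group small
    # (here 5 LUNs, i.e. 4 data + 1 parity worth of capacity).
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

    # Verify the layout; ZFS checksums will report (and, with
    # redundancy, repair) any errors it finds.
    zpool status -v tank

Either way, feed ZFS whole LUNs (one per VD) rather than several
partitions carved from the same underlying disks.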