On 8/28/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Saturday, August 26, 2006, 6:43:05 PM, you wrote:
WYT> Thanks to all who have responded. I spent 2 weekends working through
WYT> the best practices that Jerome recommended -- it's quite a mouthful.
WYT> On 8/17/06, Roch <[EMAIL PROTECTED]> wrote:
>> My general principles are:
>>
>> If you can, to improve you 'Availability' metrics,
>> let ZFS handle one level of redundancy;
WYT> Cool. This is a good way to take advantage of the
WYT> error-detection/correcting feature in ZFS. We will definitely take
WYT> this suggestion!
>> For Random Read performance prefer mirrors over
>> raid-z. If you use raid-z, group together a smallish
>> number of volumes.
>> setup volumes that correspond to small number of
>> drives (smallest you can bear) with a volume
>> interlace that is in the [1M-4M] range.
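To make the quoted advice concrete, a rough sketch of the two layouts (the device names below are hypothetical placeholders, not actual 6920 LUN names):

```shell
# Mirrored pairs: best random-read performance, since either side of a
# mirror can service a read. Device names (c2t0d0 etc.) are placeholders.
zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

# If capacity matters more, keep each raid-z group small (e.g. 4 LUNs)
# rather than building one wide group:
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                  raidz c2t4d0 c2t5d0 c2t6d0 c2t7d0
```

These commands obviously depend on the actual LUNs the array presents, so treat them as an illustration of the pool shapes, not something to paste verbatim.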
WYT> I have a hard time picturing this wrt the 6920 storage pool. The
WYT> internal disks in the 6920 presents up to 2 VD per array (6-7 disk
WYT> each?). The storage pool will be built from a bunch of these VD and
WYT> may be further partitioned into several volumes and each volume is
WYT> presented to a ZFS host. What should the storage profile look like?
WYT> I can probably do a stripe profile since I can leave the redundancy to
WYT> ZFS.
IMHO if you have a VD, make just one partition and present it as a LUN to
ZFS. Do not present several partitions from the same disks to ZFS as
different LUNs.
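As an illustration of the one-LUN-per-VD point (again with hypothetical device names): when each LUN is a whole VD, each vdev maps onto an independent set of spindles, so ZFS's redundancy actually buys you something:

```shell
# Each LUN below is assumed to be one whole VD presented as a single
# partition. Mirroring across LUNs from *different* VDs keeps the two
# copies of each block on independent spindles.
zpool create tank mirror c3t0d0 c4t0d0

# Sanity-check that the two sides of the mirror really are separate LUNs:
zpool status tank
```

If instead two partitions from the same VD were mirrored against each other, both copies would share the same disks and a single spindle failure could take out both sides.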
I'm a real newbie here, as you can probably tell, and this is one of the
aspects I'm struggling with. The compromise seems to be putting in more
spindles without increasing the volume stripe size.
ZFS on simple disks manages itself nicely.
When constructing the VD from the 3510, we will likely stripe across 2-3 disks.
For the virtualisation strategy on the 6920, we will probably go with
concat. I do not imagine that striping here will play well with ZFS. I
just have to be careful not to present volumes from the same VDs.
Another alternative will be to just present the VDs directly without
virtualization.
Cost is the primary concern for this project.
WYT> To complicate matters, we are likely going to attach all our 3510 into
WYT> the 6920 and use some of these for the ZFS volumes, so further
WYT> restrictions may apply. Are we better off doing a direct attach?
You can attach 3510 JBODs (I guess) directly - but currently there are
restrictions - only one host and no MPxIO. If that's OK, it looks like
you'll get better performance than going with the 3510 head unit.
PS. I did try with MPxIO and two hosts connected, with several JBODs -
and I did see FC loop logout/login, etc.
I saw your benchmarking results. Great work there.
--
Just me,
Wire ...
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss