Hello grant,

Wednesday, May 31, 2006, 4:11:09 AM, you wrote:
gb> hi all,
gb>
gb> I am hoping to move roughly 1TB of maildir format email to ZFS, but
gb> I am unsure of what the most appropriate disk configuration on a 3510
gb> would be.
gb>
gb> based on the desired level of redundancy and usable space, my thought
gb> was to create a pool consisting of 2x RAID-Z vdevs (either double
gb> parity, or single parity with two hot-spares). using 300GB drives
gb> this would give roughly 2.4TB of usable space.
gb>
gb> I am presuming I will want the RAID module purely for the additional
gb> caching, and create a single LUN for each disk and present those to
gb> ZFS. the disks will most likely be directly attached to an
gb> X4100/X4200 using MPxIO, exported via NFS (some Linux NFSv3 clients,
gb> some Solaris 9, 10 and Nevada).
gb>
gb> I would like to get a feel for what others are doing in similar
gb> configurations. is the 3510 RAID module cache effective in such a
gb> configuration? I wasn't able to find any definitive answer to this in
gb> the documentation. RAID module or no RAID module? is it worth the
gb> extra cost?
gb>
gb> any insight other ZFS users could provide would be appreciated.

IMHO if you just want to map disk->LUN, it doesn't make sense to buy
RAID controllers at all.

I do have a config in which RAID-5 is done in HW RAID and raidz is then
built on top of those devices - that way, despite the lack of raidz2,
you get better redundancy - but write performance isn't stellar.

I also have a config with 3510 JBODs directly connected to a host using
two links with MPxIO and raidz - it works almost OK. By "almost" I mean
it works, but I can see many more IOs going to the disks than I should -
see the 'BIG IOs overhead due to ZFS' thread. Everything else works
fine.

--
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
                                 http://milek.blogspot.com
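[Editor's sketch] For readers following the thread, here is a minimal sketch of the layout grant describes: 12 x 300GB disks mapped one LUN per disk, built as either two single-parity raidz vdevs with two hot spares or two raidz2 vdevs, both giving roughly 2.4TB usable. The pool name, dataset name, and cXtYdZ device names below are made up for illustration; substitute the actual MPxIO device names.

  # two 5-disk raidz vdevs plus two hot spares (~2.4TB usable with 300GB disks)
  zpool create mail \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
      raidz c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
      spare c2t10d0 c2t11d0

  # double-parity alternative, same usable space, no spares
  # (needs a build that already has raidz2 support):
  # zpool create mail \
  #     raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  #     raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0

  # dataset for the maildirs, exported over NFS to the Linux/Solaris clients
  zfs create mail/maildir
  zfs set sharenfs=rw mail/maildir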