Andy Lubel wrote:
> With my (COTS) LSI 1068 and 1078 based controllers I get consistently
> better performance when I export all disks as jbod (MegaCli -
> CfgEachDskRaid0).

Is that really 'all disks as JBOD', or is it 'each disk as a single-drive RAID0'?
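(For reference, the each-disk-RAID0 export being discussed is the MegaCli -CfgEachDskRaid0 command. Exact option names and ordering vary between MegaCli releases, so treat the flags below as an approximation, not a verified invocation:)

```shell
# Create one single-drive RAID0 logical volume per physical disk,
# on every adapter (-aALL).
# WB = write-back cache, RA = read-ahead, Direct = direct I/O path
# (option spelling/order may differ across MegaCli versions)
MegaCli -CfgEachDskRaid0 WB RA Direct -aALL
```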
It may not sound different on the surface, but I asked in another thread and others confirmed that if your RAID card has a battery-backed cache, giving ZFS many single-drive RAID0's is much better than JBOD (using the 'nocacheflush' option may improve it even more). My understanding is that it's kind of like the best of both worlds: you get the higher number of spindles and vdevs for ZFS to manage, ZFS gets to do the redundancy, and the HW RAID cache gives virtually instant acknowledgement of writes, so that ZFS can be on its way.

So I think many RAID0's is not always the same as JBOD. That's not to say that even true JBOD doesn't still have an advantage over HW RAID; I don't know that for sure. But I think there is a use for HW RAID in ZFS configs, which wasn't always the theory I've heard.

> I have really learned not to do it this way with raidz and raidz2:
>
> #zpool create pool2 raidz c3t8d0 c3t9d0 c3t10d0 c3t11d0 c3t12d0
> c3t13d0 c3t14d0 c3t15d0

Why? I know not to create raidz's with more than 9-12 devices, but that doesn't cross that threshold. Is there a reason you'd split 8 disks up into 2 groups of 4? What experience led you to this? (Just so I don't have to repeat it. ;) )

-Kyle

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
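P.S. For anyone following along, the two layouts being compared would look something like this (device names taken from the example above; these are alternative sketches for the same 8 disks, not commands to run back to back):

```shell
# Option A - one 8-disk raidz vdev:
# usable capacity of 7 disks, but only one vdev's worth of IOPS
zpool create pool2 raidz c3t8d0 c3t9d0 c3t10d0 c3t11d0 \
                         c3t12d0 c3t13d0 c3t14d0 c3t15d0

# Option B - two 4-disk raidz vdevs in one pool:
# usable capacity drops to 6 disks, but ZFS stripes writes
# across two vdevs, so random I/O and resilver times improve
zpool create pool2 raidz c3t8d0  c3t9d0  c3t10d0 c3t11d0 \
                   raidz c3t12d0 c3t13d0 c3t14d0 c3t15d0
```

In other words, splitting into 2 groups of 4 trades one disk of capacity for a second vdev to stripe across.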