Dearl Neal wrote:
I have been testing the performance of ZFS vs. UFS using filebench. The setup is a V240, 4 GB RAM, 2 CPUs @ 1503 MHz, one 320 GB SAN-attached LUN, and a ZFS mirrored root disk. Our SAN is a top-notch, NVRAM-based array. There are lots of discussions about using ZFS with SAN-based storage, and it seems ZFS is designed to perform best with dumb disks (JBODs). The tests I ran support this observation, and no matter which ZFS kernel tunables I set, I just can't seem to get the performance from ZFS that I can get out of UFS under the Solaris Volume Manager (SVM). I am using the single-LUN test because it performed better than any striping configuration I came up with. We don't use software RAID of any kind, because the SAN does it all for us.

One interesting test revealed better performance using the SMI label on our LUNs than the EFI label. This is true for the fileserver, large_db_oltp_8k_uncached, and large_db_oltp_8k_cached workloads from filebench. The fileserver differences were not that great, but the db workloads performed 4x better on the SMI-labeled LUNs than on the EFI-labeled LUNs.

Does anyone know why ZFS would perform better with the SMI-labeled LUNs than the EFI-labeled LUNs? Is this the way it is supposed to be? Thanks.
You need to check the block offset for the start of the EFI partitions. Currently those are set to 256, but in older systems they were 34.
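For illustration, here is a rough sketch of the arithmetic (the 64 KB array segment size and the 8 KB I/O size are assumptions for the example, not values from your array): with a start offset of 34 sectors the 8 KB I/Os land misaligned relative to segment boundaries, while 256 sectors (128 KB) lines up cleanly.

# Sketch: why the partition start offset matters for alignment.
# Assumed values (hypothetical): 64 KB array segment (column) size,
# 8 KB I/Os as in the filebench large_db_oltp_8k_* profiles.

SECTOR = 512                 # bytes per sector
SEGMENT = 64 * 1024          # assumed array segment/column size in bytes
IO_SIZE = 8 * 1024           # 8 KB database I/O

def straddles(start_sector: int, n_ios: int = 1024) -> int:
    """Count how many of n_ios consecutive 8 KB I/Os cross a segment
    boundary when the partition begins at start_sector."""
    base = start_sector * SECTOR
    crossings = 0
    for i in range(n_ios):
        off = base + i * IO_SIZE
        # An I/O straddles a column if it starts in one segment and ends in the next.
        if off // SEGMENT != (off + IO_SIZE - 1) // SEGMENT:
            crossings += 1
    return crossings

for start in (34, 256):      # old vs. current EFI data-partition start
    print(f"start sector {start:>3}: {straddles(start)} of 1024 I/Os straddle a segment")

With these assumed sizes it reports 128 straddled I/Os out of 1024 for a start of 34 and none for 256; each straddled I/O hits two columns instead of one, which is the kind of overhead that shows up as extra back-end work on the array.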
In the case of a RAID array (nit: which runs software RAID) you could be straddling the columns, which would have a negative impact on a hard-disk-based array. This would not be noticed on a JBOD because there are no columns to straddle. -- richard