Bart Van Assche wrote:
>> If I understand this correctly, you've striped the disks together
>> w/ Linux lvm, then exported a single iSCSI volume to ZFS (or two for
>> mirroring; which isn't clear).
>
> The disks in the SAN servers were indeed striped together with Linux LVM and
> exported as a single volume to ZFS. The ZFS pool was set up as follows:
>
> $ zpool create -f storagepoola raidz1 c4t8d0 c4t6d0 c4t12d0 c4t10d0
>
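For comparison, a single raidz1 vdev delivers roughly one device's worth of random IOPS, whereas each additional top-level vdev adds an independent I/O queue. A sketch of an alternative layout using the same four device names (the pool name `tank` is hypothetical, and this trades usable capacity for concurrency):

```shell
# Hypothetical alternative: two 2-way mirrors instead of one 4-disk
# raidz1. ZFS stripes across both mirror vdevs, giving two independent
# per-vdev I/O queues at the cost of usable capacity.
zpool create tank mirror c4t8d0 c4t6d0 mirror c4t12d0 c4t10d0

# Confirm the resulting vdev layout.
zpool status tank
```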
Please see the discussions on raidz performance in the Best Practices Guide.

> $ zpool get all storagepoola
> NAME          PROPERTY     VALUE                 SOURCE
> storagepoola  size         12.5T                 -
> storagepoola  used         71.6G                 -
> storagepoola  available    12.4T                 -
> storagepoola  capacity     0%                    -
> storagepoola  altroot      -                     default
> storagepoola  health       ONLINE                -
> storagepoola  guid         13384031601381355037  -
> storagepoola  version      10                    default
> storagepoola  bootfs       -                     default
> storagepoola  delegation   on                    default
> storagepoola  autoreplace  off                   default
> storagepoola  cachefile    -                     default
> storagepoola  failmode     wait                  default
>
>> I don't know how many concurrent IOs Solaris thinks your iSCSI volumes
>> will handle, but that's one area to examine. The only way to realize full
>> performance is going to be to get ZFS to issue multiple
>> IOs to the iSCSI boxes at once.
>
> I have read the zpool and zfs man pages, but it is still not clear to me how I
> can configure ZFS so that it issues concurrent I/Os to the iSCSI boxes.

I don't know how to tell ZFS to not issue concurrent I/Os. By default, it will try to queue up to 35 I/Os to each vdev. You should be able to use iostat to examine the queues and vdev performance.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
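A sketch of how the queues mentioned above can be observed (intervals are illustrative; `zfs_vdev_max_pending` is the Solaris-era kernel tunable behind the 35-I/O-per-vdev default):

```shell
# Per-vdev bandwidth and operation counts for this pool, refreshed
# every 5 seconds.
zpool iostat -v storagepoola 5

# Per-device view: the actv column shows I/Os currently outstanding on
# each LUN. actv consistently far below 35 suggests ZFS is not keeping
# the iSCSI queues full.
iostat -xn 5

# Inspect the current queue-depth tunable on a live system (Solaris mdb).
echo "zfs_vdev_max_pending/D" | mdb -k

# Or set it persistently in /etc/system (takes effect after reboot):
# set zfs:zfs_vdev_max_pending = 35
```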