On Tue, Aug 4, 2009 at 11:21 AM, Ross Walker<rswwal...@gmail.com> wrote:
> On Tue, Aug 4, 2009 at 9:57 AM, Charles Baker<no-re...@opensolaris.org> wrote:
>>> My testing has shown some serious problems with the
>>> iSCSI implementation for OpenSolaris.
>>>
>>> I setup a VMware vSphere 4 box with RAID 10
>>> direct-attached storage and 3 virtual machines:
>>> - OpenSolaris 2009.06 (snv_111b) running 64-bit
>>> - CentOS 5.3 x64 (ran yum update)
>>> - Ubuntu Server 9.04 x64 (ran apt-get upgrade)
>>>
>>> I gave each virtual 2 GB of RAM, a 32 GB drive and
>>> setup a 16 GB iSCSI target on each (the two Linux vms
>>> used iSCSI Enterprise Target 0.4.16 with blockio).
>>> VMware Tools was installed on each. No tuning was
>>> done on any of the operating systems.
>>>
>>> I ran two tests for write performance - one on the
>>> server itself and one from my Mac connected via a
>>> Gigabit (MTU of 1500) iSCSI connection using
>>> globalSAN’s latest initiator.
>>>
>>> Here’s what I used on the servers:
>>> time dd if=/dev/zero of=/root/testfile bs=1048576k
>>> count=4
>>> and the Mac OS with the iSCSI connected drive
>>> (formatted with GPT / Mac OS Extended journaled):
>>> time dd if=/dev/zero of=/Volumes/test/testfile
>>> bs=1048576k count=4
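(For anyone checking the numbers: a quick sketch of how the MB/s figures work out. dd writes bs=1048576k * count=4 = 4 GiB total; divide by the elapsed time reported by `time`. The 48-second elapsed time here is hypothetical.)

```shell
# Throughput arithmetic for the dd runs (elapsed time is hypothetical)
bytes=$((4 * 1024 * 1024 * 1024))     # bs=1048576k * count=4 = 4 GiB
seconds=48                            # hypothetical `time` result
mbps=$((bytes / 1048576 / seconds))   # using 1 MB = 1,048,576 bytes
echo "${mbps} MB/s"                   # prints "85 MB/s"
```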
>>>
>>> The results were very interesting (all calculations
>>> using 1 MB = 1,048,576 bytes).
>>>
>>> For OpenSolaris, the local write performance averaged
>>> 86 MB/s. I turned on lzjb compression for rpool (zfs
>>> set compression=lzjb rpool) and it went up to 414
>>> MB/s (since I’m writing zeros). The average
>>> performance via iSCSI was an abysmal 16 MB/s (even
>>> with compression turned on - with it off, 13 MB/s).
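(For reference, the compression step amounts to the following; the `compressratio` check is an addition not in the original test, shown here as a way to confirm the effect of writing zeros.)

```shell
# Enable lzjb compression on the root pool (inherited by descendants)
zfs set compression=lzjb rpool
# Verify the setting and the observed compression ratio
zfs get compression,compressratio rpool
```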
>>>
>>> For CentOS (ext3), local write performance averaged
>>> 141 MB/s. iSCSI performance was 78 MB/s (almost as
>>> fast as local ZFS performance on the OpenSolaris
>>> server when compression was turned off).
>>>
>>> Ubuntu Server (ext4) had 150 MB/s for the local
>>> write. iSCSI performance averaged 80 MB/s.
>>>
>>> One of the main differences between the three virtual
>>> machines was that the iSCSI target on the Linux
>>> machines used partitions with no file system. On
>>> OpenSolaris, the iSCSI target created sits on top of
>>> ZFS. That creates a lot of overhead (although you do
>>> get some great features).
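(To make the difference concrete, a sketch of the two backends; all IQNs, device paths, and dataset names here are hypothetical.)

```shell
# Linux (IET) side: export a raw partition with blockio -- an
# /etc/ietd.conf entry, so writes bypass any file system:
#   Target iqn.2009-08.local:test.lun0
#           Lun 0 Path=/dev/sdb1,Type=blockio
#
# OpenSolaris side: the target is backed by a ZFS volume instead,
# so every write goes through ZFS (checksums, copy-on-write, etc.):
zfs create -V 16g rpool/tgtvol
zfs set shareiscsi=on rpool/tgtvol
```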
>>>
>>> Since all the virtual machines were connected to the
>>> same switch (with the same MTU), had the same amount
>>> of RAM, used default configurations for the operating
>>> systems, and sat on the same RAID 10 storage, I’d say
>>> it was a pretty level playing field.
>>>
>>> While jumbo frames will help iSCSI performance, they
>>> won’t overcome inherent limitations of the iSCSI
>>> target’s implementation.
>
> If you want to host your VMs from Solaris (Open or not), use NFS
> right now, as the iSCSI implementation is still quite immature and
> won't perform nearly as well as the Linux implementation. Until
> COMSTAR stabilizes and replaces iscsitgt, I would hold off on iSCSI
> on Solaris.
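(The NFS route amounts to something like the following; dataset name and network are hypothetical, and the share options are a sketch of Solaris sharenfs syntax.)

```shell
# Create a dataset for VM storage and export it over NFS
zfs create rpool/vmstore
# Grant read/write plus root access to the ESX host subnet
zfs set sharenfs='rw,root=@192.168.1.0/24' rpool/vmstore
```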

This sounds crazy, but I was wondering if anyone has tried running a
Linux iSCSI target from within a domU in Xen on OpenSolaris 2009.06,
backed by a ZVOL on dom0.

Of course the zpool still needs an NVRAM or SSD ZIL to perform well,
but if the Xen dom0 is stable and the Crossbow networking works well,
this could offer the best of both worlds.
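(In outline, the idea would look something like this; sizes and names are hypothetical.)

```shell
# In dom0: carve a ZVOL for the Linux domU to use as its backing store
zfs create -V 32g rpool/xen/domu-iscsi
# Inside the domU, the ZVOL appears as an ordinary block device,
# which IET can then export with Type=blockio, just as on the
# bare-metal Linux targets -- Linux target code, ZFS storage underneath.
```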

-Ross
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss