On 11/8/2012 12:35 PM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> the VM running "a ZFS OS" enjoys PCI pass-through, so it gets dedicated
>> hardware access to the HBA(s) and hard disks at raw speeds, with no
>> extra layers of lag in between.
>
> Ah.  But even with PCI pass-thru, you're still limited by the virtual LAN
> switch that connects ESXi to the ZFS guest via NFS.  When I connected ESXi
> and a guest this way, the bandwidth between the host & guest was purely CPU
> and memory limited, because there's no real network interface involved;
> you're just emulating the LAN internally.  I streamed data as fast as I
> could between ESXi and a guest, and found only about 2-3 Gbit.  That was
> over a year ago, so I forget precisely how I measured it ... NFS read/write
> perhaps, or wget or something.  I know I didn't use ssh or scp, because
> those tend to slow down network streams quite a bit.  The virtual network
> is a bottleneck (unless you're only using 2 disks, in which case 2-3 Gbit
> is fine).
>
> I think THIS is where we're disagreeing:  I'm saying "only 2-3 Gbit," but I
> see Dan's email said "since the traffic never leaves the host (I get
> 3gb/sec or so usable thruput.)" and "No offense, but quite a few people are
> doing exactly what I describe and it works just fine..."  It would seem we
> simply have different definitions of "fine" and "abysmal."
> ;-)
Now you have me totally confused.  How does your setup get data from the
guest to the OI box?  If it goes through a wire, and that wire is gig-e,
it's going to be 1/3 to 1/2 the speed of the other way (1 Gbit on the wire
vs. the 2-3 Gbit you measured internally).  If you're saying you use 10gig
or some such, we're talking about a whole different animal.
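
FWIW, if you do re-measure the host-to-guest path, something like the little
raw-socket probe below is what I'd reach for, since it keeps ssh/scp (and NFS
caching) out of the numbers.  Just a sketch: the port and transfer size are
arbitrary placeholders, and iperf does the same job if you have it installed
on both ends.

#!/usr/bin/env python
# Crude throughput probe: run "probe.py server" on one box and
# "probe.py client <server-ip>" on the other.  No encryption in the
# path, so the result reflects the vswitch + TCP stack rather than
# cipher overhead.  Port and sizes are arbitrary.
import socket, sys, time

PORT  = 5001        # any unprivileged port
CHUNK = 1 << 20     # 1 MiB per send/recv
TOTAL = 4 << 30     # push 4 GiB total

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    received = 0
    start = time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.time() - start
    print("%.2f Gbit/s over %d bytes" % (received * 8 / elapsed / 1e9, received))

def client(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()

if __name__ == "__main__":
    if sys.argv[1:] == ["server"]:
        server()
    elif len(sys.argv) == 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        sys.exit("usage: probe.py server | probe.py client <server-ip>")

Run the server side in the ZFS/OI guest and the client on whatever sits at the
other end of the link (or swap them); the Gbit/s figure it prints is the
ceiling for anything NFS can push over that path.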