On 11/7/2012 10:02 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
> I formerly did exactly the same thing. Of course performance is abysmal
> because you're booting a guest VM to share storage back to the host where the
> actual VMs run. Not to mention, there's the startup dependency, which is
> annoying to work around. But yes, it works.
I'm curious here. Your experience is 180 degrees opposite from mine. I
run an all-in-one in production and I get native disk performance, and
ESXi virtual disk I/O is faster than with a physical SAN/NAS for the NFS
datastore, since the traffic never leaves the host (I get 3gb/sec or so
of usable throughput). One essential (IMO) for this is passing an HBA into
the SAN/NAS VM using VT-d. If you weren't doing this, I'm not surprised
the performance sucked. If you were doing this and still saw abysmal
performance, something else was wrong. No offense, but quite a few people
are doing exactly what I describe and it works just fine - there IS the
startup dependency, but I can live with that...
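
For anyone who hasn't built one of these, the storage-VM side of the all-in-one
boils down to roughly the commands below. This is only a sketch - the pool name,
disk IDs, addresses, datastore name and share options are placeholders, and the
device names will be whatever your passed-through HBA presents:

    # On the OpenIndiana/Solaris storage VM (the HBA and its disks are visible
    # directly thanks to VT-d pass-through):
    zpool create tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0
    zfs create tank/vmstore
    zfs set sharenfs=rw=@10.0.0.0/24,root=@10.0.0.0/24 tank/vmstore

    # On the ESXi host, mount that share as an NFS datastore:
    esxcli storage nfs add -H 10.0.0.10 -s /tank/vmstore -v vm-nfs

The "traffic never leaves the host" part is the key: if the storage VM and the
VMkernel NFS port sit on the same internal vSwitch, the NFS traffic is just
memory copies, which is where the speed comes from.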
>> 1: If you were given the same hardware, what would you do? (RAID card is
>> an extra EUR30 or so a month, which I don't really want to spend, but could,
>> if need be...)
> I have abandoned ESXi in favor of OpenIndiana or Solaris running as the host,
> with VirtualBox running the guests. I am SOOOO much happier now. It takes
> a higher level of expertise than running ESXi, but the results are much better.
In what respect? Due to the 'abysmal performance'?
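
For reference, the OpenIndiana/Solaris-as-host setup Edward describes looks
roughly like this with VirtualBox - again just a sketch with made-up names
(guest name, zvol size, OS type, host NIC), using a zvol handed to the guest
as a raw disk:

    # Carve a zvol out of the pool for the guest's disk and wrap it in a raw vmdk:
    zfs create -V 40G tank/guest1
    VBoxManage internalcommands createrawvmdk -filename /tank/guest1.vmdk \
        -rawdisk /dev/zvol/rdsk/tank/guest1

    # Create the guest, attach the disk, and start it headless
    # (the ostype and the host NIC name e1000g0 are assumptions):
    VBoxManage createvm --name guest1 --ostype Linux_64 --register
    VBoxManage modifyvm guest1 --memory 4096 --cpus 2 --nic1 bridged --bridgeadapter1 e1000g0
    VBoxManage storagectl guest1 --name SATA --add sata
    VBoxManage storageattach guest1 --storagectl SATA --port 0 --device 0 \
        --type hdd --medium /tank/guest1.vmdk
    VBoxHeadless --startvm guest1 &

One nice side effect of that layout is that each guest's disk is its own zvol,
so per-VM snapshots and clones come straight from ZFS rather than from the
hypervisor.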