On Dec 8, 2010, at 11:41 PM, Edward Ned Harvey 
<opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:

> For anyone who cares:
> 
> I created an ESXi machine.  Installed two guest (centos) machines and
> vmware-tools.  Connected them to each other via only a virtual switch.  Used
> rsh to transfer large quantities of data between the two guests,
> unencrypted, uncompressed.  Have found that ESXi virtual switch performance
> peaks around 2.5Gbit.
> 
> Also, if you have a NFS datastore, which is not available at the time of ESX
> bootup, then the NFS datastore doesn't come online, and there seems to be no
> way of telling ESXi to make it come online later.  So you can't auto-boot
> any guest, which is itself stored inside another guest.
> 
> So basically, if you want a layer of ZFS in between your ESX server and your
> physical storage, then you have to have at least two separate servers.  And
> if you want anything resembling actual disk speed, you need infiniband,
> fibre channel, or 10G ethernet.  (Or some really slow disks.)   ;-)

Besides the chicken-and-egg scenario that Ed mentions, there is also the CPU 
overhead of running the storage virtualized. You might find that as you put more 
machines on the storage, performance drops a lot faster than it otherwise would 
standalone, because the storage is competing for CPU with the very machines it 
is supposed to be serving.
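For a rough sense of what Ed's ~2.5Gbit peak means in practice, here is a small sketch (my own illustration, not from Ed's test) for converting a measured transfer into gigabits per second:

```python
def gbit_per_sec(nbytes: int, seconds: float) -> float:
    """Convert a transfer of nbytes completed in `seconds` to gigabits/sec."""
    return nbytes * 8 / seconds / 1e9

# Hypothetical numbers: moving 4 GiB between guests in 13.7 s
# works out to roughly the 2.5 Gbit/s peak Ed observed.
print(round(gbit_per_sec(4 * 2**30, 13.7), 2))
```

At that rate a full backing disk's worth of data takes long enough that, as Ed says, you want infiniband, FC, or 10G ethernet between the boxes.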

-Ross

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
