> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org 
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus
> Sent: Wednesday, August 11, 2010 3:53 PM
> To: ZFS Discussions
> Subject: [zfs-discuss] ZFS and VMware
> 
>        I am looking for references from folks using ZFS with either
> NFS or iSCSI as the backing store for VMware (4.x) virtual machines.
> We asked the local VMware folks and they had not even heard of ZFS.
> Part of what we are looking for is a recommendation for NFS or iSCSI,
> and all VMware would say is "we support both". We are currently using
> Sun SE-6920, 6140, and 2540 hardware arrays via FC. We have started
> playing with ZFS/NFS, but have no experience with iSCSI. The ZFS
> backing store in some cases will be the hardware arrays (the 6920 has
> fallen off of VMware's supported list, and VMware suggested that if
> we front it with either NFS or iSCSI it will be supported), and some
> of it will be backed by J4400 SATA disk.
> 
>        I have seen some discussion of this here, but it has all been
> related to very specific configurations and issues; I am looking for
> general recommendations and experiences. Thanks.
> 

It really depends on your VM system, what you plan to do with your VMs, and 
how you plan to do it. 

I have the vSphere Enterprise product and I am using the DRS feature, so VMs 
are vmotioned around my cluster all throughout the day. All of my VM users are 
able to create and manage their own VMs through the vSphere client. None of 
them care to know anything about VM storage as long as it's fast, and most of 
them don't want to have to make choices about which datastore to put their new 
VM on. Only 30-40% of the total number of VMs registered in the cluster are 
powered on at any given time. 

I am using OpenSolaris and ZFS to provide a relatively small NFS datastore as a 
proof of concept. I am trying to demonstrate that it's a better solution for us 
than our existing one, which is Windows Storage Server and the MS iSCSI 
Software Target. The ZFS-based datastore is hosted on six 146GB 10k RPM SAS 
drives configured as three 2-way mirror vdevs, with a 30GB SSD as L2ARC and a 
1GB ramdisk as the SLOG. Deduplication and compression (lzjb) are enabled. The 
server itself is a dual quad-core Core 2-class system with 48GB RAM; it will 
become a VM host in the cluster after this project is concluded. 
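
For the curious, here is a minimal sketch of how a pool like that can be
built. The device names are placeholders for my hardware, and the ramdisk
SLOG is volatile, so this is strictly a proof-of-concept configuration:

    # 1GB ramdisk to use as the SLOG (contents are lost on reboot -
    # fine for a proof of concept, never for production)
    ramdiskadm -a slog0 1g

    # three 2-way mirrors, an SSD L2ARC, and the ramdisk log device
    zpool create vmpool \
        mirror c0t0d0 c0t1d0 \
        mirror c0t2d0 c0t3d0 \
        mirror c0t4d0 c0t5d0 \
        cache c0t6d0 \
        log /dev/ramdisk/slog0

    # dedup and lzjb compression, inherited by everything in the pool
    zfs set dedup=on vmpool
    zfs set compression=lzjb vmpool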

Based on the experience and information I've gathered thus far, here is what I 
think:

The biggest thing for me is that with ZFS I think I will be able to use 
deduplication, compression, and a stripe of mirror vdevs, whereas with other 
products I would need RAID 5 or 6 to get enough capacity within my budget. 
Larger/cheaper drives are also a possibility with ZFS, since 
dedup/compression/ARC/L2ARC all cut down on I/O to the disks. 

NFS Pros: NFS is much easier and faster to configure. Dedup and compression 
work better because the VM files sit directly on the filesystem. There is 
potential for faster provisioning by doing a local copy instead of having 
VMware do it remotely over NFS. It's nice to be able to get at each VM file 
directly from the filesystem rather than remotely via the vSphere client or 
the service console (which will disappear with the next VMware release). You 
can use fast SSD SLOGs to accelerate most (all?) writes to the ZIL.
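
To give a sense of how little setup NFS needs, here is a sketch (the
filesystem name and subnet are made up; ESX mounts NFS as root, hence the
root= entry):

    # create a filesystem and export it read/write with root access
    # for the ESX VMkernel subnet
    zfs create vmpool/nfsds01
    zfs set sharenfs='rw=@192.168.10.0/24,root=@192.168.10.0/24' vmpool/nfsds01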

NFS Cons: VMware does not do NFSv4, so each of your filesystems will require a 
separate mount, and there is a maximum number of mounts per cluster (64 with 
vSphere 4.0). There is no opportunity for load balancing between the client and 
a single datastore. VMware issues every write as a synchronous write, so you 
really need SLOGs (or a RAID controller with BBWC) to make the hosted VMs 
usable. And VMware does not give you any VM-level disk performance statistics 
for VMs on an NFS datastore (at least, not through the client).
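
On the synchronous-write point: a dedicated log device can be added to a live
pool at any time, so it is easy to retrofit one if the ZIL turns out to be
the bottleneck. A sketch, with a placeholder device name:

    # add an SSD as a dedicated ZIL log device
    zpool add vmpool log c0t7d0

    # then watch per-vdev activity while VMs are running
    zpool iostat -v vmpool 5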

iSCSI Pros: COMSTAR rocks - you can set up your LUNs on an iSCSI target today 
and move them to FC/FCoE/SRP tomorrow. Cloning zvols is fast, which could be 
leveraged for fast VM provisioning. iSCSI supports multipathing, so you can 
take advantage of VMware's built-in NMP to do load balancing. You don't need a 
SLOG as much, because you'll only see synchronous writes when the VM requests 
them.
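
For anyone who hasn't tried COMSTAR yet, here is roughly what carving out an
iSCSI LUN looks like (pool and zvol names are made up, and the GUID comes
from the sbdadm output):

    # enable the STMF framework and the iSCSI target service
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default

    # a sparse 500GB zvol to back the LUN
    zfs create -s -V 500g vmpool/esxlun0

    # register the zvol as a logical unit; note the GUID it prints
    sbdadm create-lu /dev/zvol/rdsk/vmpool/esxlun0

    # expose the LU to all initiators and create a target
    stmfadm add-view <GUID-from-sbdadm>
    itadm create-target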

iSCSI Cons: It's harder to take full advantage of dedup and compression. Basic 
configuration is not hard, but it is still more involved than NFS. Other than 
that, the cons are all VMware-related. vSphere has a limit of 256 LUNs per 
host, which in a cluster supporting vmotion effectively means 256 LUNs per 
cluster. That limit may rule out cloning zvols to speed up VM provisioning. 
You can have multiple VMs per LUN using VMFS, but if you make LUNs too large 
you run into SCSI reservation/locking contention when provisioning - the 
general wisdom is to keep each LUN as small as practical while staying under 
the 256-LUN ceiling. This means your storage is chopped up into little pieces, 
which can be annoying to deal with. The worst thing I've experienced with 
iSCSI and VMFS is LUN resignaturing - if you move a LUN from one host 
(target?) to another, VMware will think it's a copy, will want to resignature 
the VMFS, and will want you to re-register every VM in the filesystem. 
vSphere 4.0 is supposed to offer the ability to mount a VMFS copy without 
resignaturing, but I've only been able to get this to work on a single host, 
not on every host in a cluster. Resignaturing is really painful.
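
For reference, the 4.0 mount/resignature mechanics I've been fighting with
are driven per-host from the service console (or vCLI); at least, these are
the commands I've been using:

    # list VMFS volumes that ESX has detected as snapshots/copies
    esxcfg-volume -l

    # persistently mount one of them WITHOUT resignaturing...
    esxcfg-volume -M <VMFS-UUID-or-label>

    # ...or force a resignature (and plan on re-registering the VMs)
    esxcfg-volume -r <VMFS-UUID-or-label>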

Beyond the above, there are some possibilities for the future that may also 
inform your decision. 

If VMware ever releases an NFSv4 or NFSv4.1 client, we would get multiple 
filesystems per NFS mount and/or pNFS, either of which would be great. 
Multiple filesystems per mount would allow provisioning by cloning filesystems 
(one VM or VM group per filesystem), and would let filesystem-level snapshots 
stand in for VMware snapshots. Since vSphere 4.1 was released a couple of 
weeks ago without NFSv4 support, I would not anticipate any of this becoming 
available until 4.2 or whatever their next major release is. 
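
To make the provisioning idea concrete: on the ZFS side, a clone of a golden
template is nearly instant and initially consumes no extra space. A sketch
with hypothetical names:

    # one filesystem per VM, cloned from a golden template
    zfs snapshot vmpool/nfsds01/golden@deploy
    zfs clone vmpool/nfsds01/golden@deploy vmpool/nfsds01/vm042

Under NFSv3, though, each of those clones would need its own mount on every
host, which is exactly what the 64-mount limit makes impractical today.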

With vSphere 4.1, VMware has introduced the vStorage API for Array Integration 
(VAAI), which seems to be fancy marketing wrapped around the implementation of 
a few "optional" SCSI commands (hardware-assisted locking, full copy, and 
block zeroing). VAAI claims to accelerate provisioning and provide block-level 
locking for VMFS when used with compatible storage. If COMSTAR does or will 
implement these commands, I think large iSCSI LUNs become a lot easier to deal 
with and very attractive.

Overall I think NFS and iSCSI are both excellent ways to get VMware using ZFS 
as a datastore. Hope the above is helpful to you in making a decision.

-Will