On Aug 21, 2010, at 4:40 PM, Richard Elling <rich...@nexenta.com> wrote:
> On Aug 21, 2010, at 10:14 AM, Ross Walker wrote:
>> I'm planning on setting up an NFS server for our ESXi hosts and plan on
>> using a virtualized Solaris or Nexenta host to serve ZFS over NFS.
>
> Please follow the joint EMC+NetApp best practices for VMware ESX servers.
> The recommendations apply to any NFS implementation for ESX.

Thanks, I'll check that out! Always looking for advice on how best to tweak
NFS for ESX.

I have a current ZFS over NFS implementation, but on direct-attached storage
using Sol10. I will be interested to see how Nexenta compares.

>> The storage I have available is provided by Equallogic boxes over 10GbE
>> iSCSI.
>>
>> I am trying to figure out the best way to provide both performance and
>> resiliency, given that the Equallogic provides the redundancy.
>>
>> Since I am hoping to provide a 2TB datastore, I am thinking of carving out
>> either 3 1TB LUNs or 6 500GB LUNs that will be RDM'd to the storage VM,
>> and within the storage server setting up either 1 raidz vdev with the 1TB
>> LUNs (fewer RDMs) or 2 raidz vdevs with the 500GB LUNs (more fine-grained
>> expandability, working in 1TB increments).
>>
>> Given the 2GB of write-back cache on the Equallogic, I think the integrated
>> ZIL would work fine (needs benchmarking though).
>
> This should work fine.
>
>> The vmdk files themselves won't be backed up (more data than I can store),
>> just the essential data contained within, so I would think resiliency would
>> be important here.
>>
>> My questions are these.
>>
>> Does this setup make sense?
>
> Yes, it is perfectly reasonable.
>
>> Would I be better off forgoing resiliency for simplicity, putting all my
>> faith in the Equallogic to handle data resiliency?
>
> I don't have much direct experience with Equallogic, but I would expect that
> they do a reasonable job of protecting data, or they would be out of
> business.
>
> You can also use the copies parameter to set extra redundancy for the
> important files.
> ZFS will also tell you if corruption is found in a single file, so that you
> can recover just the file and not be forced to recover everything else. I
> think this fits into your backup strategy.

I thought of the copies parameter, but figured a raidz laid on top of the
storage pool would only waste 33% instead of 50%. And since this is on top of
a conceptually single RAID volume, the IOPS bottleneck won't come into play,
because any single drive's IOPS will be equal to the array's IOPS as a whole.

>> Will this setup perform? Anybody with experience in this type of setup?
>
> Many people are quite happy with RAID arrays and still take advantage of
> the features of ZFS: checksums, snapshots, clones, send/receive, VMware
> integration, etc. The decision of where to implement data protection (RAID)
> is not as important as the decision to protect your data.
>
> My advice: protect your data.

Always good advice. So I suppose this just confirms my analysis.

Thanks,

-Ross
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
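[Editor's note: the 33%-vs-50% capacity trade-off discussed in the thread can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below is not from the thread; it assumes single-parity raidz1 vdevs, the LUN sizes Ross proposed, and ignores ZFS metadata overhead.]

```python
# Capacity math for the layouts discussed in the thread.
# Assumes raidz1 (single parity): one disk's worth of each vdev goes
# to parity. Units are TB; metadata overhead is ignored for simplicity.

def raidz1_usable(n_disks, disk_tb):
    """Usable capacity of one raidz1 vdev of n_disks equal-size disks."""
    return (n_disks - 1) * disk_tb

# Option 1: a single raidz1 vdev of 3 x 1TB LUNs
opt1 = raidz1_usable(3, 1.0)

# Option 2: two raidz1 vdevs of 3 x 500GB LUNs each (6 LUNs total)
opt2 = 2 * raidz1_usable(3, 0.5)

# For comparison: copies=2 on the same 3TB of raw space halves capacity
copies2 = 3.0 / 2

print(f"raidz1, 3 x 1TB:       {opt1} TB usable, overhead {1 - opt1/3.0:.0%}")
print(f"2x raidz1, 3 x 500GB:  {opt2} TB usable, overhead {1 - opt2/3.0:.0%}")
print(f"copies=2 on 3TB raw:   {copies2} TB usable, overhead 50%")
```

Both raidz1 layouts yield the 2TB datastore Ross wants at 33% overhead, whereas copies=2 on the same raw space would leave only 1.5TB, which matches his reasoning for preferring raidz over the copies parameter.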