Adam> The challenge right now isn't so much on the NetApp side but on the 
VMWare side.

Adam> Typical sequence of events:
Adam> 1) get list of VMs on datastore X
Adam> 2) quiesce all VMs on datastore X
Adam> 3) snapshot datastore X via NetApp mechanism
Adam> 4) un-quiesce all VMs on datastore X

Adam> What happens is that step 2 takes about 30 seconds per VM. 
Adam> While the VMs are quiesced, they are effectively using VMWare's
Adam> snapshot mechanism to store changed blocks until the NetApp
Adam> snapshot is done.  Step 3 takes a couple of seconds -- not an
Adam> issue.  Step 4, then, has to roll through each VM and remove the
Adam> VMWare snapshot.  The problem here is that the longer they are
Adam> quiesced, the longer they take to come back.  By the time we're
Adam> near the last few VMs, they are taking a long time to roll
Adam> forward and commit those VM snapshot changes.

Adam> We have the option of not quiescing (man, that word gets harder
Adam> to type every time) the VMs and just taking a netapp snapshot,
Adam> which may or may not be fully restorable.  I'm curious if anyone
Adam> else is doing that.
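
Just to make sure I'm following the cycle correctly, in pyVmomi terms it
would look roughly like the sketch below.  All the names (vCenter host,
credentials, datastore, vserver/volume) are placeholders, and the ssh call
in step 3 is only one way to cut the NetApp snapshot -- not necessarily
what your tooling actually does.

# Hypothetical sketch of the quiesce -> NetApp snapshot -> cleanup cycle
# described above, using pyVmomi.  Names and credentials are placeholders
# and error handling is omitted.
import ssl
import subprocess

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="backup", pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

# 1) get the list of (powered-on) VMs on datastore X
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == "datastore_x")
vms = [vm for vm in ds.vm if vm.runtime.powerState == "poweredOn"]

# 2) quiesce: take a quiesced VMware-level snapshot of each VM, one at a
#    time -- this serial loop is where the ~30 seconds per VM adds up
snaps = {}
for vm in vms:
    WaitForTask(vm.CreateSnapshot_Task(
        name="netapp-backup", description="pre-NetApp-snapshot quiesce",
        memory=False, quiesce=True))
    snaps[vm] = vm.snapshot.currentSnapshot

# 3) snapshot the backing volume on the NetApp -- a couple of seconds;
#    the cDOT CLI over ssh shown here is just one way to trigger it
subprocess.check_call([
    "ssh", "admin@filer",
    "volume snapshot create -vserver vs1 -volume vol_x -snapshot nightly"])

# 4) un-quiesce: remove each VMware snapshot, which rolls the changed
#    blocks accumulated since step 2 forward into the base disks
for vm, snap in snaps.items():
    WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))

Disconnect(si)

And as you describe, the later a VM comes in that loop, the longer it has
been sitting on a VMware snapshot, so the longer step 4 takes to commit it.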

We're going to have the same type of problem down the line too.  I've used
CommVault (on FC SAN volumes) and a little bit of Veeam, and we're moving
to NetBackup with SnapManager on NFS datastores.

To me, the best thing you could do is create more, smaller datastores so
that you have fewer VMs per datastore.  It's not ideal in a lot of ways,
but if you have that many VMs per datastore, it may be the best option.

I also think our VMware guys are gambling that, in a lot of cases, they can
restore most VMs even from just a backing-store snapshot on the NetApp
(cDOT 8.2.x).  I do the NetApp side of the house, not as much the VMware
side day to day.

Is there any way to parallelize the ESX side, so that it finds the VMs
and then does three or four of them at a time?  Especially if they are on
separate ESX hosts, that should be doable.
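
For what it's worth, here is a rough sketch of that fan-out: the same
per-VM quiesce and snapshot-removal calls as above, but run three or four
at a time through a thread pool.  The pool size of 4 is an arbitrary guess,
and "vms" would be the same list of powered-on VMs on datastore X from the
earlier sketch.

# Hypothetical parallel version of steps 2 and 4: quiesce, and later
# commit, the VMware snapshots a few VMs at a time instead of one by one.
from concurrent.futures import ThreadPoolExecutor

from pyVim.task import WaitForTask

vms = []  # fill in as in the earlier sketch: powered-on VMs on datastore X

def quiesce(vm):
    # step 2 for one VM: take a quiesced VMware snapshot
    WaitForTask(vm.CreateSnapshot_Task(
        name="netapp-backup", description="pre-NetApp-snapshot quiesce",
        memory=False, quiesce=True))
    return vm.snapshot.currentSnapshot

def commit(snap):
    # step 4 for one VM: drop the VMware snapshot, rolling its
    # accumulated changes forward into the base disks
    WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))

with ThreadPoolExecutor(max_workers=4) as pool:
    snaps = list(pool.map(quiesce, vms))

# ... step 3: take the NetApp snapshot of the backing volume here ...

with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(commit, snaps))

If the batches are picked so that concurrent VMs sit on separate ESX hosts,
the snapshot-commit I/O gets spread around instead of piling onto one host.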

But it's an interesting problem and I don't have a solution either.

John