>>On 24/03/17, 4:18 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

>>OK, yeah, it does.
    
>>The source host has access to the source datastore and the destination host 
>>has access to the destination datastore.
>>The source host does not have access to the destination datastore nor does 
>>the destination host have access to the source datastore.
Still, this should be supported by CloudStack.
  
>>I've been focusing on doing this with a source and a destination datastore that are both either NFS or iSCSI (but I think you should be able to go NFS to iSCSI or vice versa, as well).

Mike, I will try this scenario with 4.10 and will share the update.

Regards,
Sateesh
    
    > On Mar 23, 2017, at 4:09 PM, Sergey Levitskiy <sergey.levits...@autodesk.com> wrote:
    > 
    > It shouldn’t, as long as the destination host has access to the destination datastore.
    > 
    > On 3/23/17, 1:34 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    > 
    >    So, in my case, both the source and target datastores are cluster-scoped primary storage in CloudStack (not zone-wide). Would that matter? For XenServer, that cluster-scoped configuration (but using storage repositories, of course) works.
    > 
    >    On 3/23/17, 2:31 PM, "Sergey Levitskiy" <sergey.levits...@autodesk.com> wrote:
    > 
    >        It looks like a bug. For VMware, moving a root volume with migrateVolume with livemigrate=true for zone-wide primary storage works just fine for us. In the background, it uses Storage vMotion. From another angle, migrateVirtualMachine also works perfectly fine. I know for a fact that VMware supports moving from host to host and storage to storage at the same time, so it seems to be a bug in the migrateVirtualMachineWithVolume implementation. A vSphere Standard license is enough for both regular and Storage vMotion.
    > 
    >        On 3/23/17, 1:21 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    > 
    >            Thanks, Simon
    > 
    >            I wonder if we support that in CloudStack.
    > 
    >            On 3/23/17, 2:18 PM, "Simon Weller" <swel...@ena.com> wrote:
    > 
    >                Mike,
    > 
    > 
    >                It is possible to do this on vCenter, but I believe it requires a special license.
    > 
    > 
    >                Here's the info on it:
    > 
    >                https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-A16BA123-403C-4D13-A581-DC4062E11165.html
    > 
    >                https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-561681D9-6511-44DF-B169-F20E6CA94944.html
    > 
    > 
    >                - Si
    >                ________________________________
    >                From: Tutkowski, Mike <mike.tutkow...@netapp.com>
    >                Sent: Thursday, March 23, 2017 3:09 PM
    >                To: dev@cloudstack.apache.org
    >                Subject: Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)
    > 
    >                This is interesting:
    > 
    >                If I shut the VM down and then migrate its root disk to storage in the other cluster, then start up the VM, the VM gets started up correctly (running on the new host using the other datastore).
    > 
    >                Perhaps you simply cannot live migrate a VM and its storage from one cluster to another with VMware? This works for XenServer and I probably just assumed it would work in VMware, but maybe it doesn’t?
    > 
    >                The reason I’m asking now is because I’m investigating the support of cross-cluster migration of a VM that uses managed storage. This works for XenServer as of 4.9 and I was looking to implement similar functionality for VMware.
    > 
    >                On 3/23/17, 2:01 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    > 
    >                    Another piece of info:
    > 
    >                    I tried this same VM + storage migration using NFS for both datastores instead of iSCSI for both datastores and it failed with the same error message:
    > 
    >                    Required property datastore is missing from data object of type VirtualMachineRelocateSpecDiskLocator
    > 
    >                    while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
    >                    at line 1, column 326
    > 
    >                    while parsing property "disk" of static type ArrayOfVirtualMachineRelocateSpecDiskLocator
    > 
    >                    while parsing serialized DataObject of type vim.vm.RelocateSpec
    >                    at line 1, column 187
    > 
    >                    while parsing call information for method RelocateVM_Task
    >                    at line 1, column 110
    > 
    >                    while parsing SOAP body
    >                    at line 1, column 102
    > 
    >                    while parsing SOAP envelope
    >                    at line 1, column 38
    > 
    >                    while parsing HTTP request for method relocate
    >                    on object of type vim.VirtualMachine
    >                    at line 1, column 0
    > 
    >                    On 3/23/17, 12:33 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    > 
    >                        Slight typo:
    > 
    >                        Both ESXi hosts are version 5.5 and both clusters are within the same VMware datastore.
    > 
    >                        Should be (datastore changed to datacenter):
    > 
    >                        Both ESXi hosts are version 5.5 and both clusters are within the same VMware datacenter.
    > 
    >                        On 3/23/17, 12:31 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    > 
    >                            A little update here:
    > 
    >                            In the debugger, I made sure we asked for the correct source datastore (I edited the UUID we were using for the source datastore).
    > 
    >                            When VirtualMachineMO.changeDatastore is later invoked with the proper source and target datastores, I now see this error message:
    > 
    >                            Virtual disk 'Hard disk 1' is not accessible on the host: Unable to access file [SIOC-1]
    > 
    >                            Both ESXi hosts are version 5.5 and both clusters are within the same VMware datastore.
    > 
    >                            The source datastore and the target datastore are both using iSCSI.
    > 
    >                            On 3/23/17, 11:53 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    > 
    >                                Also, in case it matters, both datastores are iSCSI based.
    > 
    >> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike <mike.tutkow...@netapp.com> wrote:
    >> 
    >> My version is 5.5 in both clusters.
    >> 
    >>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi <sateesh.chodapune...@accelerite.com> wrote:
    >>> 
    >>> 
    >>>>> On 23/03/17, 7:21 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    >>> 
    >>>>> However, perhaps someone can clear this up for me:
    >>>>> With XenServer, we are able to migrate a VM and its volumes from a host using a shared SR in one cluster to a host using a shared SR in another cluster even though the source host can’t see the target SR.
    >>>>> Is the same thing possible with VMware or does the source host have to be able to see the target datastore? If so, does that mean the target datastore has to be zone-wide primary storage when using VMware to make this work?
    >>> Yes, Mike. But that’s the case with versions less than 5.1 only. In vSphere 5.1 and later, vMotion does not require environments with shared storage. This is useful for performing cross-cluster migrations, when the target cluster machines might not have access to the source cluster's storage.
    >>> BTW, what is the version of ESXi hosts in this setup?
    >>> 
    >>> Regards,
    >>> Sateesh,
    >>> CloudStack development,
    >>> Accelerite, CA-95054
    >>> 
    >>>  On 3/23/17, 7:47 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    >>> 
    >>>      This looks a little suspicious to me (in VmwareResource, before we call VirtualMachineMO.changeDatastore):
    >>> 
    >>>                      morDsAtTarget = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(tgtHyperHost, filerTo.getUuid());
    >>>                      morDsAtSource = HypervisorHostHelper.findDatastoreWithBackwardsCompatibility(srcHyperHost, filerTo.getUuid());
    >>>                      if (morDsAtTarget == null) {
    >>>                          String msg = "Unable to find the target datastore: " + filerTo.getUuid() + " on target host: " + tgtHyperHost.getHyperHostName() + " to execute MigrateWithStorageCommand";
    >>>                          s_logger.error(msg);
    >>>                          throw new Exception(msg);
    >>>                      }
    >>> 
    >>>      We use filerTo.getUuid() when trying to get a pointer to both the target and source datastores. Since filerTo.getUuid() has the UUID for the target datastore, that works for morDsAtTarget, but morDsAtSource ends up being null.
    >>> 
    >>>      For some reason, we only check whether morDsAtTarget is null (I’m not sure why we don’t check whether morDsAtSource is null, too).
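The lookup described in that message could be corrected by resolving the source datastore with the source datastore's own UUID and null-checking both results. Below is a minimal, self-contained sketch of that logic only; it uses plain maps and hypothetical names (srcDsUuid, tgtDsUuid, findDatastore) in place of the real CloudStack/vim25 types, so it is an illustration of the suspected fix, not the actual VmwareResource code.

```java
import java.util.Map;

public class DatastoreLookupSketch {
    // Stand-in for HypervisorHostHelper.findDatastoreWithBackwardsCompatibility:
    // returns the datastore reference visible to a host, or null if not found.
    static String findDatastore(Map<String, String> hostDatastores, String dsUuid) {
        return hostDatastores.get(dsUuid);
    }

    // The suspected fix: resolve the source datastore with the *source* UUID
    // (not filerTo.getUuid() twice) and fail fast if either lookup is null.
    static void validateMigration(Map<String, String> srcHostDatastores,
                                  Map<String, String> tgtHostDatastores,
                                  String srcDsUuid, String tgtDsUuid) {
        String morDsAtTarget = findDatastore(tgtHostDatastores, tgtDsUuid);
        String morDsAtSource = findDatastore(srcHostDatastores, srcDsUuid);
        if (morDsAtTarget == null) {
            throw new IllegalStateException("Unable to find the target datastore: " + tgtDsUuid);
        }
        if (morDsAtSource == null) {
            // the missing check the thread points out
            throw new IllegalStateException("Unable to find the source datastore: " + srcDsUuid);
        }
    }

    public static void main(String[] args) {
        Map<String, String> srcHostDatastores = Map.of("src-ds-uuid", "datastore-12");
        Map<String, String> tgtHostDatastores = Map.of("tgt-ds-uuid", "datastore-66");
        validateMigration(srcHostDatastores, tgtHostDatastores, "src-ds-uuid", "tgt-ds-uuid");
        System.out.println("both datastores resolved");
    }
}
```

With the original code's single-UUID lookup, morDsAtSource would be the result of looking up "tgt-ds-uuid" in the source host's map, i.e. null, which is exactly the symptom reported above.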
    >>> 
    >>>      On 3/23/17, 7:31 AM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    >>> 
    >>>          Hi,
    >>> 
    >>>          The CloudStack API that the GUI is invoking is migrateVirtualMachineWithVolume (which is expected since I’m asking to migrate a VM from a host in one cluster to a host in another cluster).
    >>> 
    >>>          A MigrateWithStorageCommand is sent to VmwareResource, which eventually calls VirtualMachineMO.changeDatastore.
    >>> 
    >>>              public boolean changeDatastore(VirtualMachineRelocateSpec relocateSpec) throws Exception {
    >>>                  ManagedObjectReference morTask = _context.getVimClient().getService().relocateVMTask(_mor, relocateSpec, VirtualMachineMovePriority.DEFAULT_PRIORITY);
    >>>                  boolean result = _context.getVimClient().waitForTask(morTask);
    >>>                  if (result) {
    >>>                      _context.waitForTaskProgressDone(morTask);
    >>>                      return true;
    >>>                  } else {
    >>>                      s_logger.error("VMware RelocateVM_Task to change datastore failed due to " + TaskMO.getTaskFailureInfo(_context, morTask));
    >>>                  }
    >>>                  return false;
    >>>              }
    >>> 
    >>>          The parameter, VirtualMachineRelocateSpec, looks like this:
    >>> 
    >>>          http://imgur.com/a/vtKcq (datastore-66 is the target datastore)
    >>> 
    >>>          The following error message is returned:
    >>> 
    >>>          Required property datastore is missing from data object of type VirtualMachineRelocateSpecDiskLocator
    >>> 
    >>>          while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
    >>>          at line 1, column 327
    >>> 
    >>>          while parsing property "disk" of static type ArrayOfVirtualMachineRelocateSpecDiskLocator
    >>> 
    >>>          while parsing serialized DataObject of type vim.vm.RelocateSpec
    >>>          at line 1, column 187
    >>> 
    >>>          while parsing call information for method RelocateVM_Task
    >>>          at line 1, column 110
    >>> 
    >>>          while parsing SOAP body
    >>>          at line 1, column 102
    >>> 
    >>>          while parsing SOAP envelope
    >>>          at line 1, column 38
    >>> 
    >>>          while parsing HTTP request for method relocate
    >>>          on object of type vim.VirtualMachine
    >>>          at line 1, column 0
    >>> 
    >>>          Thoughts?
    >>> 
    >>>          Thanks!
    >>>          Mike
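As a side note on the fault text in that message: the server is rejecting a RelocateSpec whose "disk" array contains a VirtualMachineRelocateSpecDiskLocator with no datastore set. The invariant a well-formed disk-locator list has to satisfy can be sketched as follows; the types and method names here are simplified stand-ins, not the real vim25 SDK classes.

```java
import java.util.ArrayList;
import java.util.List;

public class RelocateSpecSketch {
    // Stand-in for vim.vm.RelocateSpec.DiskLocator: per the SOAP fault,
    // the datastore property is required on every disk locator.
    static final class DiskLocator {
        final int diskId;
        final String datastore;

        DiskLocator(int diskId, String datastore) {
            if (datastore == null) {
                // A null here is what produces the "Required property datastore
                // is missing" fault seen in the thread.
                throw new IllegalArgumentException("datastore is required on each disk locator");
            }
            this.diskId = diskId;
            this.datastore = datastore;
        }
    }

    // Build one locator per disk, each explicitly pointing at a datastore.
    static List<DiskLocator> buildDiskLocators(int[] diskIds, String targetDatastore) {
        List<DiskLocator> locators = new ArrayList<>();
        for (int id : diskIds) {
            locators.add(new DiskLocator(id, targetDatastore));
        }
        return locators;
    }

    public static void main(String[] args) {
        List<DiskLocator> locators = buildDiskLocators(new int[]{2000}, "datastore-66");
        System.out.println(locators.size() + " disk locator(s) built");
    }
}
```

This matches the earlier observation in the thread that the source datastore lookup came back null: a null datastore reference flowing into a disk locator would serialize exactly this kind of incomplete DiskLocator.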
    >>> 
    >>>          On 3/22/17, 11:50 PM, "Sergey Levitskiy" <sergey.levits...@autodesk.com> wrote:
    >>> 
    >>> 
    >>>              Can you trace which API call is being used and what parameters were specified? migrateVirtualMachineWithVolume vs migrateVirtualMachine
    >>> 
    >>> DISCLAIMER
    >>> ==========
    >>> This e-mail may contain privileged and confidential information which 
is the property of Accelerite, a Persistent Systems business. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Accelerite, a Persistent Systems business does not accept any liability for 
virus infected mails.
    



