Emil,

> If false, try setting it to true and see if that works.
I may have mistakenly said it the other way round: if set to true, VM operations will execute in sequence, meaning multiple VMs cannot be launched at the same time on one host. The default for this parameter is false. I don't know for sure whether this parameter affects volume operations, so I wanted you to try it out.

Regards,
Somesh

-----Original Message-----
From: Emil [mailto:[email protected]]
Sent: Thursday, October 15, 2015 12:07 AM
To: [email protected]
Subject: Re: Volume migration not working in asynchronous way | Urgent help !

Update: the parameter "vmware.create.full.clone" was set to "true"; it didn't help either. But I noticed something else: now that "full.clone" is enabled, I couldn't deploy two instances simultaneously! It seems like there is some kind of limitation or restriction related to long-running ("heavy") tasks or waiting, but I don't know what.

See also the message from Timothy Lothering on 9 Oct, "Re: Running Multiple Jobs in parallel".

Is there any kind of limitation/restriction in VMware with tasks that take time?

2015-10-14 0:04 GMT+03:00 ilya <[email protected]>:
> Somesh,
>
> Not certain how that is going to help the problem with concurrent
> tasks. Please explain.
>
> Thanks,
> ilya
>
> On 10/13/15 1:58 PM, Somesh Naidu wrote:
> > Can you check what value is set for the global configuration parameter
> > "vmware.create.full.clone"? If false, try setting it to true and see if
> > that works.
> >
> > Regards,
> > Somesh
> >
> > -----Original Message-----
> > From: Emil [mailto:[email protected]]
> > Sent: Tuesday, October 13, 2015 3:08 PM
> > To: [email protected]
> > Subject: Volume migration not working in asynchronous way | Urgent help !
> >
> > Hello,
> >
> > We have a big problem in our environment that is stopping us from working.
> > We are trying to use the command migrateVolume, which is documented as
> > asynchronous in the CloudStack API.
> >
> > But every time we run the command (through the API or the web UI), only
> > one migration runs at a time (in very rare situations, two migrations
> > run simultaneously).
> >
> > I checked every configuration setting, especially these:
> >
> > - execute.in.sequence.hypervisor.commands
> > - execute.in.sequence.network.element.commands
> >
> > They were at their default of false; I also tried them with true, which
> > didn't work either.
> >
> > Also, if we run Storage vMotion directly through the vSphere client, it
> > works fine (3, 4, 5 migrations simultaneously).
> >
> > Some details about our environment:
> >
> > vSphere 5.5
> > CS 4.5.2
> >
> > Some conclusions from the logs:
> >
> > When performing the first volume migration, the task gets a sequence id,
> > which is registered somewhere; then the task executes and everything is
> > fine. We get a unique job id for the task (to trace its status).
> >
> > When the second volume migration is executed, there is a very suspicious
> > line in the log that says:
> > "Waiting for Seq 244522346426 Scheduling: .."
> > That sequence id is the id of the first migration! Why is it waiting for
> > the first task?!
> >
> > I hope someone will see this and can help. Very urgent case.
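A minimal sketch of how the serialization described in the thread can be observed from the API side: it submits two migrateVolume calls back to back and then polls queryAsyncJobResult for both job ids. The management server URL, API key pair, volume ids, and destination storage pool id below are placeholders, not values from the thread.

#!/usr/bin/env python3
"""Submit two migrateVolume calls back to back and watch their async jobs.

Sketch only: the endpoint, keys, volume ids and storage pool id are
placeholders to be replaced with values from your own environment.
"""
import base64
import hashlib
import hmac
import json
import time
import urllib.parse
import urllib.request

API_URL = "http://management-server:8080/client/api"   # placeholder
API_KEY = "YOUR_API_KEY"                                # placeholder
SECRET_KEY = "YOUR_SECRET_KEY"                          # placeholder


def cs_request(command, **params):
    """Sign a CloudStack API call (HMAC-SHA1 over the sorted, lower-cased
    query string) and return the parsed JSON response."""
    params.update({"command": command, "apiKey": API_KEY, "response": "json"})
    query = "&".join(
        "{}={}".format(k, urllib.parse.quote(str(params[k]), safe=""))
        for k in sorted(params, key=str.lower)
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    with urllib.request.urlopen("{}?{}&signature={}".format(API_URL, query, signature)) as resp:
        return json.loads(resp.read().decode())


# Placeholder volume ids and destination primary storage pool id.
MIGRATIONS = [
    ("volume-uuid-1", "dest-pool-uuid"),
    ("volume-uuid-2", "dest-pool-uuid"),
]

# Each migrateVolume call returns an async job id immediately.
jobs = []
for volume_id, storage_id in MIGRATIONS:
    reply = cs_request("migrateVolume", volumeid=volume_id, storageid=storage_id)
    jobs.append(reply["migratevolumeresponse"]["jobid"])
    print("submitted migrateVolume, jobid =", jobs[-1])

# Poll queryAsyncJobResult: jobstatus 0 = pending, 1 = succeeded, 2 = failed.
# If the commands are being serialized, the second job stays pending until
# the first finishes, which matches the "Waiting for Seq ..." log lines.
pending = set(jobs)
while pending:
    for job_id in sorted(pending):
        reply = cs_request("queryAsyncJobResult", jobid=job_id)
        status = reply["queryasyncjobresultresponse"]["jobstatus"]
        print(job_id, "jobstatus =", status)
        if status != 0:
            pending.discard(job_id)
    time.sleep(10)

If the second job only moves out of jobstatus 0 after the first one completes, the timestamps printed here can be correlated with the "Waiting for Seq ..." entries in the management server log to confirm that the commands are being queued rather than run in parallel.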
