Yes, this is part of the refactoring principle behind the new VM state
synchronization framework. We serialize the operations happening to a VM,
so each individual operation flow has a clear context in which to perform
its task, without worrying about disruption from other flows. This creates
a loosely coupled way of doing orchestration.

We used to synchronize based on a DB lock table. The drawbacks of that
approach were that multiple flows might continuously poll the database,
and that we had very little control over the orchestration flows and no
way to manage system load accordingly.

The new approach lets us explore possibilities such as better visibility
into orchestration flows, job cancellation, load throttling, etc., and
these abilities can be achieved by improving the job queue facility
itself, without impact to individual orchestration flows.
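To make the idea concrete, here is a minimal, hypothetical sketch of per-VM
serialization (the class and method names below are illustrative only, not
the actual CloudStack job queue API): each VM gets its own single-threaded
queue, so work items submitted for the same VM can never interleave, while
work for different VMs proceeds independently.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch of per-VM work serialization, not CloudStack code.
public class VmWorkQueueSketch {
    // One single-threaded executor per VM id: tasks for the same VM
    // run strictly one after another, in submission order.
    private final Map<Long, ExecutorService> queues = new ConcurrentHashMap<>();

    public Future<?> submit(long vmId, Runnable work) {
        ExecutorService queue = queues.computeIfAbsent(vmId,
                id -> Executors.newSingleThreadExecutor());
        return queue.submit(work);
    }

    // Convenience: submit a task and block until it completes.
    public void runAndWait(long vmId, Runnable work) {
        try {
            submit(vmId, work).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() {
        queues.values().forEach(ExecutorService::shutdown);
    }

    public static void main(String[] args) {
        VmWorkQueueSketch dispatcher = new VmWorkQueueSketch();
        StringBuilder log = new StringBuilder();
        // Two operations on the same VM: guaranteed not to overlap.
        dispatcher.runAndWait(42L, () -> log.append("resize;"));
        dispatcher.runAndWait(42L, () -> log.append("attach;"));
        dispatcher.shutdown();
        System.out.println(log);
    }
}
```

Because the queue itself is the only synchronization point, features like
cancellation or throttling can be added inside the dispatcher without the
individual flows knowing about them.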

Kelven

On 7/14/14, 3:13 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:

>I see... so, in the case where, for example, this API call is being
>executed in multiple places at the same time, we are making sure we send
>the commands to the VM in a serial fashion.
>
>
>On Mon, Jul 14, 2014 at 4:09 PM, Kelven Yang <kelven.y...@citrix.com>
>wrote:
>
>> Mike,
>>
>> This is related to serializing activities to the VM. When a VM has
>> multiple disks, the volume-semantic APIs can create situations where
>> multiple volume operations happen at the same time on the same VM.
>>
>> Kelven
>>
>> On 7/14/14, 2:23 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
>> wrote:
>>
>> >Hi,
>> >
>> >I have a question about this logic (related to resizing a volume):
>> >
>> >    AsyncJobExecutionContext jobContext =
>> >            AsyncJobExecutionContext.getCurrentExecutionContext();
>> >
>> >    if (!VmJobEnabled.value()
>> >            || jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
>> >        // avoid re-entrance
>> >        VmWorkJobVO placeHolder = null;
>> >        if (VmJobEnabled.value()) {
>> >            placeHolder = createPlaceHolderWork(userVm.getId());
>> >        }
>> >        try {
>> >            return orchestrateResizeVolume(volume.getId(), currentSize, newSize,
>> >                    newDiskOffering != null ? cmd.getNewDiskOfferingId() : null,
>> >                    shrinkOk);
>> >        } finally {
>> >            if (VmJobEnabled.value()) {
>> >                _workJobDao.expunge(placeHolder.getId());
>> >            }
>> >        }
>> >    } else {
>> >        Outcome<Volume> outcome = resizeVolumeThroughJobQueue(userVm.getId(),
>> >                volume.getId(), currentSize, newSize,
>> >                newDiskOffering != null ? cmd.getNewDiskOfferingId() : null,
>> >                shrinkOk);
>> >
>> >
>> >Why would one resize the volume via the job queue versus the other
>>path?
>> >
>> >Thanks!
>> >--
>> >*Mike Tutkowski*
>> >*Senior CloudStack Developer, SolidFire Inc.*
>> >e: mike.tutkow...@solidfire.com
>> >o: 303.746.7302
>> >Advancing the way the world uses the cloud
>> ><http://solidfire.com/solution/overview/?video=play>* *
>>
>>
>
>
>-- 
>*Mike Tutkowski*
>*Senior CloudStack Developer, SolidFire Inc.*
>e: mike.tutkow...@solidfire.com
>o: 303.746.7302
>Advancing the way the world uses the cloud
><http://solidfire.com/solution/overview/?video=play>**
