I see what you're saying, Marcus.
That makes sense. If it's just marked as deleted, sync is the right way to
go.
I do know that in 4.2, with Edison's storage framework, my plug-in is invoked
upon deletion of a CloudStack volume to delete the volume on the SAN (so it
appears to be more than just marking the volume as deleted).
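To make the shape of that concrete, here is a rough sketch of what I mean by
the plug-in hook (the type names below are made up for illustration; this is
not the actual 4.2 driver interface):

// Illustrative only: a simplified shape of a storage plug-in hook that is
// called when a CloudStack volume is deleted, so the backing SAN volume can
// be cleaned up as well. These names are invented for the example.
interface StorageProviderDriver {
    // Invoked by the management server when a CloudStack volume is deleted.
    void deleteVolumeOnBackend(String volumeUuid);
}

class SanDriver implements StorageProviderDriver {
    @Override
    public void deleteVolumeOnBackend(String volumeUuid) {
        // A real plug-in would call the SAN's API here; that round trip can
        // take a while, which is part of why the triggering API call
        // arguably needs to be async.
        System.out.println("Deleting SAN volume backing " + volumeUuid);
    }

    public static void main(String[] args) {
        new SanDriver().deleteVolumeOnBackend("vol-uuid-example");
    }
}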
Well, if it doesn't actually delete the volume, but just marks it 'destroy'
so that the cleanup thread takes care of it, then the API call can
stay sync, since it's just changing a database entry. If it does
actually do the work right then, then we will need to make it async. I
haven't even looked at 4.2.
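Put differently, the two options look roughly like this (a generic,
self-contained sketch of the distinction, not CloudStack's actual
deleteVolume code):

// Generic sketch of the sync-vs-async distinction, not CloudStack's code.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class DeleteVolumeSketch {
    enum State { READY, DESTROY, EXPUNGED }

    static volatile State volumeState = State.READY;
    static final ExecutorService jobPool = Executors.newSingleThreadExecutor();

    // Option 1: the call only flips a database/state flag, which is cheap
    // and bounded, so it can safely stay synchronous; a cleanup worker does
    // the real deletion later.
    static boolean deleteVolumeSync(long volumeId) {
        volumeState = State.DESTROY;
        return true;
    }

    // Option 2: the call itself does the backend work. That can run for a
    // long time, so it should be submitted as an async job and the caller
    // should poll for completion instead of blocking the API request.
    static void deleteVolumeAsync(long volumeId) {
        jobPool.submit(() -> { volumeState = State.EXPUNGED; });
    }

    public static void main(String[] args) {
        deleteVolumeSync(1L);
        deleteVolumeAsync(1L);
        jobPool.shutdown();
    }
}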
If it's a long-running op to delete a volume (which it can be), I would say
it should be async.
On Thu, Jun 6, 2013 at 9:25 AM, Marcus Sorensen wrote:
So does it just need to be async, or is deleteVolume doing too much in
both moving the volume to destroy state and expunging? If I transition
a volume to 'Destroy' state, the storage cleanup thread comes along
and deletes it for me later, similar to how the VMs are expunged. This
seems preferable.
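That pattern, sketched very generically (the real storage cleanup thread
works against the database, of course; this just shows the idea of marking
now and expunging later):

// A generic illustration of the "mark Destroy now, expunge later" pattern.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class CleanupSketch {
    enum State { READY, DESTROY }

    static class Volume {
        final String name;
        volatile State state = State.READY;
        Volume(String name) { this.name = name; }
    }

    static final List<Volume> volumes = new ArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        Volume v = new Volume("vol-1");
        volumes.add(v);

        // The API call just marks the volume and returns immediately.
        v.state = State.DESTROY;

        // A periodic cleanup task expunges anything marked Destroy.
        ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor();
        cleaner.scheduleAtFixedRate(() -> {
            synchronized (volumes) {
                Iterator<Volume> it = volumes.iterator();
                while (it.hasNext()) {
                    Volume vol = it.next();
                    if (vol.state == State.DESTROY) {
                        System.out.println("Expunging " + vol.name);
                        it.remove();   // stands in for the slow backend delete
                    }
                }
            }
        }, 1, 1, TimeUnit.SECONDS);

        Thread.sleep(2000);
        cleaner.shutdown();
    }
}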
Hey Marcus,
To me, it seems like it should be async, as well.
As far as I know (at least pre-4.2), unless you are deleting a volume
that has never been attached to a VM, the CS MS would have to have the
hypervisor perform some operation upon the deletion of a CloudStack
volume...and that could take a while.
Oh, I should add that I traced it through the system, and it actually
sends a DeleteVolumeCommand to the agent. That has to finish before
the sync call completes.
This is on 4.1; if it changes significantly with the storage refactor,
that's fine, but I'd like to know if there was a reason for it.
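To make the blocking concrete, the shape of it is roughly this (a generic
stand-in for the command/answer round trip, not the actual agent code):

// Generic sketch of why the sync call blocks: the handler sends a command
// to the agent and waits for the answer before returning to the API caller.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

class AgentRoundTripSketch {
    static class Answer {
        final boolean ok;
        Answer(boolean ok) { this.ok = ok; }
    }

    // Stand-in for sending DeleteVolumeCommand to the agent: the backend
    // delete may take a long time on some storage systems.
    static CompletableFuture<Answer> sendDeleteVolumeCommand(String volumeUuid) {
        return CompletableFuture.supplyAsync(() -> {
            try { TimeUnit.SECONDS.sleep(2); } catch (InterruptedException ignored) {}
            return new Answer(true);
        });
    }

    // Because the API call is sync, it has to wait on the answer here,
    // holding the API request open for however long the storage takes.
    static boolean deleteVolume(String volumeUuid) {
        Answer answer = sendDeleteVolumeCommand(volumeUuid).join();
        return answer.ok;
    }

    public static void main(String[] args) {
        System.out.println("deleted: " + deleteVolume("vol-1"));
    }
}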
Just wondering why deleteVolume is a sync call. It doesn't seem to
adhere to the 'mark it removed, let a worker expunge it later after X
seconds' paradigm. I only noticed this when a storage system was
taking a bit to do the work and thus blocking the API call.