The storage gets marked with the 'Destroy' state, then moves to
'Expunging' when the storage cleanup interval fires. I've actually
thought about leveraging that for data disks: the current delete-data-disk
operation cleans up the disk immediately, when we could instead add an
API call that just moves the data disk to the Destroy state. That would
also leave room for an 'undo' operation, where the state could be moved
back to Ready as long as the cleanup hasn't happened yet.
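
To sketch what I mean (purely hypothetical code, not anything that exists
in CloudStack today; the class, method, and state names are made up for
illustration), the flow for data disks could look roughly like this:

    // Hypothetical sketch of a "soft delete" for data disks. None of these
    // names are real CloudStack classes; they only illustrate the state flow.
    enum VolumeState { READY, DESTROY, EXPUNGING, EXPUNGED }

    class DataDisk {
        VolumeState state = VolumeState.READY;

        // Proposed API: mark the disk for deletion instead of removing it right away.
        void destroy() {
            if (state != VolumeState.READY) {
                throw new IllegalStateException("Only Ready disks can be destroyed");
            }
            state = VolumeState.DESTROY;
        }

        // Proposed 'undo': only possible while the cleanup task hasn't run yet.
        void recover() {
            if (state != VolumeState.DESTROY) {
                throw new IllegalStateException("Disk is already being expunged");
            }
            state = VolumeState.READY;
        }

        // What the existing background cleanup task (driven by
        // storage.cleanup.interval) would do with a disk left in Destroy.
        void expunge() {
            if (state == VolumeState.DESTROY) {
                state = VolumeState.EXPUNGING;
                // ...delete the volume on the primary storage backend...
                state = VolumeState.EXPUNGED;
            }
        }
    }

The existing storage.cleanup.interval task would then expunge anything
still sitting in Destroy, the same way it already handles root volumes.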

On Wed, Mar 19, 2014 at 4:43 PM, Nitin Mehta <nitin.me...@citrix.com> wrote:
> Please feel free to open a documentation bug on JIRA if the info doesn't
> exist.
>
> On 19/03/14 3:16 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:
>
>>Thanks for that background-cleanup info. I was not aware of that.
>>
>>I'll probably take a look into it and see how that works.
>>
>>
>>On Wed, Mar 19, 2014 at 4:13 PM, Alena Prokharchyk <
>>alena.prokharc...@citrix.com> wrote:
>>
>>> CS destroys the Root volume in the CS DB; then it's up to the storage
>>> pool cleanup task to clean it up on the backend. This is a background
>>> task running every storage.cleanup.interval seconds.
>>>
>>> For how long do you see the volume being present on the SR?
>>>
>>> On 3/19/14, 3:03 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
>>> wrote:
>>>
>>> >OK, sounds good; however, if this is desired behavior, does anyone know
>>> >why we abandon the old root disk in the XenServer SR? It seems that
>>> >CloudStack "forgets" about it and it just stays in the SR taking up
>>> >space.
>>> >
>>> >Do people think it should be deleted?
>>> >
>>> >
>>> >On Wed, Mar 19, 2014 at 3:49 PM, Nitin Mehta <nitin.me...@citrix.com>
>>> >wrote:
>>> >
>>> >> I think that's what it is supposed to do. It discards the old root
>>> >> disk and creates a fresh root disk for the VM; if the optional
>>> >> template id field is passed in, the new root disk is created from
>>> >> that template.
>>> >> The API name is restoreVirtualMachine. Please check whether the UI
>>> >> is internally invoking this API.
>>> >>
>>> >> Thanks,
>>> >> -Nitin
>>> >>
>>> >> On 19/03/14 1:55 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com>
>>> >> wrote:
>>> >>
>>> >> >Hi,
>>> >> >
>>> >> >I noticed today while running through some test cases for 4.4 that
>>> >> >resetting a VM does not work as expected.
>>> >> >
>>> >> >Instead of the typical stop and re-start behavior where the VM is
>>> >> >booted back up using the same root disk, the VM gets a new root disk
>>> >> >when it is booted back up.
>>> >> >
>>> >> >Can anyone confirm this finding for me with his or her setup?
>>> >> >
>>> >> >Thanks!
>>> >> >
>>> >>
>>> >>
>>> >
>>> >
>>>
>>>
>>
>>
>
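
One more note for anyone following along: as Nitin mentions above, the
reset is supposed to go through the restoreVirtualMachine API, which takes
a required virtualmachineid and an optional templateid. A rough cloudmonkey
example (the UUIDs below are placeholders) would be something like:

    restore virtualmachine virtualmachineid=<vm-uuid> templateid=<template-uuid>

If templateid is omitted, the VM should come back on a fresh root disk
created from its original template.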
