Ah, right - my mistake. I should have thought of that.

On Wed, Mar 19, 2014 at 11:29 PM, Harikrishna Patnala <
harikrishna.patn...@citrix.com> wrote:

> There are two different APIs: RestoreVMCmd and RecoverVMCmd.
>
> RestoreVMCmd is what we are talking about here; it restores the VM to a
> fresh new root disk.
> RecoverVMCmd is used to recover the VM after we try to destroy it (within
> the window after Destroy and before Expunge).
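>
> Both are invoked through the normal CloudStack HTTP API. A minimal Python
> sketch (untested; the endpoint, keys, and UUIDs are placeholders, and the
> signature is the usual sorted/lowercased HMAC-SHA1 of the query string):
>
>     import base64, hashlib, hmac, urllib.parse
>     import requests
>
>     API_URL = "http://mgmt-server:8080/client/api"  # placeholder endpoint
>     API_KEY, SECRET_KEY = "<api-key>", "<secret-key>"
>
>     def call(command, **params):
>         params.update({"command": command, "apikey": API_KEY, "response": "json"})
>         # Sign the sorted, URL-encoded, lowercased query string with HMAC-SHA1
>         query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
>                          for k, v in sorted(params.items()))
>         digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
>         params["signature"] = base64.b64encode(digest).decode()
>         return requests.get(API_URL, params=params).json()
>
>     # restoreVirtualMachine (RestoreVMCmd): discard the current root disk, attach a fresh one
>     call("restoreVirtualMachine", virtualmachineid="<vm-uuid>")
>
>     # recoverVirtualMachine (RecoverVMCmd): bring back a VM that is Destroyed but not yet Expunged
>     call("recoverVirtualMachine", id="<vm-uuid>")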
>
> Thanks
> Harikrishna
>
>
> On 20-Mar-2014, at 10:49 am, Mike Tutkowski <mike.tutkow...@solidfire.com>
> wrote:
>
> > I see the actual API command is RecoverVMCmd.
> >
> > I wonder why the GUI says Reset instead of Recover.
> >
> >
> > On Wed, Mar 19, 2014 at 10:59 PM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> Please correct me if I'm wrong, but there does not appear to be a way to
> >> "save" the old root disk once it has gone into the Destroy state in this
> >> situation, is there?
> >>
> >> In other words, the new root disk is created, the old is put into the
> >> Destroy state, and the old will get deleted at the next clean-up
> >> cycle...no chance to restore that volume (even for use as a data disk).
> >>
> >>
> >> On Wed, Mar 19, 2014 at 10:33 PM, Mike Tutkowski <
> >> mike.tutkow...@solidfire.com> wrote:
> >>
> >>> OK, I went back and re-ran my test.
> >>>
> >>> I see how this works now.
> >>>
> >>> I was aware that volumes in the Destroy state get expunged by a
> >>> background thread at some point; however, what tricked me here is that
> >>> my "old" root disk no longer showed up in the Storage tab of the GUI.
> >>>
> >>> When I looked in the volumes table, though, I saw that the disk was in
> >>> the Destroy state.
> >>>
> >>> I sped up the frequency of the clean-up background thread to run once
> >>> every minute and saw the old root disk get put into the Expunged state
> >>> (as you'd expect, it was no longer present in the SR).
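> >>>
> >>> A query along these lines shows the state (just a sketch; the host,
> >>> credentials, and volume name are placeholders):
> >>>
> >>>     import pymysql  # assumes direct access to the management server's "cloud" DB
> >>>
> >>>     conn = pymysql.connect(host="mgmt-server", user="cloud",
> >>>                            password="<password>", database="cloud")
> >>>     with conn.cursor() as cur:
> >>>         # the Storage tab hides the volume, but its state is still visible here
> >>>         cur.execute("SELECT id, name, state, removed FROM volumes WHERE name = %s",
> >>>                     ("ROOT-42",))  # placeholder volume name
> >>>         print(cur.fetchall())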
> >>>
> >>>
> >>> On Wed, Mar 19, 2014 at 7:06 PM, Mike Tutkowski <
> >>> mike.tutkow...@solidfire.com> wrote:
> >>>
> >>>> Yeah, usually "reset" (for hypervisors) means "shut down the VM and
> >>>> re-start it."
> >>>>
> >>>>
> >>>> On Wed, Mar 19, 2014 at 6:22 PM, Marcus <shadow...@gmail.com> wrote:
> >>>>
> >>>>> +1 to reset being a bad verb for this. It's too late now, however.
> >>>>>
> >>>>> On Wed, Mar 19, 2014 at 6:22 PM, Marcus <shadow...@gmail.com> wrote:
> >>>>>> The storage gets marked with the 'Destroy' state. Then it goes to
> >>>>>> 'Expunging' when the storage cleanup interval occurs. I've actually
> >>>>>> thought about leveraging that for data disks: the current delete data
> >>>>>> disk call immediately cleans up the disk, when we could instead create an
> >>>>>> API call that just moves the data disk to the Destroy state. Then there'd
> >>>>>> actually be room for an 'undo' operation where the state could be moved
> >>>>>> back to Ready, so long as the cleanup hasn't occurred.
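> >>>>>>
> >>>>>> Something like this, conceptually (a purely hypothetical sketch;
> >>>>>> neither of these calls exists in CloudStack today):
> >>>>>>
> >>>>>>     # hypothetical "soft delete with undo" flow for data disks
> >>>>>>     def soft_delete_volume(volume):
> >>>>>>         if volume.state == "Ready":
> >>>>>>             volume.state = "Destroy"  # cleaned up later by the storage cleanup task
> >>>>>>
> >>>>>>     def undo_delete_volume(volume):
> >>>>>>         # only possible in the window before the cleanup task expunges it
> >>>>>>         if volume.state == "Destroy":
> >>>>>>             volume.state = "Ready"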
> >>>>>>
> >>>>>> On Wed, Mar 19, 2014 at 4:43 PM, Nitin Mehta <nitin.me...@citrix.com> wrote:
> >>>>>>> Please feel free to open a documentation bug on JIRA if the info
> >>>>>>> doesn't exist.
> >>>>>>>
> >>>>>>> On 19/03/14 3:16 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:
> >>>>>>>
> >>>>>>>> Thanks for that background-cleanup info. I was not aware of that.
> >>>>>>>>
> >>>>>>>> I'll probably take a look into it and see how that works.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Wed, Mar 19, 2014 at 4:13 PM, Alena Prokharchyk <
> >>>>>>>> alena.prokharc...@citrix.com> wrote:
> >>>>>>>>
> >>>>>>>>> CS destroys the Root volume in the CS DB; then it's up to the storage
> >>>>>>>>> pool cleanup task to clean it up on the backend. This is a background
> >>>>>>>>> task running every storage.cleanup.interval seconds.
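> >>>>>>>>>
> >>>>>>>>> Conceptually the task amounts to a loop like this (an illustrative
> >>>>>>>>> sketch only, not the actual code; 86400 is the usual default value
> >>>>>>>>> of storage.cleanup.interval):
> >>>>>>>>>
> >>>>>>>>>     import time
> >>>>>>>>>
> >>>>>>>>>     STORAGE_CLEANUP_INTERVAL = 86400  # seconds, from the global setting
> >>>>>>>>>
> >>>>>>>>>     def storage_cleanup_loop(list_volumes_in_state, expunge):
> >>>>>>>>>         while True:
> >>>>>>>>>             # Volumes left in Destroy are expunged on the next pass, which
> >>>>>>>>>             # is why an old root disk can linger on the SR for a while
> >>>>>>>>>             for vol in list_volumes_in_state("Destroy"):
> >>>>>>>>>                 expunge(vol)
> >>>>>>>>>             time.sleep(STORAGE_CLEANUP_INTERVAL)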
> >>>>>>>>>
> >>>>>>>>> For how long do you see the volume being present on the SR?
> >>>>>>>>>
> >>>>>>>>> On 3/19/14, 3:03 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:
> >>>>>>>>>
> >>>>>>>>>> OK, sounds good; however, if this is desired behavior, does anyone
> >>>>>>>>>> know why we abandon the old root disk in the XenServer SR? It seems
> >>>>>>>>>> that CloudStack "forgets" about it and it just stays in the SR taking
> >>>>>>>>>> up space.
> >>>>>>>>>>
> >>>>>>>>>> Do people think it should be deleted?
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Wed, Mar 19, 2014 at 3:49 PM, Nitin Mehta <nitin.me...@citrix.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> I think that's what it is supposed to do. It discards the old root
> >>>>>>>>>>> disk and creates a fresh root disk for the VM, and in case the
> >>>>>>>>>>> optional field template id is passed in, the root disk is created
> >>>>>>>>>>> from this new template id.
> >>>>>>>>>>> The API name is restoreVirtualMachine. Please check whether the UI is
> >>>>>>>>>>> internally invoking this API.
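> >>>>>>>>>>>
> >>>>>>>>>>> In other words, the request looks roughly like this (a sketch; the
> >>>>>>>>>>> UUIDs are placeholders and templateid is optional):
> >>>>>>>>>>>
> >>>>>>>>>>>     params = {
> >>>>>>>>>>>         "command": "restoreVirtualMachine",
> >>>>>>>>>>>         "virtualmachineid": "<vm-uuid>",
> >>>>>>>>>>>         # optional: build the fresh root disk from a different template
> >>>>>>>>>>>         "templateid": "<new-template-uuid>",
> >>>>>>>>>>>     }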
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks,
> >>>>>>>>>>> -Nitin
> >>>>>>>>>>>
> >>>>>>>>>>> On 19/03/14 1:55 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>> Hi,
> >>>>>>>>>>>>
> >>>>>>>>>>>> I noticed today while running through some test cases for 4.4 that
> >>>>>>>>>>>> resetting a VM does not work as expected.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Instead of the typical stop and re-start behavior, where the VM is
> >>>>>>>>>>>> booted back up using the same root disk, the VM gets a new root disk
> >>>>>>>>>>>> when it is booted back up.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Can anyone confirm this finding for me with his or her setup?
> >>>>>>>>>>>>
> >>>>>>>>>>>> Thanks!
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >>
> >>
> >
> >
> >
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*(tm)*
