Mike - There is a way to restore disks in the Destroy state before they are expunged. It requires shutting down the management server, modifying the database directly, and keeping a good stock of potential offerings near your data recovery shrine.
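The thread never spells out the exact SQL, but a minimal sketch of the direct-database recovery John alludes to might look like the following. The `volumes` table and its `state` column are mentioned later in the thread; the `removed` column reset is an assumption, so verify both against your actual CloudStack schema before running anything:

```python
# Hypothetical sketch only -- run with the management server stopped.
# Assumes the `volumes` table tracks lifecycle in a `state` column
# ('Destroy' -> 'Ready') and a `removed` timestamp; both are assumptions
# to check against your schema.

def build_restore_sql(volume_id):
    """Build an UPDATE that moves a Destroy-state volume back to Ready."""
    return (
        "UPDATE volumes SET state = 'Ready', removed = NULL "
        f"WHERE id = {volume_id} AND state = 'Destroy';"
    )

print(build_restore_sql(42))
```

This only works in the window before the storage cleanup thread expunges the volume, as discussed below.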
I'm going to be covering this in my CCC Denver talk.

John

On Mar 19, 2014, at 9:59 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:

Please correct me if I'm wrong, but there does not appear to be a way to "save" the old root disk once it has gone into the Destroy state in this situation, is there? In other words, the new root disk is created, the old one is put into the Destroy state, and the old one will get deleted at the next clean-up cycle...no chance to restore that volume (even for use as a data disk).

On Wed, Mar 19, 2014 at 10:33 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:

OK, I went back and re-ran my test. I see how this works now. I was aware that volumes in the Destroy state get expunged by a background thread at some point; however, what tricked me here is that my "old" root disk no longer showed up in the Storage tab of the GUI. When I looked in the volumes table, though, I saw that the disk was in the Destroy state. I sped up the frequency of the clean-up background thread to run once every minute, and I saw the old root disk get put into the Expunged state (as you'd expect, it was no longer present in the SR).

On Wed, Mar 19, 2014 at 7:06 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:

Yeah, usually "reset" (for hypervisors) means "shut down the VM and re-start it."

On Wed, Mar 19, 2014 at 6:22 PM, Marcus <shadow...@gmail.com> wrote:

+1 to reset being a bad verb for this. It's too late now, however.

On Wed, Mar 19, 2014 at 6:22 PM, Marcus <shadow...@gmail.com> wrote:

The storage gets marked as 'Destroy' state. Then it goes to 'Expunging' when the storage cleanup interval occurs.
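The knob Mike tuned to speed up the clean-up thread is the `storage.cleanup.interval` global setting (seconds between runs), which can be changed through CloudStack's updateConfiguration API. A rough sketch of building that request follows; only the query string is shown, and request signing/authentication is omitted:

```python
# Sketch: build the query parameters for CloudStack's updateConfiguration
# API call to change how often the storage cleanup thread runs.
# Auth/signature handling is intentionally left out.
from urllib.parse import urlencode

def cleanup_interval_params(seconds):
    """Query string for setting storage.cleanup.interval to `seconds`."""
    return urlencode({
        "command": "updateConfiguration",
        "name": "storage.cleanup.interval",
        "value": str(seconds),
        "response": "json",
    })

# Run cleanup once a minute, as Mike did in his test.
print(cleanup_interval_params(60))
```

Note that, as with most global settings, the management server must be restarted for the change to take effect.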
I've actually thought about leveraging that for data disks. The current delete-data-disk operation cleans up the disk immediately, when we could instead create an API call that just moves the data disk to the Destroy state. Then there'd actually be room for an 'undo' operation where the state could be moved back to Ready, so long as the cleanup hasn't occurred.

On Wed, Mar 19, 2014 at 4:43 PM, Nitin Mehta <nitin.me...@citrix.com> wrote:

Please feel free to open a documentation bug on JIRA if the info doesn't exist.

On 19/03/14 3:16 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:

Thanks for that background-cleanup info. I was not aware of that. I'll probably take a look into it and see how that works.

On Wed, Mar 19, 2014 at 4:13 PM, Alena Prokharchyk <alena.prokharc...@citrix.com> wrote:

CS destroys the root volume in the CS DB, then it's up to the storage pool cleanup task to clean it up on the backend. This is a background task running every storage.cleanup.interval seconds. For how long do you see the volume being present on the SR?

On 3/19/14, 3:03 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:

OK, sounds good; however, if this is the desired behavior, does anyone know why we abandon the old root disk in the XenServer SR? It seems that CloudStack "forgets" about it and it just stays in the SR taking up space. Do people think it should be deleted?

On Wed, Mar 19, 2014 at 3:49 PM, Nitin Mehta <nitin.me...@citrix.com> wrote:

I think that's what it is supposed to do. It discards the old root disk and creates a fresh root disk for the VM, and if the optional template id field is passed in, the root disk is created from this new template id. The API name is restoreVirtualMachine.
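The 'undo' window Marcus proposes can be sketched as a toy state machine (this is illustrative pseudocode-made-runnable, not CloudStack code): deletion only marks the volume, and the mark can be reversed until the cleanup pass fires.

```python
# Toy model of the proposed soft-delete for data disks: delete() marks
# the volume Destroy instead of removing backing storage, undo_delete()
# works only before the cleanup thread's next pass expunges it.

class Volume:
    def __init__(self):
        self.state = "Ready"

    def delete(self):
        # Proposed behavior: mark only, leave backing storage in place.
        if self.state == "Ready":
            self.state = "Destroy"

    def undo_delete(self):
        # Possible only while the disk hasn't been expunged yet.
        if self.state == "Destroy":
            self.state = "Ready"

    def cleanup_pass(self):
        # What the storage.cleanup.interval background task would do.
        if self.state == "Destroy":
            self.state = "Expunged"

v = Volume()
v.delete()
v.undo_delete()
print(v.state)  # the undo succeeded; after cleanup_pass() it could not
```

The race between undo_delete() and cleanup_pass() is exactly why speeding up the cleanup interval, as Mike did above, shrinks the recovery window.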
Please check that the UI is internally invoking this API.

Thanks,
-Nitin

On 19/03/14 1:55 PM, "Mike Tutkowski" <mike.tutkow...@solidfire.com> wrote:

Hi,

I noticed today while running through some test cases for 4.4 that resetting a VM does not work as expected. Instead of the typical stop-and-restart behavior where the VM is booted back up using the same root disk, the VM gets a new root disk when it is booted back up.

Can anyone confirm this finding for me with his or her setup?

Thanks!

--
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud <http://solidfire.com/solution/overview/?video=play> *(tm)*

Stratosec <http://stratosec.co/> - Compliance as a Service
o: 415.315.9385
@johnlkinsella <http://twitter.com/johnlkinsella>
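For reference, the restoreVirtualMachine call Nitin names takes the VM id plus an optional template id for the fresh root disk. A rough sketch of building that request (query string only; signing and the endpoint URL are omitted, and the example ids are made up):

```python
# Sketch of a restoreVirtualMachine request: discards the old root disk
# and attaches a fresh one, optionally built from a different template.
# The ids below are placeholders; auth/signature handling is omitted.
from urllib.parse import urlencode

def restore_vm_params(vm_id, template_id=None):
    params = {"command": "restoreVirtualMachine", "virtualmachineid": vm_id}
    if template_id is not None:
        # Optional: rebuild the root disk from this template instead.
        params["templateid"] = template_id
    return urlencode(params)

print(restore_vm_params("vm-1234"))
print(restore_vm_params("vm-1234", template_id="tmpl-99"))
```

As the thread establishes, issuing this call moves the old root volume to the Destroy state, where it sits until the cleanup thread expunges it.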