Nice idea. I say 'ship it'! The code was legacy and we just made sure it worked. Nice proposal.
On 7/10/14, 6:52 PM, "Matthew Booth" <mbo...@redhat.com> wrote:

>Currently we create a rescue instance by creating a new VM with the
>original instance's image, then adding the original instance's first
>disk to it, and booting. This means we have 2 VMs, which we need to be
>careful of when cleaning up. Also when suspending, and probably other
>edge cases. We also don't support:
>
>* Rescue images other than the instance's creation image
>* Rescue of an instance which wasn't created from an image
>* Access to cinder volumes from a rescue instance
>
>I've created a dirty hack which, instead of creating a new VM, attaches
>the given rescue image to the VM and boots from it:
>
>https://review.openstack.org/#/c/106078/
>
>It works for me. It supports all of the above, doesn't require special
>handling on destroy, and works with suspend[1]. It also doesn't trigger
>the spurious warning message about unknown VMs on the cluster which,
>while unimportant in itself, is an example of an edge case caused by
>having 2 VMs.
>
>Does this seem a reasonable way to go? It would be dependent on a
>refactoring of the image cache code so we could cache the rescue image.
>
>Matt
>
>[1] If suspend of a rescued image wasn't broken at the api level,
>anyway. I have a patch for that: https://review.openstack.org/#/c/106082/
>--
>Matthew Booth
>Red Hat Engineering, Virtualisation Team
>
>Phone: +442070094448 (UK)
>GPG ID: D33C3490
>GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
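For anyone skimming the thread, here's the proposed flow as I read it, boiled
down to a runnable toy model. Every name below (FakeVM, rescue, unrescue, the
.vmdk strings) is a placeholder of mine for illustration, not the actual
nova.virt.vmwareapi code in the review:

    # Toy model only: not the driver code under review.

    class FakeVM(object):
        """Stand-in for a vSphere VM: an ordered list of disks plus a
        pointer to whichever disk is currently the boot disk."""

        def __init__(self, disks):
            self.disks = list(disks)
            self.boot_disk = self.disks[0]
            self.powered_on = True


    def rescue(vm, rescue_disk):
        """Attach the rescue disk to the *same* VM and boot from it,
        instead of building a second rescue VM."""
        vm.powered_on = False           # power off before changing disks
        vm.disks.append(rescue_disk)    # original disks stay attached
        vm.boot_disk = rescue_disk      # boot from the rescue image
        vm.powered_on = True


    def unrescue(vm, rescue_disk):
        """Detach the rescue disk and boot from the original disk again."""
        vm.powered_on = False
        vm.disks.remove(rescue_disk)
        vm.boot_disk = vm.disks[0]
        vm.powered_on = True


    vm = FakeVM(["instance-root.vmdk", "cinder-volume.vmdk"])
    rescue(vm, "rescue-image.vmdk")
    assert vm.boot_disk == "rescue-image.vmdk"
    assert "cinder-volume.vmdk" in vm.disks     # volumes stay reachable
    unrescue(vm, "rescue-image.vmdk")
    assert vm.boot_disk == "instance-root.vmdk"

The nice property is that there's only ever one VM to keep track of, so
destroy, suspend and cinder volume access don't need a separate rescue path.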