On 24 February 2014 22:14, Chris Friesen <chris.frie...@windriver.com> wrote:
> I'm looking at the live migration rollback code and I'm a bit confused.
>
> When setting up a live migration we unconditionally run
> ComputeManager.pre_live_migration() on the destination host to do various
> things, including setting up networks on that host.
>
> If something goes wrong with the live migration,
> ComputeManager._rollback_live_migration() will only call
> self.compute_rpcapi.rollback_live_migration_at_destination() if we're doing
> a block migration or a volume-backed migration that isn't on shared storage.
>
> However, looking at ComputeManager.rollback_live_migration_at_destination(),
> I see that it cleans up networking as well as block devices.
>
> What happens if we have a shared-storage instance that we try to migrate,
> fail, and end up rolling back? Are we going to end up with messed-up
> networking on the destination host because we never actually cleaned it up?
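[For readers following along: the condition Chris describes can be sketched as a standalone predicate. This is a hypothetical simplification for illustration, not the actual Nova code; the function name and boolean parameters are made up here.]

```python
def rollback_runs_at_destination(block_migration: bool,
                                 volume_backed: bool,
                                 shared_storage: bool) -> bool:
    """Mirror the decision described above: destination-side rollback
    (which also tears down networking) is only requested for a block
    migration, or a volume-backed migration not on shared storage."""
    return block_migration or (volume_backed and not shared_storage)


# The gap being reported: a plain shared-storage instance hits neither
# branch, so pre_live_migration()'s network setup is never undone.
print(rollback_runs_at_destination(block_migration=False,
                                   volume_backed=False,
                                   shared_storage=True))
```

Under this reading, the shared-storage case returns False, which is exactly the scenario where destination networking would be left behind.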
I had some WIP code up to clean that up as part of the move to conductor;
it's massively confusing right now. Looks like a bug to me.

I suspect the real issue is that some parts of:

    self.driver.rollback_live_migration_at_destination(context, instance,
        network_info, block_device_info)

need more information about whether shared storage is being used or not.

John

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev