(2014/04/21 18:10), Oleg Bondarev wrote:

On Fri, Apr 18, 2014 at 9:10 PM, Kyle Mestery <mest...@noironetworks.com> wrote:

On Fri, Apr 18, 2014 at 8:52 AM, Oleg Bondarev <obonda...@mirantis.com> wrote:
> Hi all,
>
> While investigating possible options for the nova-network to Neutron
> migration I faced a couple of issues with libvirt.
> One of the key requirements for the migration is that instances should stay
> running and not need restarting. In order to meet this requirement we need
> to either attach a new NIC to the instance or update the existing one to
> plug it into the Neutron network.

Thanks for looking into this, Oleg! I just wanted to mention that if we're
trying to plug a new NIC into the VM, this will likely require modifications
in the guest. The new NIC will likely have a new PCI ID, MAC, etc., and thus
the guest would have to switch over to it. Therefore, I think it may be better
to try to move the existing NIC from a nova network onto a neutron network.
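For reference, the "attach a new NIC without a reboot" path maps onto libvirt's
virDomainAttachDeviceFlags with the VIR_DOMAIN_AFFECT_LIVE flag. A minimal
sketch using the libvirt-python bindings is below; the instance name, MAC
address and bridge name are placeholders for illustration, not what nova
actually generates:

    import libvirt

    # Placeholder interface definition; nova builds the real XML from the
    # port's VIF details (MAC, bridge, virtio model, etc.).
    IFACE_XML = """
    <interface type='bridge'>
      <mac address='fa:16:3e:00:00:01'/>
      <source bridge='brq-example'/>
      <model type='virtio'/>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # AFFECT_LIVE asks libvirt to hot-plug the device into the running
    # domain; AFFECT_CONFIG also persists it in the domain definition.
    flags = (libvirt.VIR_DOMAIN_AFFECT_LIVE |
             libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    dom.attachDeviceFlags(IFACE_XML, flags)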
Yeah, I agree that modifying the existing NIC is the preferred way.

Thanks for investigating ways of migrating from nova-network to neutron.

I think we need to define the levels of the migration. We can't satisfy all
requirements at the same time, so we need to determine/clarify some reasonable
limitations on the migration:

- datapath downtime
  - no downtime
  - a small period of downtime
  - rebooting an instance
- API and management plane downtime
- a combination of the above

I think modifying the existing NIC requires plugging and unplugging a device
in some way (plugging/unplugging a network interface on the VM? moving a tap
device from the nova-network bridge to the neutron bridge?). This leads to a
small downtime. On the other hand, adding a new interface requires the guest
to deal with the network migration itself (though it can potentially provide a
no-downtime migration at the infrastructure level). IMO a small downtime is
acceptable in cloud use cases and is a good starting point.

Thanks,
Akihiro

> So what I've discovered is that attaching a new network device is only
> applied to the instance after a reboot, even though the VIR_DOMAIN_AFFECT_LIVE
> flag is passed to the libvirt call attachDeviceFlags():
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
> Is that expected? Are there any other options to apply a new NIC without a
> reboot?
>
> I also tried to update an existing NIC of an instance by using the libvirt
> updateDeviceFlags() call, but it fails with the following:
> 'this function is not supported by the connection driver: cannot modify
> network device configuration'
> The libvirt API support matrix (http://libvirt.org/hvsupport.html) lists
> 0.8.0 as the minimum version for the virDomainUpdateDeviceFlags call in the
> QEMU driver, and kvm --version on my setup shows
> 'QEMU emulator version 1.0 (qemu-kvm-1.0)'.
> Could someone please point out what I am missing here?

What does "libvirtd -V" show for the libvirt version? On my Fedora 20 setup,
I see the following:

[kmestery@fedora-mac neutron]$ libvirtd -V
libvirtd (libvirt) 1.1.3.4
[kmestery@fedora-mac neutron]$

Thanks,
Kyle

On my Ubuntu 12.04 it shows:

$ libvirtd --version
libvirtd (libvirt) 0.9.8

> Any help on the above is much appreciated!
>
> Thanks,
> Oleg
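For completeness, the update path Oleg describes corresponds to libvirt's
virDomainUpdateDeviceFlags. A minimal sketch with the libvirt-python bindings
follows; again the instance name, MAC and bridge are illustrative placeholders,
and the except branch simply shows where the "cannot modify network device
configuration" error quoted above would surface:

    import libvirt

    # Same MAC as the existing interface; only the source bridge changes.
    # Values here are placeholders, not what nova would generate.
    UPDATED_IFACE_XML = """
    <interface type='bridge'>
      <mac address='fa:16:3e:00:00:01'/>
      <source bridge='brq-example'/>
      <model type='virtio'/>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    try:
        dom.updateDeviceFlags(UPDATED_IFACE_XML,
                              libvirt.VIR_DOMAIN_AFFECT_LIVE)
    except libvirt.libvirtError as exc:
        # Some libvirt/QEMU driver versions reject live updates of
        # <interface> devices with the error quoted in the thread above.
        print(exc)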
_______________________________________________ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev