ping

From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com]
Sent: Friday, February 19, 2016 2:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] New BP for live migration with 
direct pci passthru

Hi, Ian,
Thanks a lot for your reply.

>In general, while you've applied this to networking (and it's not the first 
>time I've seen this proposal), the same technique will work with any device - 
>PF or VF, networking or other:
>- notify the VM via an accepted channel that a device is going to be 
>temporarily removed
>- remove the device
>- migrate the VM
>- notify the VM that the device is going to be returned
>- reattach the device
>Note that, in the above, I've not said 'PF', 'VF', 'NIC' or 'qemu'.
Yes, I absolutely agree with you, and sorry for my vague expression.
At the moment we are only attempting to support live migration of instances
directly connected to a passthru VF.
All of the device types you mentioned should eventually be covered, but I
think that needs a step-by-step plan.


>You would need to document what assumptions the guest is going to make (the 
>reason I mention this is I think it's safe to assume the device has been 
>recently reset here, but for a network device you might want to consider 
>whether the device will have the same MAC address or number of tx and rx 
>buffers, for instance).
Exactly right, there are a lot of things that should be considered, but with
regard to a VF many of them are easier to handle or avoid, for instance the
issue of keeping the same MAC address.

And in addition to what you mentioned, for a VF I think the most important
thing we need to discuss is the NIC bonding strategy - how, when, and by whom
the bond is created - as there are too many ways to run afoul of something on
the VM (NetworkManager, for instance) that would presumably cause the bonding
to fail.
For instance:
  - proactively bond the NICs when the VM is launched, via an embedded script
built with DIB?
  - or have VM administrators bond the NICs manually before the live-migration
command executes?
  - or have OpenStack components notify the VM to perform the bonding while
the live-migration command executes?
  - ...
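The third option could, for instance, ride on QGA's generic guest-exec
command. A minimal sketch of building such a request follows; the helper
script path and its arguments are pure assumptions for illustration (no such
script exists in OpenStack today), though guest-exec itself is a real QGA
command:

```python
import json

def build_qga_bond_command(script_path="/usr/local/bin/bond-vf.sh", args=None):
    """Build a QEMU Guest Agent 'guest-exec' request that would run a
    hypothetical bonding helper script inside the guest.  The script
    path and arguments are illustrative assumptions."""
    return json.dumps({
        "execute": "guest-exec",
        "arguments": {
            "path": script_path,
            "arg": args or ["--enslave", "eth1"],
            "capture-output": True,
        },
    })

print(build_qga_bond_command())
```

An OpenStack component could hand such a payload to the agent socket at the
point in the live-migration flow where the bond must be (un)made.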


Best regards,
Xiexs


From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Wednesday, February 17, 2016 3:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] New BP for live migration with 
direct pci passthru

In general, while you've applied this to networking (and it's not the first 
time I've seen this proposal), the same technique will work with any device - 
PF or VF, networking or other:
- notify the VM via an accepted channel that a device is going to be 
temporarily removed
- remove the device
- migrate the VM
- notify the VM that the device is going to be returned
- reattach the device
Note that, in the above, I've not said 'PF', 'VF', 'NIC' or 'qemu'.
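The five steps above can be sketched hypervisor-agnostically; in this sketch
the notify/detach/migrate/attach callables are placeholders (assumptions) for
whatever notification channel and plug/unplug mechanism a given deployment
provides:

```python
# Minimal sketch of the generic detach/migrate/reattach sequence.
# Only the ordering is meaningful; every callable is a stub.
def migrate_with_passthru_device(vm, device, notify, detach, migrate, attach):
    notify(vm, f"device {device} will be temporarily removed")
    detach(vm, device)
    migrate(vm)
    notify(vm, f"device {device} is about to be returned")
    attach(vm, device)

# Record the ordering with stub callables:
log = []
migrate_with_passthru_device(
    "vm-1", "vf-0",
    notify=lambda vm, msg: log.append(("notify", msg)),
    detach=lambda vm, dev: log.append(("detach", dev)),
    migrate=lambda vm: log.append(("migrate", vm)),
    attach=lambda vm, dev: log.append(("attach", dev)),
)
print([step for step, _ in log])  # ['notify', 'detach', 'migrate', 'notify', 'attach']
```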

You would need to document what assumptions the guest is going to make (the 
reason I mention this is I think it's safe to assume the device has been 
recently reset here, but for a network device you might want to consider 
whether the device will have the same MAC address or number of tx and rx 
buffers, for instance).

The method of notification I've deliberately skipped here; you have an answer
for qemu, but qemu is not the only hypervisor in the world, so this will
clearly be variable.  A metadata server mechanism is another possibility.
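That metadata-server possibility might look something like the following on
the guest side; the document shape and key names here are pure assumptions
for illustration, not an existing Nova metadata format:

```python
import json

def parse_device_events(raw):
    """Parse a hypothetical metadata document announcing pending device
    changes, e.g.
    {"device-events": [{"device": "vf-0", "action": "detach-pending"}]}.
    A guest daemon could poll such a document and react before unplug."""
    doc = json.loads(raw)
    return [(e["device"], e["action"]) for e in doc.get("device-events", [])]

raw = '{"device-events": [{"device": "vf-0", "action": "detach-pending"}]}'
print(parse_device_events(raw))  # [('vf-0', 'detach-pending')]
```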

Half of what you've described is one model of how the VM might choose to deal 
with that (and a suggestion that's come up before, in fact) - that's a model we 
would absolutely want Openstack to support (and I think the above is sufficient 
to support it), but we can't easily mandate how VMs behave, so from the 
Openstack perspective it's more a recommendation than anything we can code up.


On 15 February 2016 at 23:25, Xie, Xianshan <xi...@cn.fujitsu.com> wrote:
Hi, Fawad,


> Can you please share the link?
https://blueprints.launchpad.net/nova/+spec/direct-pci-passthrough-live-migration

Thanks in advance.


Best regards,
xiexs

From: Fawad Khaliq [mailto:fa...@plumgrid.com]
Sent: Tuesday, February 16, 2016 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] New BP for live migration with 
direct pci passthru

On Mon, Feb 1, 2016 at 3:25 PM, Xie, Xianshan <xi...@cn.fujitsu.com> wrote:
Hi, all,
  I have registered a new BP about the live migration with a direct pci 
passthru device.
  Could you please help me to review it? Thanks in advance.

Can you please share the link?


The following is the details:
----------------------------------------------------------------------------------
SR-IOV has been supported for a long while.  In the community's point of view,
pci passthru with Macvtap can probably be live migrated, but direct pci
passthru seems hard to migrate because the passthru VF is totally controlled
by the VM, so some of its internal state may be unknown to the hypervisor.

But we think the direct pci passthru model can also be live migrated with the
following combination of technologies and operations, based on the enhanced
QEMU Guest Agent (QGA) which is already supported by nova:
   1) Bond the direct pci passthru NIC with a virtual NIC.
      This keeps network connectivity up during the live migration.
   2) Unenslave the direct pci passthru NIC from the bond
   3) Hot-unplug the direct pci passthru NIC
   4) Live-migrate the guest with only the virtual NIC
   5) Hot-plug a direct pci passthru NIC on the target host
   6) Re-enslave the direct pci passthru NIC into the bond

More information about this concept can be found in [1].
[1] https://www.kernel.org/doc/ols/2008/ols2008v2-pages-261-267.pdf
----------------------------------------------------------------------------------
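As a concrete illustration of the bonding steps in that sequence, the
guest-side commands might look like the sketch below. The interface and bond
names are assumptions, the bonding driver is used in active-backup mode, and
the sketch only composes the `ip` command strings rather than executing them:

```python
# Sketch (assumption, not tested on real hardware) of the guest-side
# commands behind steps 1, 2 and 6, using the Linux bonding driver.
def bond_setup_cmds(bond="bond0", virt_nic="eth0", vf_nic="eth1"):
    # Step 1: create the bond and enslave both NICs.
    return [
        f"ip link add {bond} type bond mode active-backup",
        f"ip link set {virt_nic} down",
        f"ip link set {virt_nic} master {bond}",
        f"ip link set {vf_nic} down",
        f"ip link set {vf_nic} master {bond}",
        f"ip link set {bond} up",
    ]

def unenslave_cmd(bond="bond0", vf_nic="eth1"):
    # Step 2: release the VF before hot-unplug; traffic fails over
    # to the virtual NIC.
    return f"ip link set {vf_nic} nomaster"

def re_enslave_cmds(bond="bond0", vf_nic="eth1"):
    # Step 6: on the target host, put the new VF back into the bond.
    return [
        f"ip link set {vf_nic} down",
        f"ip link set {vf_nic} master {bond}",
    ]

for cmd in bond_setup_cmds():
    print(cmd)
```

How and when these commands run inside the guest is exactly the open question
raised earlier in the thread (DIB-embedded script, administrator action, or a
QGA-driven notification).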

Best regards,
Xiexs



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

