On Tue, Jun 11, 2013 at 3:52 AM, Pawit Pornkitprasan <p.pa...@gmail.com> wrote:
> Hi,
>
> I am implementing PCI passthrough in CloudStack for use with
> high-performance networking (10 Gigabit Ethernet/InfiniBand).
>
> The current design is to attach a PCI ID (from lspci) to a compute
> offering. (Not a network offering, since from CloudStack’s point of view
> the passed-through device has nothing to do with networking and may just
> as well be used for other things.) A host tag can be used to limit
> deployment to machines with the required PCI device.
>
> Then, when starting the virtual machine, the PCI ID is passed in the
> VirtualMachineTO to the agent (currently KVM), and the agent creates a
> corresponding <hostdev> element (
> http://libvirt.org/guide/html/Application_Development_Guide-Device_Config-PCI_Pass.html);
> libvirt then handles the rest.
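>
> As an illustrative sketch (the bus/slot/function values here are just
> examples, not fixed), the generated <hostdev> element looks roughly
> like this:
>
>     <hostdev mode='subsystem' type='pci' managed='yes'>
>       <source>
>         <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
>       </source>
>     </hostdev>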
>
> For allocation, the current idea is to use CloudStack’s capacity system (in
> the same place where allocation of CPU and RAM is determined) to limit
> deployment to one PCI-passthrough VM per physical host.
>
> The current design has several limitations, such as:
>
>    - A physical host can only run one VM with PCI passthrough, even if
>    several PCI cards with equivalent functions are available.
>    - The PCI ID is fixed inside the compute offering, so all hosts have
>    to be homogeneous and expose the device at the same PCI ID.
>
> The initial implementation is working. Any suggestions and comments are
> welcome.
>
> Thank you,
> Pawit

This looks like a compelling idea, though I am sure it is not limited to
just networking (think GPU passthrough).
How are things like live migration affected? Are you making planner
changes to deal with the limitation of a single PCI-passthrough VM
per host?
What's the level of effort to extend this to work with VMware
DirectPath I/O and PCI passthrough on XenServer?

--David
