This page [0] is not fully up to date, but you can use it for configuration
examples; at least that's what I've done. I started this process in Liberty and
then migrated to Mitaka, and while I have successfully passed a device through
to a VM from Nova, I have not tried to initialize or use that device yet since
I don't have any EFI images. In Liberty I found that Nova already ships with
all of the functionality needed for PCI passthrough, provided your hypervisor
is configured correctly. Some of it has changed in Mitaka, like needing to set
a type on the alias and the added support for booting EFI images, but in
general it is close. I think the filter (PciPassthroughFilter) is already
included in the list of available filters, so you would just have to add it to
your default filter list. I'm not sure you would even have to set up host
aggregates, just new flavors that define which aliases each flavor will
allocate. My assumption has been that scheduling other VMs on a GPU node might
starve the GPU flavors from being able to launch on that node, but I have not
tried it yet.
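If your scheduler filter list is explicit, enabling the filter is one nova.conf
change on the controller. The surrounding filters below are just the usual
defaults and will likely differ in your deployment; keep whatever you already
have and append the PCI one:

```ini
# nova.conf on the scheduler/controller node; keep your existing filters
# and append PciPassthroughFilter at the end.
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter
```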
Here's some example configuration:
pci_alias={"name": "Tesla_K80", "vendor_id": "10de", "product_id": "102d", "device_type": "type-PCI"}
pci_passthrough_whitelist={"vendor_id": "10de", "product_id": "102d"}
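To actually consume the alias, a flavor carries a pci_passthrough:alias
property. The flavor name and RAM/disk/vCPU sizing below are placeholders I
made up for illustration, not anything from my deployment:

```shell
# Create a GPU flavor (name and sizes are illustrative) and request one
# device matching the Tesla_K80 alias defined in nova.conf.
nova flavor-create gpu.k80 auto 16384 40 8
nova flavor-key gpu.k80 set "pci_passthrough:alias"="Tesla_K80:1"
```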
The API parts of that page currently seem to be integrated in the Nova
codebase but not enabled. You can query the Nova database itself to check for
the PCI devices in the pci_devices table.
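For example, something along these lines (the database name and credentials
depend on your deployment):

```shell
# List the PCI devices Nova has discovered, straight from the database.
mysql -u nova -p nova -e \
  "SELECT compute_node_id, address, vendor_id, product_id, status FROM pci_devices;"
```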
You will also have to enable the IOMMU on your hypervisors so that libvirt
exposes the capability to Nova for PCI passthrough. I use CentOS 7 and had to
set 'iommu=pt intel_iommu=on' in my kernel parameters. Along with this, you'll
have to start using EFI for your VMs by installing OVMF on your hypervisors
and configuring your images appropriately. I don't have a link handy for this,
but the gist is that legacy bootloaders have a much more complicated process
for initializing the devices passed to the VM, whereas EFI is much easier.
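On CentOS 7 with grub2, the hypervisor- and image-side pieces look roughly
like this. The hw_firmware_type property is the standard Glance property for
requesting UEFI firmware, but double-check it against your release's docs:

```shell
# Persist the IOMMU kernel parameters (CentOS 7 / grub2), then reboot.
grubby --update-kernel=ALL --args="iommu=pt intel_iommu=on"

# Install the OVMF UEFI firmware on the hypervisor.
yum install -y OVMF

# Mark an image as UEFI-bootable so Nova skips the legacy BIOS path.
glance image-update --property hw_firmware_type=uefi <image-id>
```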
[0]: https://wiki.openstack.org/wiki/Pci_passthrough
Peter Nordquist
-----Original Message-----
From: Jonathan Proulx [mailto:[email protected]]
Sent: Monday, May 09, 2016 13:13
To: [email protected]
Subject: [Openstack-operators] How are folks providing GPU instance types?
Hi All,
Having trouble finding any current info on best practices for providing GPU
instances. Most of what Google is feeding me is Grizzly or older.
I'm currently on Kilo (Mitaka upgrade planned in 60-90 days) with
Ubuntu14.04 and kvm hypervisor. Looking to add some NVidia GPUs but haven't
invested in hardware yet. Presumably I'd be using host aggregates and instance
metadata to separate these out from the general pool, so not tied to KVM,
though it would be nice to have image compatibility across GPU and non-GPU
instance types (this is currently 'raw' images in Ceph RBD).
Any pointers to good info online or general advice as I travel down this path?
I suppose particularly urgent is any hardware caveats I need to be aware of
before I sink cash into the wrong thing (I'm presuming that the K10, K20, K40,
and K80 are all equivalent in this regard).
-Jon
--
_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators