Sundar-

> On an unrelated note, thanks for the pointer to the GPU spec
> (https://review.openstack.org/#/c/579359/10/doc/source/specs/rocky/device-passthrough.rst).
> I will review that.
Thanks. Please note that this is for nova-powervm, PowerVM's
*out-of-tree* compute driver. We hope to bring this into the in-tree
driver eventually (unless we skip straight to the cyborg model :) but it
should give a good idea of some of the requirements and use cases we're
looking to support.

> Fair enough. We had discussed that too. The Cyborg drivers can also
> invoke REST APIs etc. for Power.

Ack.

> Agreed. So, we could say:
> - The plugins do the instance half. They are hypervisor-specific and
>   platform-specific. (The term 'platform' subsumes both the
>   architecture (Power, x86) and the server/system type.) They are
>   invoked by os-acc.
> - The drivers do the device half, device discovery/enumeration and
>   anything not explicitly assigned to plugins. They contain
>   device-specific and platform-specific code. They are invoked by
>   Cyborg agent and os-acc.

Sounds good.

> Are you ok with the workflow in
> https://docs.google.com/drawings/d/1cX06edia_Pr7P5nOB08VsSMsgznyrz4Yy2u8nb596sU/edit?usp=sharing
> ?

Yes (but see below).

>> You mean for getVAN()?
> Yes -- BTW, I renamed it as prepareVANs() or prepareVAN(), because it
> is not just a query as the name getVAN implies, but has side effects.

Ack.

>> Because AFAIK, os_vif.plug(list_of_vif_objects,
>> InstanceInfo) is *not* how nova uses os-vif for plugging.
>
> Yes, the os-acc will invoke the plug() once per VAN. IIUC, Nova calls
> Neutron once per instance for all networks, as seen in this code
> sequence in nova/nova/compute/manager.py:
>
>     _build_and_run_instance() --> _build_resources() -->
>     _build_networks_for_instance() --> _allocate_network()
>
> The _allocate_network() actually takes a list of requested_networks,
> and handles all networks for an instance [1].
> Chasing this further down:
>
>     _allocate_network() --> _allocate_network_async() -->
>     self.network_api.allocate_for_instance()
>     == nova/network/rpcapi.py::allocate_for_instance()
>
> So, even the RPC out of Nova seems to take a list of networks [2].

Yes, but by the time we get to os_vif.plug(), we're doing one VIF at a
time. That corresponds to what you've got in your flow diagram, so as
long as that's accurate, I'm fine with it.

That said, we could discuss os_acc.plug() taking a list of VANs and
threading out the calls to the plugin's plug() method (which takes one
at a time). I think we've talked a bit about this before: the pros and
cons of having the threading managed by os-acc or by the plugin. We
could have the same discussion for prepareVANs() too.

> [1]
> https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1529
> [2]
> https://github.com/openstack/nova/blob/master/nova/network/rpcapi.py#L163

>> Thanks,
>> Eric
>
> Regards,
> Sundar

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
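For concreteness, a minimal sketch of the "os-acc manages the threading" option discussed above: os_acc.plug() takes a list of VANs and fans out per-VAN calls to a plugin whose plug() handles one VAN at a time. All names here (FakePlugin, the plug() signatures, the VAN strings) are illustrative assumptions, not the actual os-acc API.

```python
# Hypothetical sketch: os-acc threading out per-VAN plug() calls to a
# plugin that handles one VAN at a time. Not the real os-acc interface.
from concurrent.futures import ThreadPoolExecutor


class FakePlugin:
    """Stand-in for a hypervisor/platform-specific plugin."""

    def plug(self, van, instance_info):
        # A real plugin would attach this one accelerator to the
        # instance; here we just record that it was handled.
        return "plugged:%s" % van


def plug(plugin, vans, instance_info, max_workers=4):
    """os_acc.plug() taking a list of VANs and threading out the
    one-at-a-time calls to the plugin's plug() method."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(plugin.plug, v, instance_info)
                   for v in vans]
        # Collect in submission order so results line up with vans.
        return [f.result() for f in futures]


results = plug(FakePlugin(), ["van1", "van2"], instance_info={})
print(results)  # ['plugged:van1', 'plugged:van2']
```

The alternative (plugin manages the threading) would move the ThreadPoolExecutor inside the plugin and have os_acc.plug() pass the whole list through; the trade-off is whether every plugin reimplements fan-out or os-acc imposes one concurrency policy on all plugins.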
