Public bug reported:

Currently only admin users can create SR-IOV direct ports that are marked as switch_dev, i.e. the ones that do OVS hardware offload.
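For context, creating one of these ports today means setting the switchdev capability in the port's binding profile, which the default policy restricts to admins. A minimal sketch (the network and port names are placeholders):

  openstack port create --network private --vnic-type direct \
      --binding-profile '{"capabilities": ["switchdev"]}' direct_port1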
If we use Neutron RBAC to allow access to the binding profile, users can cause serious problems by modifying it after port binding, once Nova has stored private details in it such as the specific PCI device that has been bound.

Example operator deployments:

* all SR-IOV direct ports are hardware offloaded with switch_dev using OVN
* all SR-IOV direct ports are hardware offloaded with switch_dev using OVS ML2
* all SR-IOV direct ports are created using the legacy SR-IOV driver
* (not a use case I have, but theoretically it is possible) some mix of the above

From an end user's perspective, a direct NIC looks the same whether or not it is hardware offloaded and whether it is provided by OVS ML2 or OVN; which one is in use is an operator-internal setup detail. Why do we expose this to the end user in this way? Ideally the user doesn't care about these differences, and the Neutron API is able to abstract away the implementation details of getting a direct type of vNIC. Moreover, we don't want users to have to know how their operator has configured the system, be it OVN or OVS or SR-IOV. They just want the direct NIC that gets them RDMA within their VM for the requested Nova flavor of their instance.

This could be fixed by having the OVN and OVS ML2 drivers respect a configuration option like the following, for the non-mixed case (see the sketch at the end of this report):

  bind_direct_nics_as_switch_dev = True/False

At the PTG it was mentioned that this is configuration that changes what the API does. But I don't understand how choosing OVN vs OVS ML2 is any different from choosing hardware-offloaded vs non-hardware-offloaded direct NICs.

** Affects: neutron
     Importance: Undecided
         Status: New

** Tags: rfe

https://bugs.launchpad.net/bugs/2013228

Title: Non-admin users unable to create SR-IOV switch dev ports

Status in neutron: New
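To make the proposed fix concrete, a rough sketch of what such an option might look like. This option does not exist today; the file, section name and exact semantics below are assumptions for illustration, not an agreed design:

  # /etc/neutron/plugins/ml2/ml2_conf.ini (hypothetical option, not implemented)
  [ml2]
  # When True, bind every direct vNIC as switch_dev (OVS hardware offload)
  # without requiring the user to set the capability in binding:profile.
  bind_direct_nics_as_switch_dev = True

With something like this in place, a plain non-admin "openstack port create --vnic-type direct" would be enough, and binding:profile could remain admin-only.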