Also, the kernel drivers have no concept of passing VF messages to upstream 
"decision making" (or policy enforcement) software like VFd.

On Jan 11, 2017, at 9:49 AM, Kaustubh Joshi <kaust...@research.att.com> wrote:

When Alex from our team started working on Niantic last year, these were the 
gaps in the kernel drivers that we needed to fill (a rough sketch of how a few 
of them look as DPDK PF-side calls follows the list):

Direct traffic to a VF based on more than one outer VLAN tag
Optionally strip the VLAN tag on ingress (to the PF) and insert it on egress
Disable/enable MAC and VLAN anti-spoofing separately
Mirror traffic from one VF to another
Enable/disable local switching per VF
Collect per-VF statistics (packets/octets in and out)
Enable/disable multicast and unknown unicast per VF
Manage up to 8 traffic classes (TCs) per VF, with one strict-priority queue
Manage per-VF, per-TC bandwidth allocations
Manage LACP status visibility to the VFs (for NIC teaming using SRIOV)
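To make the shape of these knobs concrete, here is a rough sketch of how a few 
of them surface as PF-side calls in the DPDK ixgbe PMD (rte_pmd_ixgbe.h). The 
exact argument widths have changed across DPDK releases, so treat this as 
illustrative rather than authoritative:

#include <rte_pmd_ixgbe.h>

/* Illustrative per-VF setup from the PF port; error handling omitted. */
static void
configure_vf(uint16_t pf_port, uint16_t vf)
{
	/* MAC and VLAN anti-spoofing, toggled independently. */
	rte_pmd_ixgbe_set_vf_mac_anti_spoof(pf_port, vf, 1);
	rte_pmd_ixgbe_set_vf_vlan_anti_spoof(pf_port, vf, 0);

	/* Enable VLAN stripping on this VF's receive queues. */
	rte_pmd_ixgbe_set_vf_vlan_stripq(pf_port, vf, 1);

	/* Insert VLAN 100 on transmit from this VF. */
	rte_pmd_ixgbe_set_vf_vlan_insert(pf_port, vf, 100);

	/* Cap this VF's transmit rate (in Mbps) on queue 0. */
	rte_pmd_ixgbe_set_vf_rate_limit(pf_port, vf, 1000, 0x1);
}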

Most of these are VF management functions, and there is no standardized way to 
do VF management in the kernel drivers. Besides, most of the use cases around 
SRIOV need DPDK in the VF anyway (so the target communities are aligned), and 
the PF DPDK driver for ixgbe already existed, so it made sense to add them 
there: no forking of the PF driver was involved, and there is no additional 
duplicate code.

Cheers

KJ


On Jan 11, 2017, at 6:03 AM, Vincent Jardin <vincent.jar...@6wind.com> wrote:

Please can you list the gaps of the Kernel API?

Thank you,
Vincent


On January 11, 2017, at 3:59:45 AM, "JOSHI, KAUSTUBH (KAUSTUBH)" <kaust...@research.att.com> wrote:

Hi Vincent,

Greetings! Jumping into this debate a bit late, but let me share our point of 
view based on how we are using this code within AT&T for our NFV cloud.

Actually, we first started with trying to do the configuration within the 
kernel drivers as you suggest, but quickly realized that besides the practical 
problem of kernel upstreaming being a much more arduous road (which can be 
overcome), the bigger problem was that there is no standardization of the NIC 
configuration interfaces in the kernel community. Different drivers do things 
differently and expose different settings, and no forum exists to drive 
towards such standardization. This was leading vendors to maintain patched 
versions of drivers for PF configuration, which is not a desirable situation.

So, to build a portable (multi-NIC) SRIOV VF manager like VFd, DPDK seemed 
like a good forum, with some hope of driving towards a standard set of 
interfaces and without having to worry about a lot of legacy baggage and old 
hardware. Especially since DPDK already takes on the role of configuring NICs 
for data plane functions (both PF and VF drivers have to be included for data 
plane usage anyway), we viewed that adding VF config options would not cause 
any forking, but simply flesh out the DPDK drivers and their interfaces to be 
more complete. These APIs could be optional, so new vendors aren't obligated 
to add them.
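In practice an application can keep such vendor-specific calls optional by 
dispatching on the reported driver name; a minimal sketch (the "ixgbe" 
substring match is an assumption, check what rte_eth_dev_info_get() reports on 
your system):

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_pmd_ixgbe.h>

/* Issue the ixgbe-specific PF call only when the port is actually driven
 * by the ixgbe PMD; other vendors simply report "not supported". */
static int
vf_set_mac_anti_spoof(uint16_t pf_port, uint16_t vf, uint8_t on)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(pf_port, &info);
	if (strstr(info.driver_name, "ixgbe") != NULL)
		return rte_pmd_ixgbe_set_vf_mac_anti_spoof(pf_port, vf, on);

	return -ENOTSUP;
}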

Furthermore, doing VF config through the DPDK PF driver has the side benefit 
of allowing a complete SRIOV system (both VF and PF) to be built entirely with 
DPDK, which also makes version alignment easier.

We started with Niantic, which already had PF and VF drivers, and things have 
worked out very well with it. However, we would like VFd to be a multi-NIC, 
vendor-agnostic VF management tool, which is why we've been asking for richer 
PF config APIs.

Regards

KJ


On Jan 10, 2017, at 3:23 PM, Vincent Jardin <vincent.jar...@6wind.com> wrote:

Nope. First, one needs to assess whether DPDK should be used so intensively 
that it becomes a PF driver, knowing that Linux can do the job. The Linux 
kernel community does not like the forking of kernel drivers, and I tend to 
agree that we should not keep duplicating options that can be solved with the 
Linux kernel.

Best regards,
Vincent






