Hi Vladislav,

From: Vladislav Zolotarov [mailto:vl...@cloudius-systems.com]
Sent: Friday, December 26, 2014 1:16 PM
To: Ouyang, Changchun
Cc: dev at dpdk.org
Subject: RE: [dpdk-dev] [PATCH v3 0/6] Enable VF RSS for Niantic
On Dec 26, 2014 4:41 AM, "Ouyang, Changchun" <changchun.ouyang at intel.com> wrote:
>
> > -----Original Message-----
> > From: Vlad Zolotarov [mailto:vladz at cloudius-systems.com]
> > Sent: Thursday, December 25, 2014 8:46 PM
> > To: Ouyang, Changchun; dev at dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v3 0/6] Enable VF RSS for Niantic
> >
> > On 12/25/14 04:26, Ouyang, Changchun wrote:
> > > Hi,
> > >
> > >> -----Original Message-----
> > >> From: Vlad Zolotarov [mailto:vladz at cloudius-systems.com]
> > >> Sent: Wednesday, December 24, 2014 6:49 PM
> > >> To: Ouyang, Changchun; dev at dpdk.org
> > >> Subject: Re: [dpdk-dev] [PATCH v3 0/6] Enable VF RSS for Niantic
> > >>
> > >> On 12/24/14 07:22, Ouyang Changchun wrote:
> > >>> This patch enables VF RSS for Niantic, which allows each VF to have
> > >>> at most 4 queues.
> > >>> The actual number of queues per VF depends on the total number of
> > >>> pools, which is determined by the total number of VFs at PF
> > >>> initialization time and by the number of queues specified in the
> > >>> configuration:
> > >>> 1) If the number of VFs is in the range 1 to 32 and the number of
> > >>> rxq is 4 ('--rxq 4' in testpmd), then there are 32 pools in total
> > >>> (ETH_32_POOLS) and each VF has 4 queues;
> > >>>
> > >>> 2) If the number of VFs is in the range 33 to 64 and the number of
> > >>> rxq is 2 ('--rxq 2' in testpmd), then there are 64 pools in total
> > >>> (ETH_64_POOLS) and each VF has 2 queues;
> > >>>
> > >>> On the host, to enable VF RSS functionality, the RX mq mode should
> > >>> be set to ETH_MQ_RX_VMDQ_RSS or ETH_MQ_RX_RSS, and SR-IOV mode
> > >>> should be activated (max_vfs >= 1).
> > >>> The VF RSS information, such as the hash function, RSS key and RSS
> > >>> key length, also needs to be configured.
> > >>> The limitation of Niantic VF RSS is that the hash function and key
> > >>> are shared among the PF and all VFs, and the RETA table with 128
> > >>> entries is also shared among the PF and all VFs. So it is not a good
> > >>> idea to query the hash and RETA content per VF on the guest;
> > >>> instead, it makes sense to query them on the host (PF).
> > >>> v3 change:
> > >>>   - More cleanup;
> > >> This series is still missing the appropriate patches in the
> > >> rte_eth_dev_info_get() flow to return a reta_size for a VF device,
> > >> and to rte_eth_dev_rss_reta_query() in the context of a VF device
> > >> (I haven't noticed the initialization of a dev->dev_ops->reta_query
> > >> for the VF device in this series).
> > >>
> > >> Without these code bits it's impossible to work with VF devices in
> > >> the RSS context the same way we work with PF devices. It means we'll
> > >> have to do some special branching to handle the VF device, and this
> > >> voids the whole meaning of the framework, which in turn is very
> > >> unfortunate.
> > >>
> > > Again, please try to query the RETA content on the PF/host; this is
> > > due to a HW limitation.
> >
> > Again, I'm using DPDK from inside a guest OS on the Amazon cloud. I
> > have no access to the PF, and never will, for obvious reasons, so I
> > can't query it.
> > Which HW limitations are you referring to? It's clearly a software
> > issue: the VF-PF channel protocol should have a message to negotiate
> > this, but it looks like Intel hasn't cared to implement it yet, unless
> > I'm missing something here.
> > The problems don't end with the RETA. What about the hash key, which
> > is also shared? There isn't an appropriate message to query it either.
> > This is not a pure DPDK issue - it's a general issue with the Linux
> > 82599 drivers.
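[For context, this is roughly the generic ethdev flow being discussed, seen
from the application side. A minimal sketch using DPDK 1.8-era API names;
the helper itself is hypothetical and not part of the patch series. On an
ixgbe VF this is exactly the path that cannot succeed until
dev_ops->reta_query is wired up and a VF-PF mailbox message exists to fetch
the shared RETA/key from the PF.]

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: dump a port's RSS redirection table and hash key
 * through the generic ethdev API. */
static int
dump_vf_rss_state(uint8_t port_id)
{
    struct rte_eth_dev_info dev_info;
    struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
                                              RTE_RETA_GROUP_SIZE];
    uint8_t key[40];                 /* 40 bytes: the 82599 RSS key size */
    struct rte_eth_rss_conf rss_conf = {
        .rss_key = key,
        .rss_key_len = sizeof(key),
    };
    uint16_t i;
    int ret;

    memset(&dev_info, 0, sizeof(dev_info));
    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.reta_size == 0 ||
        dev_info.reta_size > ETH_RSS_RETA_SIZE_512)
        return -ENOTSUP;             /* PMD reports no usable RETA size */

    /* Mark every entry of every 64-entry group as "please report it". */
    memset(reta_conf, 0, sizeof(reta_conf));
    for (i = 0; i < dev_info.reta_size / RTE_RETA_GROUP_SIZE; i++)
        reta_conf[i].mask = ~0ULL;

    ret = rte_eth_dev_rss_reta_query(port_id, reta_conf, dev_info.reta_size);
    if (ret != 0)
        return ret;                  /* fails on today's ixgbe VF PMD */

    for (i = 0; i < dev_info.reta_size; i++)
        printf("RETA[%3u] -> rx queue %u\n", i,
               reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE]);

    /* The hash key has the same problem: it is shared with the PF and the
     * VF has no mailbox message to read it back. */
    return rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
}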
> >
> On this point I agree with you, it is not purely a DPDK issue.
> DPDK uses shared code (partly taken from the Linux 82599 driver, and
> logically similar to it) from another division; the HW is also
> implemented by that division.
> If we are talking about the HW limitation, and about modifying the shared
> code and the Linux 82599 driver to fully enable all of the VF RSS
> functionality on Niantic, then just discussing it on the dpdk.org mailing
> list may not be enough.
> Personally, I think you may need to find another, more effective and
> efficient way to raise this further request.

You're right. Apparently at this point you just can't "meet my demands",
because there's no support for these queries in the PF driver. ;) So, let's
move on. I'll send the appropriate patch on the netdev list and we'll see
how it goes from there.

Changchun: Yes, please go forward with it.

> >
> > > It doesn't affect any functionality, just the querying is special.
> >
> > How can you call the fact that some DPDK API functionality is missing
> > "it doesn't affect any functionality"? Of course it affects it. Just
> > like I said, it may cause us to treat the VF in a special way while
> > there is no real reason to do so.
> I mean that each VF can use multiple queues and do RSS on packets; this
> functionality works well.
> If you can bypass the querying issue, or handle it specially, you can
> still use VF RSS.
>
> > > Before this patch, customers were often told that Niantic can't
> > > support VF RSS, but after many experiments we found that it still has
> > > limited VF RSS functionality.
> > > Even then, the Linux ixgbe driver allows at most 2 queues per VF,
> > > while DPDK can enable 4 queues per VF.
> > > In summary, DPDK can support VF RSS on Niantic with at most 4 queues
> > > per VF, but the querying of the RETA is very limited due to the HW
> > > limitation.
> >
> > Limited? I meant "missing", right?
> We mean the same thing in different ways:
> "very limited" means it can still be queried on the PF, but it is missing
> (cannot be done) on the VF/guest.

Again, agreed. I'm just frustrated that we have to rework such a nice
design just because this simple piece of software is missing in the PF.

> Hope you are on the same page now.

We are now...

Changchun: Very glad to see this!

Thanks and regards,
Changchun
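[For completeness, the guest-side setup described in the cover letter - an
ixgbe VF configured with ETH_MQ_RX_RSS and 4 RX queues, which is valid when
the PF was initialized with at most 32 VFs (ETH_32_POOLS) - would look
roughly like the sketch below. DPDK 1.8-era API names are assumed; the
helper name, descriptor counts and mempool handling are illustrative
assumptions, not taken from the patch series.]

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

#define VF_RSS_NB_RX_QUEUES 4   /* 4 queues/VF requires <= 32 VFs on the PF */

/* Hypothetical helper: configure a VF port for RSS across its RX queues. */
static int
configure_vf_rss(uint8_t port_id, struct rte_mempool *mbuf_pool)
{
    struct rte_eth_conf port_conf;
    uint16_t q;
    int ret;

    memset(&port_conf, 0, sizeof(port_conf));
    port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;        /* enable RSS on the VF */
    port_conf.rx_adv_conf.rss_conf.rss_key = NULL;   /* keep the shared key  */
    port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;

    ret = rte_eth_dev_configure(port_id, VF_RSS_NB_RX_QUEUES, 1, &port_conf);
    if (ret != 0)
        return ret;

    for (q = 0; q < VF_RSS_NB_RX_QUEUES; q++) {
        ret = rte_eth_rx_queue_setup(port_id, q, 128,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL, mbuf_pool);
        if (ret != 0)
            return ret;
    }

    ret = rte_eth_tx_queue_setup(port_id, 0, 512,
                                 rte_eth_dev_socket_id(port_id), NULL);
    if (ret != 0)
        return ret;

    return rte_eth_dev_start(port_id);
}

With this, incoming flows are spread by the (PF-shared) hash function over
the VF's 4 RX queues; only the read-back of the RETA and key from the guest
remains unsupported, as discussed above.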