Hi, I haven't followed DPDK development too closely lately. Were those problems fixed already?
On Thu, Mar 05, 2015 at 06:56:14AM +0000, Zhang, Helin wrote:
> > -----Original Message-----
> > From: Gleb Natapov [mailto:gleb at cloudius-systems.com]
> > Sent: Thursday, March 5, 2015 2:39 PM
> > To: Zhang, Helin
> > Cc: dev at dpdk.org
> > Subject: Re: i40e and RSS woes
> >
> > On Thu, Mar 05, 2015 at 05:56:27AM +0000, Zhang, Helin wrote:
> > > Hi Gleb,
> > >
> > > Sorry for the late reply! I am struggling with my tasks for the upcoming DPDK release these days.
> > >
> > > > -----Original Message-----
> > > > From: Gleb Natapov [mailto:gleb at cloudius-systems.com]
> > > > Sent: Monday, March 2, 2015 8:56 PM
> > > > To: dev at dpdk.org
> > > > Cc: Zhang, Helin
> > > > Subject: Re: i40e and RSS woes
> > > >
> > > > Ping.
> > > >
> > > > On Thu, Feb 19, 2015 at 04:50:10PM +0200, Gleb Natapov wrote:
> > > > > CCing the i40e driver author in the hope of getting an answer.
> > > > >
> > > > > On Mon, Feb 16, 2015 at 03:36:54PM +0200, Gleb Natapov wrote:
> > > > > > I have an application that works reasonably well with the ixgbe driver, but when I try to use it with i40e I run into various RSS-related issues.
> > > > > >
> > > > > > The first one is that, for some reason, when i40e builds the default RETA table it rounds the number of queues down to a power of two. Why is this?
> > > It seems to be because of the i40e queue configuration. We will check it further and see if it can be changed or improved later.
> >
> > Thanks. As I said below, when I configure the RETA myself everything works as expected - traffic is received on all queues - so I am curious whether my code can break in some scenario.
> >
> > > > > > If I configure the RETA on my own, using all of the queues, everything seems to work. To add insult to injury, I do not get any errors during configuration; some queues simply do not receive any traffic.
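By "configuring the RETA myself" I mean filling the whole redirection table round-robin over all configured RX queues, roughly as in the sketch below. This is only a sketch from memory against the ethdev API (rte_eth_dev_info_get() plus rte_eth_dev_rss_reta_update()); exact types, field names and macros may differ between DPDK versions.

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Fill the whole redirection table round-robin over all configured RX
 * queues instead of relying on the PMD's power-of-two default. */
static int
spread_reta_over_all_queues(uint16_t port_id, uint16_t nb_rx_queues)
{
        struct rte_eth_dev_info dev_info;
        uint16_t i;

        rte_eth_dev_info_get(port_id, &dev_info);
        if (dev_info.reta_size == 0)
                return -1;      /* PMD does not report a RETA */

        struct rte_eth_rss_reta_entry64 reta[dev_info.reta_size / RTE_RETA_GROUP_SIZE];
        memset(reta, 0, sizeof(reta));

        for (i = 0; i < dev_info.reta_size; i++) {
                reta[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
                reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                        i % nb_rx_queues;       /* use every queue, not just a power of two */
        }

        return rte_eth_dev_rss_reta_update(port_id, reta, dev_info.reta_size);
}

With something like that in place, traffic was spread over all queues, as mentioned above.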
> >
> > > > > > The second problem is that, for some reason, i40e does not use a 40-byte Toeplitz hash key like every other driver; it expects the key to be 52 bytes. That would have been fine (if we ignore the fact that it contradicts the MS spec), but how is my high-level code supposed to know that?
> > > Actually, an rss_key_len field was introduced in struct rte_eth_rss_conf recently, so the length should be indicated clearly. But I found that the annotations of that structure need to be reworked. I will try to rework them with clear descriptions.
> >
> > I saw rss_key_len, of course; my question is how my code is supposed to know what value to set it to. Why is the required key length not part of a device capability query (or is it, and I missed it)? The only way I found to get the key length is to query the device for a key and check rss_key_len: if it is zero the key is 40 bytes, otherwise it is whatever rss_key_len says. That is more of a hack than a proper way to do it.
> I think it was missed. I will add it soon.
>
> > > > > > And again, device configuration does not fail when a key of the wrong length is provided; it just uses some other key. Guys, this kind of error handling is completely unacceptable.
> > > If a key of the wrong length is provided, it will not be used at all; the default key will be used instead. So there is no issue such as you describe. But we do need to add a clearer description to the rte_eth_rss_conf structure.
> >
> > What you've said above is exactly the issue! My code does not work if the key used by the HW is not the same as the one set by the application, and since I get no error when my setting is ignored, there is no way for me to know that my application will not work (short of querying the key back and comparing it, which is again a hack). Device configuration should fail if it cannot apply my settings.
> After I checked the code, different PMDs may have different implementations here. Returning an error might be the best way for all PMDs. I will unify it later.
>
> Really good findings and suggestions from you! Thank you very much!
>
--
Gleb.
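P.S. For the archives, the "query the key back and compare" hack I was describing looks roughly like the sketch below: set the key, read the RSS configuration back with rte_eth_dev_rss_hash_conf_get(), treat rss_key_len == 0 as the default 40-byte key, and compare. It is a sketch only - error handling is minimal, the rss_hf bits are just an example, and exact prototypes (e.g. the port_id type) vary between DPDK versions.

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Set an RSS key, then read the configuration back to check that the HW
 * really uses it, since a key of the wrong length is silently ignored. */
static int
set_and_verify_rss_key(uint16_t port_id, uint8_t *key, uint8_t key_len)
{
        struct rte_eth_rss_conf set_conf = {
                .rss_key = key,
                .rss_key_len = key_len,
                .rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
        };
        uint8_t readback[64];           /* > 52 bytes, enough for i40e */
        struct rte_eth_rss_conf get_conf = {
                .rss_key = readback,
                .rss_key_len = sizeof(readback),
        };

        if (rte_eth_dev_rss_hash_update(port_id, &set_conf) != 0)
                return -1;
        if (rte_eth_dev_rss_hash_conf_get(port_id, &get_conf) != 0)
                return -1;

        /* rss_key_len == 0 is how a PMD reports the default 40-byte key */
        uint8_t hw_len = get_conf.rss_key_len ? get_conf.rss_key_len : 40;
        if (hw_len != key_len || memcmp(readback, key, key_len) != 0)
                return -1;              /* the device is not using our key */
        return 0;
}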