Hi Allain,

On Wed, Mar 29, 2017 at 12:29:59PM +0000, Legacy, Allain wrote:
> > -----Original Message-----
> > From: Nélio Laranjeiro [mailto:nelio.laranje...@6wind.com]
> > Sent: Wednesday, March 29, 2017 5:45 AM
> > <...>
> > > Almost... the only difference is that the ETH pattern also checks
> > > for type=0x8100
> >
> > Ethernet type was not supported in DPDK 17.02; it was submitted later
> > in March [1].  Did you embed the patch in your test?
>
> No, but I am using the default eth mask (rte_flow_item_eth_mask) so it
> looks like it is accepting any ether type even though I set the vlan
> type along with the src+dst.

Right,
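For what it is worth, below is a minimal sketch (against the 17.02
rte_flow API; the port id, addresses and mark id are placeholders) of
the kind of rule being discussed.  The default rte_flow_item_eth_mask
has .type == 0x0000, so even if the spec carries type=0x8100 the
EtherType is effectively wildcarded:

#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Sketch: mark ingress packets matching src+dst MAC.  With the default
 * eth mask the type field below is NOT matched (its mask is 0x0000). */
static struct rte_flow *
mac_mark_rule(uint8_t port_id, const struct ether_addr *src,
              const struct ether_addr *dst, uint32_t mark_id,
              struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_eth eth = {
                .type = rte_cpu_to_be_16(0x8100), /* masked out below */
        };
        struct rte_flow_item pattern[] = {
                {
                        .type = RTE_FLOW_ITEM_TYPE_ETH,
                        .spec = &eth,
                        .mask = &rte_flow_item_eth_mask, /* type: 0x0000 */
                },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_mark mark = { .id = mark_id };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        ether_addr_copy(src, &eth.src);
        ether_addr_copy(dst, &eth.dst);
        return rte_flow_create(port_id, &attr, pattern, actions, err);
}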
> > > > Can you compile in debug mode (by setting
> > > > CONFIG_RTE_LIBRTE_MLX5_DEBUG to "y")?  Then you should have as
> > > > many prints for the created rules as for the destroyed ones.
> > >
> > > I can give that a try.
>
> I ran with debug logs enabled and there are no logs coming from the
> PMD that indicate an error.  All create and destroy calls report a
> successful result.
>
> I modified my test slightly yesterday to try to determine what is
> happening.  What I found is that if I use a smaller number of flows
> the problem does not happen, but as soon as I use 256 flows or more
> the problem manifests itself.  What I mean is:
>
> test 1:
>   1) start 16 flows (16 unique src MAC addresses sending to 16 unique
>      dst MAC addresses)
>   2) create flow rules
>   3) check that all subsequent packets are marked correctly
>   4) stop traffic
>   5) destroy all flow rules
>   6) wait 15 seconds
>   7) repeat from (1) for 4 iterations.
>
> test 2: same as test 1 but with 32 flows
>
> test 3: same as test 1 but with 64 flows
>
> test 4: same as test 1 but with 128 flows
>
> test 5: same as test 1 but with 256 flows (this is where the problem
>         starts happening)... it could very well be somewhere closer to
>         128, but I am stepping up by powers of 2, so this is the first
>         occurrence.
>
> I also modified my test to destroy flow rules in the opposite order to
> that in which I created them, just in case ordering is an issue, but
> that had no effect.

I found an issue in the flow id retrieval while receiving a high rate
of the same flow [1].  You may be hitting the same issue.  Can you
verify with the patch?

Thanks,

[1] http://dpdk.org/dev/patchwork/patch/22897/

--
Nélio Laranjeiro
6WIND
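P.S.: in case it helps with reproducing, the RX-side mark check I would
expect looks roughly like the sketch below (placeholders throughout;
the burst size and port/queue ids are arbitrary).  The value read back
from hash.fdir.hi is presumably the one affected by the retrieval
issue in [1]:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: drain one RX queue and count packets whose MARK id does not
 * match the id given to the MARK action at rule creation time. */
static unsigned int
count_bad_marks(uint8_t port_id, uint16_t queue_id, uint32_t expected)
{
        struct rte_mbuf *bufs[32];
        unsigned int bad = 0;
        uint16_t i, nb;

        nb = rte_eth_rx_burst(port_id, queue_id, bufs, 32);
        for (i = 0; i != nb; ++i) {
                /* PKT_RX_FDIR_ID means hash.fdir.hi holds the mark. */
                if (!(bufs[i]->ol_flags & PKT_RX_FDIR_ID) ||
                    bufs[i]->hash.fdir.hi != expected)
                        bad++;
                rte_pktmbuf_free(bufs[i]);
        }
        return bad;
}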