On Fri, 31 Mar 2017 09:41:39 +0100, Bruce Richardson 
<bruce.richard...@intel.com> wrote:
> On Fri, Mar 31, 2017 at 10:26:10AM +0200, Olivier Matz wrote:
> > Hi,
> > 
> > On Fri, 31 Mar 2017 01:00:49 +0000, "Ananyev, Konstantin" 
> > <konstantin.anan...@intel.com> wrote:  
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Richardson, Bruce
> > > > > > Sent: Thursday, March 30, 2017 1:23 PM
> > > > > > To: Olivier Matz <olivier.m...@6wind.com>
> > > > > > Cc: dev@dpdk.org; Ananyev, Konstantin 
> > > > > > <konstantin.anan...@intel.com>; m...@smartsharesystems.com; 
> > > > > > Chilikin, Andrey
> > > > > > <andrey.chili...@intel.com>; jblu...@infradead.org; 
> > > > > > nelio.laranje...@6wind.com; arybche...@solarflare.com
> > > > > > Subject: Re: [dpdk-dev] [PATCH 0/9] mbuf: structure reorganization
> > > > > >
> > > > > > On Thu, Mar 30, 2017 at 02:02:36PM +0200, Olivier Matz wrote:    
> > > > > > > On Thu, 30 Mar 2017 10:31:08 +0100, Bruce Richardson 
> > > > > > > <bruce.richard...@intel.com> wrote:    
> > > > > > > > On Wed, Mar 29, 2017 at 09:09:23PM +0100, Bruce Richardson 
> > > > > > > > wrote:    
> > > > > > > > > On Wed, Mar 29, 2017 at 05:56:29PM +0200, Olivier Matz wrote: 
> > > > > > > > >    
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > Does anyone have any other comment on this series?
> > > > > > > > > > Can it be applied?
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > Olivier
> > > > > > > > > >    
> > > > > > > > >
> > > > > > > > > I assume all driver maintainers have done performance 
> > > > > > > > > analysis to check
> > > > > > > > > for regressions. Perhaps they can confirm this is the case.
> > > > > > > > >
> > > > > > > > >       /Bruce    
> > > > > > > > In the absence of anyone else reporting performance numbers 
> > > > > > > > with this
> > > > > > > > patchset, I ran a single-thread testpmd test using 2 x 40G 
> > > > > > > > ports (i40e)
> > > > > > > > driver. With RX & TX descriptor ring sizes of 512 or above, I'm 
> > > > > > > > seeing a
> > > > > > > > fairly noticeable performance drop. I still need to dig in more, 
> > > > > > > > e.g. do
> > > > > > > > an RFC2544 zero-loss test, and also bisect the patchset to see 
> > > > > > > > what
> > > > > > > > parts may be causing the problem.
> > > > > > > >
> > > > > > > > Has anyone else tried any other drivers or systems to see what 
> > > > > > > > the perf
> > > > > > > > impact of this set may be?    
> > > > > > >
> > > > > > > I did, of course. I didn't see any noticeable performance drop on
> > > > > > > ixgbe (4 NICs, one port per NIC, 1 core). I can replay the test 
> > > > > > > with
> > > > > > > current version.
> > > > > > >    
> > > > > > I had no doubt you did some perf testing! :-)
> > > > > >
> > > > > > Perhaps the regression I see is limited to i40e driver. I've 
> > > > > > confirmed I
> > > > > > still see it with that driver in zero-loss tests, so next step is 
> > > > > > to try
> > > > > > and localise what change in the patchset is causing it.
> > > > > >
> > > > > > Ideally, though, I think we should see acks or other comments from
> > > > > > driver maintainers at least confirming that they have tested. You 
> > > > > > cannot
> > > > > > be held responsible for testing every DPDK driver before you submit 
> > > > > > work
> > > > > > like this.
> > > > > >    
> > > > >
> > > > > Unfortunately I also see a regression.
> > > > > Did a quick flood test on a 2.8 GHz IVB with 4x10Gb.
> > > > 
> > > > Sorry, forgot to mention - it is on ixgbe.
> > > > So it doesn't look like i40e specific.
> > > >     
> > > > > Observed a drop even with default testpmd RXD/TXD numbers (128/512):
> > > > > from 50.8 Mpps down to 47.8 Mpps.
> > > > > From what I am seeing, the particular patch causing it is:
> > > > > [dpdk-dev,3/9] mbuf: set mbuf fields while in pool
> > > > >
> > > > > cc version 5.3.1 20160406 (Red Hat 5.3.1-6) (GCC)
> > > > > cmdline:
> > > > > ./dpdk.org-1705-mbuf1/x86_64-native-linuxapp-gcc/app/testpmd  
> > > > > --lcores='7,8'  -n 4 --socket-mem='1024,0'  -w 04:00.1 -w 07:00.1 -w
> > > > > 0b:00.1 -w 0e:00.1 -- -i
> > > > >    
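Since patch 3/9 is pointed at above, a quick recap of what it changes:
mbufs now sit in the pool with their general-purpose fields already
initialized (refcnt == 1, next == NULL, nb_segs == 1), so the allocation
path no longer has to write them; in exchange, the free path must restore
that invariant before returning the mbuf to the pool. A simplified sketch
of the allocation side (illustration only, not the actual code;
sketch_alloc is a name I made up):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Illustration of the "fields set while in pool" idea: allocation is a
 * raw mempool get, without rewriting the fields the pool guarantees. */
static inline struct rte_mbuf *
sketch_alloc(struct rte_mempool *mp)
{
        struct rte_mbuf *m;

        if (rte_mempool_get(mp, (void **)&m) < 0)
                return NULL;
        /* refcnt, next and nb_segs are already valid here: the free path
         * restored them before the mbuf went back to the pool. Only the
         * per-packet fields still need a reset. */
        m->data_off = RTE_PKTMBUF_HEADROOM;
        m->pkt_len = 0;
        m->data_len = 0;
        return m;
}

The cost that used to be paid at every allocation moves to the free path,
which is where the regression discussed here comes from.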
> > > 
> > > After applying the patch below, I got nearly the original numbers
> > > (though not quite) on my box.
> > > dpdk.org mainline:     50.8 Mpps
> > > with Olivier's patch:  47.8 Mpps
> > > with patch below:      50.4 Mpps
> > > What I tried to do in it: avoid unnecessary updates of the mbuf inside
> > > rte_pktmbuf_prefree_seg().
> > > For one segment per packet it seems to help.
> > > Though so far I didn't try it on i40e and didn't do any testing for the
> > > multi-seg scenario.
> > > Konstantin  
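
For anyone who has not looked at Konstantin's patch yet: as I understand
it, the trick is to write next/nb_segs back in rte_pktmbuf_prefree_seg()
only when the single-segment invariant was actually broken, so the common
one-segment case frees the mbuf without any extra store. A rough sketch of
the idea (simplified, not the exact patch; sketch_prefree_seg is my name
for it):

#include <rte_branch_prediction.h>
#include <rte_mbuf.h>

/* Sketch: restore the pool invariants (next == NULL, nb_segs == 1) only
 * when needed, so an mbuf that already satisfies them goes back to the
 * pool without its cache line being dirtied again. */
static inline struct rte_mbuf *
sketch_prefree_seg(struct rte_mbuf *m)
{
        if (likely(rte_mbuf_refcnt_read(m) == 1)) {
                if (!RTE_MBUF_DIRECT(m))
                        rte_pktmbuf_detach(m);

                /* conditional instead of unconditional writes */
                if (m->next != NULL) {
                        m->next = NULL;
                        m->nb_segs = 1;
                }
                return m;
        } else if (rte_mbuf_refcnt_update(m, -1) == 0) {
                if (!RTE_MBUF_DIRECT(m))
                        rte_pktmbuf_detach(m);

                if (m->next != NULL) {
                        m->next = NULL;
                        m->nb_segs = 1;
                }
                rte_mbuf_refcnt_set(m, 1);
                return m;
        }
        return NULL;
}

For single-segment TX the inner test almost never fires, which would
explain why the stores disappear from the hot path there.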
> > 
> > I replayed my tests, and I can also see a performance loss with 1c/1t
> > (ixgbe), though not of the same magnitude. Here is what I have in Mpps:
> > 
> > 1c/1t   1c/2t
> > 53.3    58.7     current
> > 52.1    58.8     original patchset
> > 53.3    58.8     removed patches 3 and 9
> > 53.1    58.7     with Konstantin's patch
> > 
> > So we have 2 options here:
> > 
> > 1/ integrate Konstantin's patch in the patchset (thank you, by the way)
> > 2/ remove patch 3, and keep it for later until we have something that
> >    really has no impact
> > 
> > I'd prefer 1/, knowing that the difference is really small in terms
> > of cycles per packet.
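
To put rough numbers on "really small" above: cycles/packet is
core_freq / rate. Assuming a ~3 GHz core on my box (I did not note the
exact clock, so take this as an estimate), 53.3 Mpps is ~56.3
cycles/packet and 53.1 Mpps is ~56.5, i.e. about 0.2 cycle per packet
with Konstantin's patch. On his 2.8 GHz box, 50.8 -> 50.4 Mpps is ~0.4
cycle per packet, versus ~3.5 for the original patchset (47.8 Mpps).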
> > 
> >   
> 1 is certainly the more attractive option. However, I think we can
> afford to spend a little more time looking at this before we decide.
> I'll try to check the perf numbers I get on i40e with
> Konstantin's patch today. We also need to double-check the other
> possible issues he reported in his other emails. While I don't want this
> patchset held up for a long time, I think an extra 24-48 hours is
> probably needed on it.
> 

Yes, now that we have the "test momentum", let's try not to lose it ;)

I'm guilty of having missed the performance loss, but honestly,
I'm a bit sad that nobody tried this patchset before (it has
been available for more than 2 months), knowing it touches probably one
of the most critical parts of dpdk. I think we need to do better next
time.

Anyway, thank you for your tests and feedback now.

Olivier
