On Tue, 2019-04-02 at 09:06 +0200, Thomas Monjalon wrote:
> 02/04/2019 03:03, Jerin Jacob Kollanukkaran:
> > On Mon, 2019-04-01 at 22:53 +0200, Thomas Monjalon wrote:
> > > 01/04/2019 22:25, Ferruh Yigit:
> > > > On 3/31/2019 2:14 PM, Pavan Nikhilesh Bhagavatula wrote:
> > > > > From: Pavan Nikhilesh <pbhagavat...@marvell.com>
> > > > > 
> > > > > Optimize testpmd txonly mode by:
> > > > > 1. Moving the per-packet ethernet header copy above the loop.
> > > > > 2. Using bulk ops for allocating segments instead of having an
> > > > >    inner loop for every segment.
> > > > >
> > > > > Also, move the packet prepare logic into a separate function so
> > > > > that it can be reused later.
> > > > > 
> > > > > Signed-off-by: Pavan Nikhilesh <pbhagavat...@marvell.com>
> > > > > ---
> > > > >  v5 Changes:
> > > > >  - Remove unnecessary change to struct rte_port *txp (movement).
> > > > >    (Bernard)
> > > > >
> > > > >  v4 Changes:
> > > > >  - Fix packet len calculation.
> > > > >
> > > > >  v3 Changes:
> > > > >  - Split the patches for easier review. (Thomas)
> > > > >  - Remove unnecessary assignments to 0. (Bernard)
> > > > >
> > > > >  v2 Changes:
> > > > >  - Use bulk ops for fetching segments. (Andrew Rybchenko)
> > > > >  - Fallback to rte_mbuf_raw_alloc if bulk get fails; see the
> > > > >    sketch after this changelog. (Andrew Rybchenko)
> > > > >  - Fix mbufs not being freed when there are no more mbufs
> > > > >    available for segments. (Andrew Rybchenko)
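
For reference, here is a minimal sketch of the bulk-get-with-fallback
pattern described above. It only relies on the rte_mempool_get_bulk(),
rte_mbuf_raw_alloc() and rte_pktmbuf_free() APIs; the helper name
alloc_pkt_segs() and its signature are illustrative, not the actual
testpmd code:

#include <rte_mbuf.h>
#include <rte_mempool.h>

/*
 * Illustrative helper (not the actual testpmd code): fetch all segment
 * mbufs with one bulk get and fall back to per-segment raw allocation
 * if the bulk get fails, freeing everything on error so nothing leaks.
 */
static int
alloc_pkt_segs(struct rte_mempool *mp, struct rte_mbuf **segs,
	       unsigned int nb_segs)
{
	unsigned int i;

	/* One bulk call replaces the per-segment inner loop. */
	if (rte_mempool_get_bulk(mp, (void **)segs, nb_segs) == 0)
		return 0;

	/* Bulk get failed: fall back to allocating one mbuf at a time. */
	for (i = 0; i < nb_segs; i++) {
		segs[i] = rte_mbuf_raw_alloc(mp);
		if (segs[i] == NULL) {
			/* Free the mbufs already taken before bailing out. */
			while (i-- > 0)
				rte_pktmbuf_free(segs[i]);
			return -1;
		}
	}
	return 0;
}

The single bulk call avoids one mempool get per segment per packet, and
the error-path free corresponds to the v2 fix for mbufs not being freed
when no more mbufs are available for segments.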
> > > > 
> > > > Hi Thomas, Shahafs,
> > > > 
> > > > I guess there was a performance issue on Mellanox with this
> > > > patch. I assume it is still valid, since this version only has
> > > > some cosmetic changes, but can you please confirm?
> > > 
> > > We will check it.
> > > 
> > > > And what is the next step? Can you provide some info to Pavan to
> > > > solve the issue, or perhaps even better, a fix?
> > > 
> > > Looking at the first patch, there are still 3 changes merged
> > > together. Why not split it even further?
> > 
> > Splitting it further is not an issue, but we should not later start a
> > thread about squashing the patches back together. What would be
> > interesting to know is whether there is any performance degradation
> > with the Mellanox NIC, and if so, why? Based on that, we can craft the
> > patch as you need.
> 
> Regarding the Mellanox degradation, we need to check whether it comes
> from this patch.

Please check.

> 
> Not related to the performance degradation: it is good practice to
> split a rework into its logical changes.

Yes, no disagreement here. What I would like to emphasize is that it is
critical to know whether there is any degradation, since this patch has
been blocked on the claim that it degrades performance with the Mellanox
NIC.

It is trivial to split the patch into N more logical ones; the former
point (confirming the degradation) is the complex one, and it has a
dependency.



