On Fri, 15 Feb 2019 20:38:59 +0100
Thomas Monjalon <tho...@monjalon.net> wrote:

> 15/02/2019 19:42, Ananyev, Konstantin:
> > >>> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of David Marchand
> > >>> I am also for option 2, especially because of this.
> > >>> A driver that refuses a packet for reason X (which is a limitation,
> > >>> an incorrect config, or whatever else that is not a transient
> > >>> condition) but gives it back to the application is a bad driver.
> >   
> > >>Why? What's wrong with leaving it to the upper layer to decide what to
> > >>do with the packets that can't be sent (for one reason or another)?
> >   
> > >How does the upper layer know whether this is a transient state or
> > >something that can't be resolved?
> > 
> > Via rte_errno, for example.  
> 
> rte_errno is not a result per packet.
> I think it is better to "eat" the packet
> as it is done for those transmitted to the HW.
> 
> 

First off, rte_errno doesn't work for a burst API.

IMHO (which matches option 2) all drivers should only increment oerrors for a
packet which could not be transmitted because of a hardware condition (link
down, etc.) or an mbuf whose parameters cannot be handled. In either case, the
packet must be dropped by the driver and oerrors incremented. The driver should
also maintain internal stats (available via xstats) for any conditions like
this.

When no tx descriptors are available, the driver must not increment any counter
and must return partial success to the application. If the application then
wants to apply back pressure or drop the packets, it should keep its own
statistics.

This is close to the original model in the Intel drivers, and matches what BSD
and Linux do at the OS level for drivers. Like many driver assumptions, the
corner cases were not explicitly documented, and newer drivers probably don't
follow the same pattern.
