On Sun, 2017-02-26 at 09:40 -0800, Eric Dumazet wrote:
> Actually we could use an additional bit for that, that the driver would
> set even if NAPI_STATE_SCHED could not be grabbed.
Just to be clear :
Drivers would require no change, this would be done in
existing helpers.
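
For concreteness, a minimal sketch of how such a bit could be folded into
napi_schedule_prep() without touching drivers (the NAPIF_STATE_MISSED name
and the exact cmpxchg loop are assumptions based on this proposal, not a
quote of any final patch):

	/* Sketch only: record a "missed" event when NAPI_STATE_SCHED is
	 * already owned by someone else, instead of silently dropping
	 * the schedule request. NAPIF_STATE_MISSED is a hypothetical
	 * mask here.
	 */
	bool napi_schedule_prep(struct napi_struct *n)
	{
		unsigned long val, new;

		do {
			val = READ_ONCE(n->state);
			if (unlikely(val & NAPIF_STATE_DISABLE))
				return false;
			new = val | NAPIF_STATE_SCHED;

			/* SCHED already set: someone is polling right
			 * now. Flag the miss so napi_complete_done()
			 * can reschedule instead of clearing SCHED and
			 * losing the event.
			 */
			if (val & NAPIF_STATE_SCHED)
				new |= NAPIF_STATE_MISSED;
		} while (cmpxchg(&n->state, val, new) != val);

		return !(val & NAPIF_STATE_SCHED);
	}

napi_complete_done() would then test-and-clear the MISSED bit before
dropping SCHED, and __napi_schedule() again if it was set, so drivers keep
calling the same helpers they do today.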
On Sun, 2017-02-26 at 09:34 -0800, Eric Dumazet wrote:
> I do not believe this bug is mlx4 specific.
>
> Anything doing the following while hard IRQs are not masked :
>
> local_bh_disable();
> napi_reschedule(&priv->rx_cq[ring]->napi);
> local_bh_enable();
>
> Like in mlx4_en_recover_from_oom()
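
To make the race concrete, here is one interleaving that can strand packets
when the pattern above runs with hard IRQs unmasked (illustrative timeline,
not a captured trace):

	CPU A (process context, BH off)      CPU B (device hard IRQ)
	-------------------------------      -----------------------
	napi_reschedule()
	  napi_schedule_prep() grabs
	  NAPI_STATE_SCHED; poll runs
	poll drains the ring, sees
	no more work
	                                     IRQ fires; napi_schedule_prep()
	                                     finds SCHED already set and
	                                     returns false, so the event
	                                     is swallowed
	napi_complete_done() clears
	SCHED and re-arms the IRQ

The freshly signaled packets now sit in the RX ring with nobody scheduled
to poll them until a later event kicks the queue, which matches the ~200 ms
delays described below.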
From: Eric Dumazet
While playing with hardware timestamping of RX packets, I found
that some packets were received by TCP stack with a ~200 ms delay...
Since the timestamp was provided by the NIC, and my probe was added
in tcp_v4_rcv() while in BH handler, I was confident it was not
a sender issue, or a drop in the network.
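
Such a probe can be as simple as comparing the NIC-provided RX timestamp
with the wall clock when the packet reaches tcp_v4_rcv(). A hypothetical
helper (probe_rx_delay() is made up for illustration; it assumes the NIC's
PHC is synchronized to system time):

	#include <linux/ktime.h>
	#include <linux/printk.h>
	#include <linux/skbuff.h>

	/* Log how long a packet sat between NIC RX (hardware timestamp)
	 * and TCP processing. Call from tcp_v4_rcv() in BH context.
	 */
	static void probe_rx_delay(const struct sk_buff *skb)
	{
		ktime_t hw = skb_hwtstamps(skb)->hwtstamp;

		if (ktime_to_ns(hw))
			pr_info("NIC -> tcp_v4_rcv() delay: %lld ms\n",
				ktime_ms_delta(ktime_get_real(), hw));
	}

A delay far above the expected IRQ plus softirq latency, as observed here,
points at the receiver rather than the sender or the network.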