On Tue, 2018-09-18 at 12:19 -0400, Song Liu wrote:
> > On Sep 18, 2018, at 6:45 AM, Eric Dumazet <eduma...@google.com> wrote:
> > 
> > On Tue, Sep 18, 2018 at 1:41 AM Song Liu <songliubrav...@fb.com> wrote:
> > > 
> > > We are debugging an issue where netconsole messages trigger a pegged
> > > softirq (ksoftirqd taking 100% CPU for many seconds). We found this
> > > issue in production with both bnxt and ixgbe, on a 4.11 based kernel.
> > > It is easily reproducible with ixgbe on 4.11, and on the latest
> > > net/net-next (see [1] for more detail).
> > > 
> > > After debugging for some time, we found that this issue is likely
> > > related to 39e6c8208d7b ("net: solve a NAPI race"). After reverting
> > > this commit, the steps described in [1] cannot reproduce the issue on
> > > ixgbe. Reverting this commit also reduces the chances we hit the
> > > issue with bnxt (it still happens, at a lower rate).
> > > 
> > > I tried to fix this issue with a relaxed variant (or older version)
> > > of napi_schedule_prep() in netpoll, just like the one in
> > > napi_watchdog(). However, my tests do not always go as expected.
> > > 
> > > Please share your comments/suggestions on which direction we should
> > > take to fix this.
> > > 
> > > Thanks in advance!
> > > Song
> > > 
> > > [1] https://www.spinics.net/lists/netdev/msg522328.html
> > 
> > You have not traced ixgbe to understand why the driver hits
> > "clean_complete=false" all the time?
> 
> The trace showed that we got "clean_complete=false" because
> ixgbe_clean_rx_irq() used all of the budget (64). It feels like the
> driver is tricked into processing old data on the rx_ring one more
> time.
> 
> Have you seen a similar issue?
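(For reference, the relaxed scheduling test mentioned above, the one
napi_watchdog() uses instead of napi_schedule_prep(), looks roughly like
the sketch below. This is paraphrased from memory rather than copied
from the tree, and the wrapper function name is made up for
illustration.)

static bool netpoll_napi_schedule_relaxed(struct napi_struct *napi)
{
	/* Unlike napi_schedule_prep(), do not set NAPI_STATE_MISSED when
	 * the poll is already scheduled, since there is no device IRQ
	 * whose work would otherwise be lost.
	 */
	if (!napi_disable_pending(napi) &&
	    !test_and_set_bit(NAPI_STATE_SCHED, &napi->state)) {
		__napi_schedule_irqoff(napi);
		return true;
	}
	return false;
}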
A quick reading of the code suggests that exhausting the budget every
time means polling cannot keep up with the rate of incoming packets.
That should not be a surprise, given that polling appears to happen on
just one CPU, while interrupt-driven packet delivery was fanned out
across a larger number of CPUs.

Does the NAPI code have any mechanism to periodically force a return to
IRQ mode, since multiple CPUs in IRQ mode can keep up with the packets
better than a single CPU in polling mode? Alternatively, is NAPI with
multi-queue network adapters supposed to be polling on multiple CPUs,
but simply failing to do so in this case?
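For context, this is the rough shape of the poll/budget contract I am
asking about, based on my reading of a typical driver's poll handler.
It is a simplified sketch with made-up helper names, not the actual
ixgbe code:

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	/* example_clean_rx_irq() stands in for the driver's RX cleanup
	 * (e.g. ixgbe_clean_rx_irq()), which processes at most 'budget'
	 * (typically 64) packets per call.
	 */
	int work_done = example_clean_rx_irq(napi, budget);

	if (work_done < budget) {
		/* Ring drained: leave polling mode and re-arm the queue
		 * IRQ, so delivery goes back to interrupt mode.
		 */
		if (napi_complete_done(napi, work_done))
			example_enable_queue_irq(napi);
		return work_done;
	}

	/* Budget exhausted ("clean_complete=false"): stay in polling
	 * mode. net_rx_action()/ksoftirqd will invoke this poll again on
	 * the same CPU; as far as I can tell there is no periodic forced
	 * return to IRQ mode from here.
	 */
	return budget;
}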