If there are unconsumed requests in the ring but not enough free pending slots, the NAPI instance deschedules itself. The frontend won't send any more interrupts in this case, so it is the responsibility of whoever releases the pending slots to reschedule the NAPI instance. Originally this was done in the zerocopy callback, but it is better to do it at the end of the dealloc thread's action: otherwise there is a risk that the NAPI instance just deschedules itself again because the dealloc thread hasn't been able to release any used slot yet. However, as there are a lot of pending packets in that situation, NAPI will be scheduled again soon, and it is very unlikely that the dealloc thread cannot release enough slots in the meantime.
Signed-off-by: Zoltan Kiss <zoltan.k...@citrix.com>
---
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index eae9724..07c9677 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1516,13 +1516,6 @@ void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
 	wake_up(&vif->dealloc_wq);
 	spin_unlock_irqrestore(&vif->callback_lock, flags);
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx) &&
-	    xenvif_tx_pending_slots_available(vif)) {
-		local_bh_disable();
-		napi_schedule(&vif->napi);
-		local_bh_enable();
-	}
-
 	if (likely(zerocopy_success))
 		vif->tx_zerocopy_success++;
 	else
@@ -1594,6 +1587,13 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
 	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
 		xenvif_idx_release(vif, pending_idx_release[i],
 				   XEN_NETIF_RSP_OKAY);
+
+	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx) &&
+	    xenvif_tx_pending_slots_available(vif)) {
+		local_bh_disable();
+		napi_schedule(&vif->napi);
+		local_bh_enable();
+	}
 }
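For context, here is a rough, simplified sketch of the poll-side counterpart that this reschedule pairs with. It is not the exact upstream code; the body of xenvif_poll() and the exact signature of xenvif_tx_action() are assumptions here, kept only detailed enough to show why the dealloc thread must kick NAPI again:

static int xenvif_poll(struct napi_struct *napi, int budget)
{
	struct xenvif *vif = container_of(napi, struct xenvif, napi);
	int work_done;

	/* Consume and process up to 'budget' TX requests from the ring. */
	work_done = xenvif_tx_action(vif, budget);

	if (work_done < budget) {
		/*
		 * Either the ring is empty or there are not enough free
		 * pending slots to accept more requests.  Deschedule; in
		 * the latter case the frontend sends no further interrupt,
		 * so the dealloc thread has to napi_schedule() us again
		 * once it has released slots - which is what the hunk
		 * added to xenvif_tx_dealloc_action() above does.
		 */
		napi_complete(napi);
	}

	return work_done;
}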