While BQL bulk dequeue works well for TSO packets, it is
not very efficient as soon as GSO is involved.

On a GSO-only workload (UDP or TCP), this patch series
can save about 8% of CPU cycles on a 40Gbit mlx4 NIC,
by keeping optimal batching and avoiding expensive
doorbells, qdisc requeues and reschedules.

This patch series:

- Adds __netdev_tx_sent_queue() so that drivers
  can implement efficient BQL and xmit_more support
  (a usage sketch follows this list).

- Implements a workaround in dev_hard_start_xmit()
  for drivers not using __netdev_tx_sent_queue().

- Changes mlx4 to use __netdev_tx_sent_queue().
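
As a rough illustration (not code taken from the patches themselves),
a driver's xmit path could use the new helper as sketched below.  The
foo_* names, struct foo_priv and priv->tx_queue are made up for the
example, and skb->len stands in for whatever byte count the driver
accounts to BQL; __netdev_tx_sent_queue(), netif_tx_stop_queue(),
netdev_priv() and skb->xmit_more are the real kernel interfaces
involved.

#include <linux/netdevice.h>

/* Illustrative per-device state; a real driver has much more here. */
struct foo_priv {
	struct netdev_queue *tx_queue;	/* e.g. netdev_get_tx_queue(dev, 0) */
};

/* Placeholder for the driver's "TX ring (almost) full" test. */
static bool foo_tx_ring_full(struct foo_priv *priv)
{
	return false;
}

/* Placeholder for writing the NIC doorbell register. */
static void foo_ring_doorbell(struct foo_priv *priv)
{
}

static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);
	bool stop_queue, send_doorbell;

	/* ... descriptor setup for @skb elided ... */

	stop_queue = foo_tx_ring_full(priv);
	if (stop_queue)
		netif_tx_stop_queue(priv->tx_queue);

	/*
	 * BQL byte accounting.  The helper returns true when the doorbell
	 * must be rung now: either the stack did not set xmit_more (or we
	 * are stopping the queue ourselves), or the queue is already
	 * stopped, so no later packet can be counted on to flush the
	 * pending descriptors.
	 */
	send_doorbell = __netdev_tx_sent_queue(priv->tx_queue, skb->len,
					       skb->xmit_more && !stop_queue);
	if (send_doorbell)
		foo_ring_doorbell(priv);

	return NETDEV_TX_OK;
}

The point of the sketch is that the doorbell is skipped only while it
is safe to rely on a later packet (xmit_more) to flush the ring, while
BQL accounting still happens for every packet.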

v2: addressed feedback from Tariq and Willem;
    added __netdev_tx_sent_queue() (Willem's suggestion).


Eric Dumazet (3):
  net: bql: add __netdev_tx_sent_queue()
  net: do not abort bulk send on BQL status
  net/mlx4_en: use __netdev_tx_sent_queue()

 drivers/net/ethernet/mellanox/mlx4/en_tx.c |  6 ++++--
 include/linux/netdevice.h                  | 20 ++++++++++++++++++++
 net/core/dev.c                             |  2 +-
 3 files changed, 25 insertions(+), 3 deletions(-)

-- 
2.19.1.930.g4563a0d9d0-goog
