Using tcpdump (or any other af_packet user) on a busy host can have
catastrophic consequences, because suddenly, potentially all CPUs
are spinning on a contended spinlock.

Both packet_rcv() and tpacket_rcv() grab the spinlock, only to
eventually find there is no room for an additional packet.

This patch series aligns packet_rcv() and tpacket_rcv() so that both
check whether the queue is full before grabbing the spinlock.
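
For illustration, here is a minimal sketch of the shortcut in kernel C;
the helper name packet_rcv_would_drop() and the exact room test are
assumptions made for this cover letter, not the code in the patches:

	/* Hypothetical helper: peek at the receive queue state without
	 * taking sk_receive_queue.lock. If the receive buffer is already
	 * full, account the drop on the new atomic counter and let the
	 * caller bail out before touching the lock.
	 */
	static bool packet_rcv_would_drop(struct sock *sk, atomic_t *tp_drops)
	{
		if (atomic_read(&sk->sk_rmem_alloc) >= (unsigned int)sk->sk_rcvbuf) {
			atomic_inc(tp_drops);
			return true;	/* queue full: skip the spinlock */
		}
		return false;		/* room available: take the locked path */
	}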

If the queue is full, they both increment a new atomic counter
placed on a separate cache line to let readers drain the queue faster.
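
A sketch of the counter placement (tp_drops is the field name used in
the patch titles below; the alignment annotation is an assumption about
how the separate cache line is obtained):

	struct packet_sock {
		/* ... existing fields ... */

		/* Drop counter on its own cache line, so writers that
		 * increment it do not dirty the lines used by readers
		 * draining the queue.
		 */
		atomic_t	tp_drops ____cacheline_aligned_in_smp;
	};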

There is still false sharing on this new atomic counter;
we might make it per-CPU in the future if there is interest.

Eric Dumazet (8):
  net/packet: constify __packet_get_status() argument
  net/packet: constify packet_lookup_frame() and __tpacket_has_room()
  net/packet: constify prb_lookup_block() and __tpacket_v3_has_room()
  net/packet: constify __packet_rcv_has_room()
  net/packet: make tp_drops atomic
  net/packet: implement shortcut in tpacket_rcv()
  net/packet: remove locking from packet_rcv_has_room()
  net/packet: introduce packet_rcv_try_clear_pressure() helper

 net/packet/af_packet.c | 96 ++++++++++++++++++++++++------------------
 net/packet/internal.h  |  1 +
 2 files changed, 56 insertions(+), 41 deletions(-)

-- 
2.22.0.rc2.383.gf4fbbf30c2-goog
