On 2021/1/9 10:49 AM, Charlie Somerville wrote:
This patch series introduces a new flag XDP_FLAGS_NO_TX which prevents
the allocation of additional send queues for XDP programs.


This part I don't understand. Is such a flag a must? I think the answer is probably not.

Why not simply do:

1) if we had sufficient TX queues, use dedicated TX queues for XDP_TX
2) if we don't, simply synchronize through a spin_lock [1]

Thanks

[1] https://www.spinics.net/lists/bpf/msg32587.html
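The suggested fallback can be modeled roughly as follows. This is an illustrative sketch, not code from the virtio_net driver: the structure and names (dev_model, xdp_txq_for_cpu, needs_lock) are hypothetical, and it only captures the queue-selection policy, not the actual locking.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the suggested policy. */
struct dev_model {
	int n_cpus; /* online CPUs */
	int n_txq;  /* TX queues the device actually provides */
};

/*
 * Pick the TX queue a given CPU should use for XDP_TX, and report
 * whether the caller must serialize with a spinlock because the
 * queue is shared between CPUs.
 */
static int xdp_txq_for_cpu(const struct dev_model *d, int cpu,
			   bool *needs_lock)
{
	if (d->n_txq >= d->n_cpus) {
		/* Enough queues: each CPU owns one, lock-free. */
		*needs_lock = false;
		return cpu;
	}
	/* Too few queues: share them round-robin, take a lock. */
	*needs_lock = true;
	return cpu % d->n_txq;
}
```

With 8 CPUs and 8 queues every CPU keeps a dedicated lock-free queue; with 8 CPUs and 4 queues, CPUs 2 and 6 would share queue 2 under a lock.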



Included in this patch series is an implementation of XDP_FLAGS_NO_TX
for the virtio_net driver. This flag is intended to be advisory - not
all drivers must implement support for it.

Many virtualised environments only provide enough virtio_net send queues
for the number of processors allocated to the VM:

# nproc
8
# ethtool --show-channels ens3
Channel parameters for ens3:
Pre-set maximums:
RX:     0
TX:     0
Other:      0
Combined:   8

In this configuration XDP is unusable because the virtio_net driver
always tries to allocate an extra send queue for each processor - even
if the XDP program never uses the XDP_TX functionality.

While XDP_TX is still unavailable in these environments, this new flag
expands the set of XDP programs that can be used.
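The queue accounting the flag is meant to relax can be sketched like this. This is a simplified model, not the driver's actual code: the helper name, the flag bit value, and the one-extra-queue-per-CPU arithmetic are illustrative assumptions.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative bit only - not the real UAPI value of the flag. */
#define XDP_FLAGS_NO_TX_MODEL (1U << 5)

/*
 * Model of the TX queue demand when attaching an XDP program:
 * one queue per CPU for normal traffic, plus (without the new
 * flag) one extra per-CPU queue reserved for XDP_TX.
 */
static int required_tx_queues(int n_cpus, bool xdp_attached,
			      unsigned int flags)
{
	int n = n_cpus; /* normal traffic: one TX queue per CPU */

	if (xdp_attached && !(flags & XDP_FLAGS_NO_TX_MODEL))
		n += n_cpus; /* dedicated XDP_TX queue per CPU */

	return n;
}
```

On the 8-CPU guest above, which exposes only 8 combined channels, attaching a program today demands 16 TX queues and fails; with the flag set the demand stays at 8 and the attach can succeed, at the cost of XDP_TX being unavailable.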

This is my first contribution to the kernel, so apologies if I've sent
this to the wrong list. I have tried to cc relevant maintainers but
it's possible I may have missed some people. I'm looking forward to
receiving feedback on this change.

Charlie Somerville (2):
   xdp: Add XDP_FLAGS_NO_TX flag
   virtio_net: Implement XDP_FLAGS_NO_TX support

  drivers/net/virtio_net.c     | 17 +++++++++++++----
  include/uapi/linux/if_link.h |  5 ++++-
  2 files changed, 17 insertions(+), 5 deletions(-)
