XDP for idpf is currently 5 chapters:
* convert Rx to libeth (this);
* convert Tx and stats to libeth;
* generic XDP and XSk code changes, libeth_xdp;
* actual XDP for idpf via libeth_xdp;
* XSk for idpf (^).
Part I does the following:
* splits &idpf_queue into 4 (RQ, SQ, FQ, CQ) and puts them on a diet;
* ensures optimal cacheline placement, strictly asserts CL sizes
  (a standalone sketch of the idea is included after the changelogs below);
* moves currently unused/dead singleq mode out of line;
* reuses libeth's Rx ptype definitions and helpers;
* uses libeth's Rx buffer management for both header and payload;
* eliminates memcpy()s and coherent DMA uses on hotpath, uses
  napi_build_skb() instead of in-place short skb allocation.

Most idpf patches, except for the queue split, remove more lines than
they add. Expect far better memory utilization and +5-8% on Rx depending
on the case (+17% on skb XDP_DROP :>).

Alexander Lobakin (14):
  cache: add __cacheline_group_{begin,end}_aligned() (+ couple more)
  page_pool: use __cacheline_group_{begin,end}_aligned()
  libeth: add cacheline / struct layout assertion helpers
  idpf: stop using macros for accessing queue descriptors
  idpf: split &idpf_queue into 4 strictly-typed queue structures
  idpf: avoid bloating &idpf_q_vector with big %NR_CPUS
  idpf: strictly assert cachelines of queue and queue vector structures
  idpf: merge singleq and splitq &net_device_ops
  idpf: compile singleq code only under default-n CONFIG_IDPF_SINGLEQ
  idpf: reuse libeth's definitions of parsed ptype structures
  idpf: remove legacy Page Pool Ethtool stats
  libeth: support different types of buffers for Rx
  idpf: convert header split mode to libeth + napi_build_skb()
  idpf: use libeth Rx buffer management for payload buffer

 drivers/net/ethernet/intel/Kconfig           |   13 +-
 drivers/net/ethernet/intel/idpf/Kconfig      |   26 +
 drivers/net/ethernet/intel/idpf/Makefile     |    3 +-
 include/net/page_pool/types.h                |   22 +-
 drivers/net/ethernet/intel/idpf/idpf.h       |   11 +-
 .../net/ethernet/intel/idpf/idpf_lan_txrx.h  |    2 +
 drivers/net/ethernet/intel/idpf/idpf_txrx.h  |  734 +++++----
 include/linux/cache.h                        |   59 +
 include/net/libeth/cache.h                   |   66 +
 include/net/libeth/rx.h                      |   19 +
 .../net/ethernet/intel/idpf/idpf_ethtool.c   |  152 +-
 drivers/net/ethernet/intel/idpf/idpf_lib.c   |   88 +-
 drivers/net/ethernet/intel/idpf/idpf_main.c  |    1 +
 .../ethernet/intel/idpf/idpf_singleq_txrx.c  |  306 ++--
 drivers/net/ethernet/intel/idpf/idpf_txrx.c  | 1412 +++++++++--------
 .../net/ethernet/intel/idpf/idpf_virtchnl.c  |  178 ++-
 drivers/net/ethernet/intel/libeth/rx.c       |  132 +-
 net/core/page_pool.c                         |    3 +-
 18 files changed, 1824 insertions(+), 1403 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/idpf/Kconfig
 create mode 100644 include/net/libeth/cache.h

---

From v1[0]:
* *: pick Reviewed-bys from Jake;
* 01: new, add generic __cacheline_group_{begin,end}_aligned() and a
  couple more cache macros;
* 02: new, make use of new macros from 01;
* 03: use macros from 01 (no more struct_group()), leave only
  aggressive assertions here;
* 07: adjust to the changes made in 01 and 03; fix typos in the kdocs;
* 13: fix typos in the commit message (Jakub);
* 14: fix possible unhandled NULL skb (Simon, static checker).

From RFC[1]:
* *: add kdocs where needed and fix the existing ones to build cleanly;
  fix minor checkpatch and codespell warnings; add RBs from Przemek;
* 01: fix kdoc script to understand the new libeth_cacheline_group()
  macro; add an additional assert for queue struct alignment;
* 02: pick RB from Mina;
* 06: make idpf_chk_linearize() static as it's now used only in one file;
* 07: rephrase the commit message: HW supports it, but never wants it;
* 08: fix crashes on some configurations (Mina);
* 11: constify the header buffer pointer in idpf_rx_hsplit_wa().
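For readers unfamiliar with the cacheline-group approach referenced
above, here is a minimal standalone sketch of the idea behind patches
01, 03 and 07: zero-sized markers delimit groups of fields inside a hot
structure, and static_assert() pins each group's size so it cannot grow
unnoticed. All macro names, field names and sizes below are invented
for illustration; they are not the actual cache.h / libeth helpers
added by the series, which additionally align and pad the groups to
full cachelines.

/*
 * Illustration only -- NOT the actual kernel macros from this series.
 * Zero-length array markers delimit a "group" of fields; the assert
 * macro checks the group's byte size at build time, so adding a field
 * without consciously updating the expected size breaks the build.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define example_group_begin(grp)	uint8_t __group_begin_##grp[0]
#define example_group_end(grp)		uint8_t __group_end_##grp[0]

#define example_group_assert(type, grp, sz)				\
	static_assert(offsetof(type, __group_end_##grp) -		\
		      offsetof(type, __group_begin_##grp) == (sz),	\
		      #type ": group " #grp " changed size")

/* Hypothetical Rx queue split into read-mostly vs. read-write fields. */
struct example_rx_queue {
	example_group_begin(read_mostly);
	uint64_t	desc_dma;	/* descriptor ring DMA address */
	uint64_t	buf_dma;	/* buffer ring DMA address */
	uint32_t	desc_count;	/* ring length in descriptors */
	uint32_t	buf_size;	/* Rx buffer size */
	example_group_end(read_mostly);

	example_group_begin(read_write);
	uint32_t	next_to_use;
	uint32_t	next_to_clean;
	uint64_t	packets;
	uint64_t	bytes;
	example_group_end(read_write);
} __attribute__((aligned(64)));		/* assume 64-byte cachelines */

/* Each group must stay exactly at its expected size (24 bytes here). */
example_group_assert(struct example_rx_queue, read_mostly, 24);
example_group_assert(struct example_rx_queue, read_write, 24);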
Testing hints: basic Rx regression tests (+ perf and memory usage
before/after if needed).

[0] https://lore.kernel.org/netdev/20240528134846.148890-1-aleksander.loba...@intel.com
[1] https://lore.kernel.org/netdev/20240510152620.2227312-1-aleksander.loba...@intel.com

--
2.45.2