Hi,

> -----Original Message-----
> From: Thomas Monjalon <tho...@monjalon.net>
> Sent: Thursday, July 14, 2022 4:09 PM
> To: Ding, Xuan <xuan.d...@intel.com>
> Cc: andrew.rybche...@oktetlabs.ru; m...@ashroe.eu; dev@dpdk.org;
> step...@networkplumber.org; m...@smartsharesystems.com;
> dev@dpdk.org; Zhang, Qi Z <qi.z.zh...@intel.com>; asek...@marvell.com;
> pbhagavat...@marvell.com; ferruh.yi...@xilinx.com; gr...@u256.net
> Subject: Re: [PATCH] doc: announce header split deprecation
>
> 14/07/2022 07:50, Ding, Xuan:
> > From: Thomas Monjalon <tho...@monjalon.net>
> > > 23/05/2022 16:20, xuan.d...@intel.com:
> > > > From: Xuan Ding <xuan.d...@intel.com>
> > > >
> > > > The RTE_ETH_RX_OFFLOAD_HEADER_SPLIT offload was introduced some
> > > > time ago to substitute the bit-field header_split in struct
> > > > rte_eth_rxmode. It allows enabling the header split offload, with
> > > > the header size controlled using split_hdr_size in the same
> > > > structure.
> > > >
> > > > Right now, no single PMD actually supports
> > > > RTE_ETH_RX_OFFLOAD_HEADER_SPLIT with the above definition. Many
> > > > examples and test apps initialize the field to 0 explicitly. Most
> > > > drivers simply ignore split_hdr_size since the offload is not
> > > > advertised, but some double-check that its value is 0.
> > > >
> > > > So the RTE_ETH_RX_OFFLOAD_HEADER_SPLIT flag and the split_hdr_size
> > > > field will be removed in DPDK 22.11.
> > > >
> > > > Signed-off-by: Xuan Ding <xuan.d...@intel.com>
> > > > ---
> > > >  doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > >  1 file changed, 4 insertions(+)
> > > >
> > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > b/doc/guides/rel_notes/deprecation.rst
> > > > index 4e5b23c53d..b8114f29ed 100644
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > @@ -125,3 +125,7 @@ Deprecation Notices
> > > >    applications should be updated to use the ``dmadev`` library instead,
> > > >    with the underlying HW-functionality being provided by the ``ioat`` or
> > > >    ``idxd`` dma drivers
> > > > +
> > > > +* ethdev: After bit-field header split was removed, the ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
> > > > +  offload and the ``split_hdr_size`` field in structure ``rte_eth_rxmode`` to enable
> > > > +  header split offload are not supported in any PMDs. They will be removed in DPDK 22.11.
> > >
> > > It would have been good to talk about rte_eth_rxseg_split, which is
> > > similar and configured per-queue.
> >
> > Thanks for your suggestion.
> >
> > But I'm a little confused: are you suggesting that I need to involve
> > protocol-based buffer split?
> > As for the deprecation of header split, I had not realized its
> > connection to rte_eth_rxseg_split.
>
> What???
> In old versions of your patch "ethdev: introduce protocol type based header split"
> you wrote:
> "
> A new proto field is introduced in the
> rte_eth_rxseg_split structure reserved field to specify header protocol type.
> With Rx offload flag RTE_ETH_RX_OFFLOAD_HEADER_SPLIT enabled and
> protocol type configured, PMD will split the ingress packets into two separate
> regions.
> "
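For readers skimming the thread, this is roughly what the deprecated port-wide
configuration path looks like. It is only a sketch: the helper name, the 64-byte
header size and the queue counts are invented for illustration, and, as the
quoted commit message says, no PMD actually honors this offload.

#include <string.h>

#include <rte_ethdev.h>

/*
 * Sketch of the configuration the deprecation notice removes:
 * RTE_ETH_RX_OFFLOAD_HEADER_SPLIT plus split_hdr_size in
 * struct rte_eth_rxmode, applied port-wide. Illustration only.
 */
static int
configure_port_with_header_split(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));

	/* Both of these go away in DPDK 22.11. */
	conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_HEADER_SPLIT;
	conf.rxmode.split_hdr_size = 64;	/* header buffer size in bytes */

	/* One Rx queue and one Tx queue, just to keep the sketch short. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}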
It has a long history...

It was corrected in v4 that RTE_ETH_RX_OFFLOAD_HEADER_SPLIT is used to enable
header split offload, with the header size controlled using "split_hdr_size".
But no single PMD actually supports RTE_ETH_RX_OFFLOAD_HEADER_SPLIT for this
purpose, so we finally decided to deprecate this flag.
http://patchwork.dpdk.org/project/dpdk/patch/20220402104109.472078-2-wenxuanx...@intel.com/

In the following series, I use RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT instead. It is
for multi-segment packet split, and it still needs a "proto_hdr" field in
rte_eth_rxmode to configure the split location.

> > Currently there are 2 acks, add more PMD maintainers to help review
> > this deprecation notice for header split, thanks a lot!
>
> I cannot state my feelings strongly enough.

So IMO the deprecation of header split is not related to buffer split, but we
can still clean up the code.

Hope it makes things clearer.

Thanks,
Xuan
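To make the distinction with buffer split concrete, below is a rough sketch of
the per-queue rte_eth_rxseg_split configuration mentioned above. It shows only
the length-based split of the released API; the protocol-based split
("proto_hdr") discussed in this thread is still a proposal at this point. The
helper name, segment lengths, descriptor count and mempools are assumptions
made for illustration, and error handling is omitted.

#include <string.h>

#include <rte_ethdev.h>
#include <rte_mempool.h>

/*
 * Sketch of per-queue buffer split: RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT with an
 * array of rte_eth_rxseg_split descriptions passed through rte_eth_rxconf.
 */
static int
setup_rxq_with_buffer_split(uint16_t port_id, uint16_t queue_id,
			    unsigned int socket_id,
			    struct rte_mempool *hdr_pool,
			    struct rte_mempool *payload_pool)
{
	union rte_eth_rxseg segs[2];
	struct rte_eth_rxconf rxconf;

	memset(segs, 0, sizeof(segs));
	memset(&rxconf, 0, sizeof(rxconf));

	/* First segment: small buffers meant to receive the packet headers. */
	segs[0].split.mp = hdr_pool;
	segs[0].split.length = 128;	/* fixed split point in bytes */
	segs[0].split.offset = 0;

	/* Second segment: the rest of the packet goes into the payload pool. */
	segs[1].split.mp = payload_pool;
	segs[1].split.length = 0;	/* 0: deduce from the pool's buffer size */
	segs[1].split.offset = 0;

	rxconf.rx_seg = segs;
	rxconf.rx_nseg = 2;
	rxconf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;

	/* With rx_seg/rx_nseg set, the single mempool argument must be NULL. */
	return rte_eth_rx_queue_setup(port_id, queue_id, 512, socket_id,
				      &rxconf, NULL);
}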