On Sun, 30 Aug 2020 15:58:57 +0300 Andrew Rybchenko <arybche...@solarflare.com> wrote:
> >>>>> The non-zero value of the rx_split_num field configures the receiving
> >>>>> queue to split ingress packets into multiple segments to mbufs
> >>>>> allocated from various memory pools according to the specified
> >>>>> lengths. A zero value of the rx_split_num field provides backward
> >>>>> compatibility, and the queue should be configured in the regular way
> >>>>> (with single/multiple mbufs of the same data buffer length allocated
> >>>>> from a single memory pool).
> >>>>
> >>>> From the above description it is not 100% clear how it will coexist with:
> >>>> - the existing mb_pool argument of rte_eth_rx_queue_setup()
> >>>> - DEV_RX_OFFLOAD_SCATTER
> >>>
> >>> The DEV_RX_OFFLOAD_SCATTER flag is required to be reported and configured
> >>> for the new feature, to indicate that the application is prepared for
> >>> multi-segment packets.
> >>
> >> I hope it will be mentioned in the feature documentation in the future,
> >> but I'm not 100% sure that it is required. See below.
> >
> > I suppose there is a hierarchy:
> > - the application configures DEV_RX_OFFLOAD_SCATTER on the port and in
> >   this way says: "Hey, driver, I'm ready to handle multi-segment packets."
> >   Readiness in general.
> > - the application configures BUFFER_SPLIT and tells the PMD _HOW_ it wants
> >   to split, in a particular way: "Hey, driver, please, drop ten bytes here,
> >   here and here, and the rest - over there."
>
> My idea is to keep SCATTER and BUFFER_SPLIT independent.
> SCATTER is the ability to build multi-segment packets by taking as many
> mbufs as needed from the main Rx queue mempool.
> BUFFER_SPLIT is support for multiple mempools and splitting received
> packets as specified.

No. Once again, drivers should take whatever the application gives them and
rely on their own logic to choose the best path. Modern CPUs have good branch
predictors, and making the developer do that work is counterproductive.
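
For concreteness, below is a rough sketch of what the application side of
the proposal could look like: two mempools, a per-segment length list, and
the regular queue setup. The rx_split_num / rx_split_len / rx_split_mp
fields are placeholder names mirroring the description quoted above, not a
committed ethdev API, and whether SCATTER must also be enabled is exactly
the open question in this thread.

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /*
     * Sketch only: rx_split_num, rx_split_len and rx_split_mp are
     * placeholders for the proposed rte_eth_rxconf extension discussed
     * in this thread, not existing ethdev fields.
     */
    static int
    setup_split_rxq(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
                    int socket_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;
        struct rte_mempool *hdr_pool, *pay_pool;

        rte_eth_dev_info_get(port_id, &dev_info);
        rxconf = dev_info.default_rxconf;

        /* Small buffers for protocol headers, large buffers for payload. */
        hdr_pool = rte_pktmbuf_pool_create("hdr_pool", 8192, 256, 0,
                        RTE_PKTMBUF_HEADROOM + 128, socket_id);
        pay_pool = rte_pktmbuf_pool_create("pay_pool", 8192, 256, 0,
                        RTE_PKTMBUF_HEADROOM + 2048, socket_id);
        if (hdr_pool == NULL || pay_pool == NULL)
            return -ENOMEM;

        /* Hypothetical split description: the first 128 bytes of each
         * packet go to hdr_pool, the remainder to pay_pool. */
        rxconf.rx_split_num = 2;
        rxconf.rx_split_len[0] = 128;   rxconf.rx_split_mp[0] = hdr_pool;
        rxconf.rx_split_len[1] = 2048;  rxconf.rx_split_mp[1] = pay_pool;

        /* The port would also have DEV_RX_OFFLOAD_SCATTER (and/or the
         * proposed buffer-split offload) enabled at
         * rte_eth_dev_configure() time. */
        return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc, socket_id,
                                      &rxconf, pay_pool);
    }

Whatever the final field names end up being, the point stands: given a
description like this, the driver has everything it needs to pick the right
Rx path on its own.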