On 8/3/20 1:58 PM, Viacheslav Ovsiienko wrote:
> The DPDK datapath in the transmit direction is very flexible.
> The applications can build multisegment packets and manage
> almost all data aspects - the memory pools where segments
> are allocated from, the segment lengths, the memory attributes
> like external, registered, etc.
>
> In the receiving direction, the datapath is much less flexible:
> the application can only specify the memory pool to configure
> the receiving queue and nothing more. In order to extend the
> receiving datapath capabilities it is proposed to add new
> fields to the rte_eth_rxconf structure:
>
> struct rte_eth_rxconf {
>     ...
>     uint16_t rx_split_num; /* number of segments to split */
>     uint16_t *rx_split_len; /* array of segment lengths */
>     struct rte_mempool **mp; /* array of segment memory pools */
>     ...
> };
>
> A non-zero value of the rx_split_num field configures the receiving
> queue to split ingress packets into multiple segments placed in
> mbufs allocated from the specified memory pools according to the
> specified lengths. A zero value of rx_split_num preserves backward
> compatibility: the queue is configured in the regular way
> (with single/multiple mbufs of the same data buffer length
> allocated from a single memory pool).
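
If I understand the proposal correctly, an application would fill the new
fields before calling rte_eth_rx_queue_setup(), roughly as in the sketch
below. This is only a sketch against the proposed (not yet existing) fields;
the pool names/sizes, the HDR_LEN/DATA_LEN values and the NULL mb_pool
argument are my assumptions:

/* Sketch only: rx_split_num, rx_split_len and mp are the fields proposed
 * above and do not exist in ethdev today; pool names/sizes and the NULL
 * mb_pool argument are assumptions.
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define HDR_LEN  128U   /* first segment:  packet headers     */
#define DATA_LEN 2048U  /* second segment: rest of the packet */

static int
setup_split_rxq(uint16_t port_id, uint16_t queue_id, unsigned int socket)
{
        static uint16_t seg_len[2] = { HDR_LEN, DATA_LEN };
        static struct rte_mempool *seg_mp[2];
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return -1;
        rxconf = dev_info.default_rxconf;

        /* One pool per segment: small buffers for headers, larger for data. */
        seg_mp[0] = rte_pktmbuf_pool_create("hdr_pool", 8192, 256, 0,
                        RTE_PKTMBUF_HEADROOM + HDR_LEN, socket);
        seg_mp[1] = rte_pktmbuf_pool_create("data_pool", 8192, 256, 0,
                        RTE_PKTMBUF_HEADROOM + DATA_LEN, socket);
        if (seg_mp[0] == NULL || seg_mp[1] == NULL)
                return -1;

        /* Proposed fields: a non-zero rx_split_num enables the split. */
        rxconf.rx_split_num = 2;
        rxconf.rx_split_len = seg_len;
        rxconf.mp = seg_mp;

        /* Unclear what mb_pool should be in this case - NULL? One of seg_mp? */
        return rte_eth_rx_queue_setup(port_id, queue_id, 512, socket,
                        &rxconf, NULL);
}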

From the above description it is not 100% clear how it will
coexist with:
 - existing mb_pool argument of the rte_eth_rx_queue_setup()
 - DEV_RX_OFFLOAD_SCATTER
 - DEV_RX_OFFLOAD_HEADER_SPLIT
How will the application know that the feature is supported? Limitations?
Is it always split by the specified/fixed lengths?
What happens if the actual header length is different?

> The new approach would allow splitting the ingress packets into
> multiple parts pushed to memory with different attributes.
> For example, the packet headers can be pushed to the embedded data
> buffers within mbufs and the application data into external
> buffers attached to mbufs allocated from different memory
> pools. The memory attributes of the split parts may differ as
> well - for example, the application data may be pushed into
> external memory located on a dedicated physical device,
> say a GPU or an NVMe drive. This would improve the flexibility of
> the DPDK receive datapath while preserving compatibility with the
> existing API.
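
For the GPU/NVMe case I guess the payload pool would be created over
external device memory, e.g. with the pinned external buffer pool API added
in 20.05. A rough sketch of what I have in mind; the gpu_va/gpu_iova
arguments and the registration/DMA mapping of that memory are assumptions
left outside the sketch:

/* Rough sketch: mbuf structures live in regular memory, while their data
 * buffers are pinned to the external area described by ext_mem.
 * gpu_va/gpu_iova are placeholders; registering and DMA-mapping that
 * memory is omitted here.
 */
#include <rte_mbuf.h>

#define DATA_LEN 2048U

static struct rte_mempool *
create_ext_payload_pool(void *gpu_va, rte_iova_t gpu_iova, size_t len,
                        int socket)
{
        struct rte_pktmbuf_extmem ext_mem = {
                .buf_ptr  = gpu_va,
                .buf_iova = gpu_iova,
                .buf_len  = len,
                .elt_size = RTE_PKTMBUF_HEADROOM + DATA_LEN,
        };

        /* Pool whose mbuf data buffers are carved from the external area. */
        return rte_pktmbuf_pool_create_extbuf("gpu_data_pool", 8192, 256,
                        0, ext_mem.elt_size, socket, &ext_mem, 1);
}

Such a pool could then be passed as the payload entry of the proposed mp
array from the sketch above.
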
>
> Signed-off-by: Viacheslav Ovsiienko <viachesl...@mellanox.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index ea4cfa7..cd700ae 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -99,6 +99,11 @@ Deprecation Notices
>    In 19.11 PMDs will still update the field even when the offload is not
>    enabled.
>  
> +* ethdev: add new fields to ``rte_eth_rxconf`` to configure the receiving
> +  queues to split ingress packets into multiple segments according to the
> +  specified lengths into the buffers allocated from the specified
> +  memory pools. Backward compatibility with the existing API is preserved.
> +
>  * ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
>    will be deprecated in 20.11 and will be removed in 21.11.
>    Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
