> -----Original Message-----
> From: Stephen Hemminger <step...@networkplumber.org>
> Sent: Thursday, August 6, 2020 19:26
> To: Ferruh Yigit <ferruh.yi...@intel.com>
> Cc: Jerin Jacob <jerinjac...@gmail.com>; Slava Ovsiienko
> <viachesl...@mellanox.com>; dpdk-dev <dev@dpdk.org>; Matan Azrad
> <ma...@mellanox.com>; Raslan Darawsheh <rasl...@mellanox.com>;
> Thomas Monjalon <tho...@monjalon.net>; Andrew Rybchenko
> <arybche...@solarflare.com>; Ajit Khaparde
> <ajit.khapa...@broadcom.com>; Maxime Coquelin
> <maxime.coque...@redhat.com>; Olivier Matz <olivier.m...@6wind.com>;
> David Marchand <david.march...@redhat.com>
> Subject: Re: [PATCH] doc: announce changes to ethdev rxconf structure
> 
> On Thu, 6 Aug 2020 16:58:22 +0100
> Ferruh Yigit <ferruh.yi...@intel.com> wrote:
> 
> > On 8/4/2020 2:32 PM, Jerin Jacob wrote:
> > > On Mon, Aug 3, 2020 at 6:36 PM Slava Ovsiienko
> > > <viachesl...@mellanox.com> wrote:
> > >>
> > >> Hi, Jerin,
> > >>
> > >> Thanks for the comment,  please, see below.
> > >>
> > >>> -----Original Message-----
> > >>> From: Jerin Jacob <jerinjac...@gmail.com>
> > >>> Sent: Monday, August 3, 2020 14:57
> > >>> To: Slava Ovsiienko <viachesl...@mellanox.com>
> > >>> Cc: dpdk-dev <dev@dpdk.org>; Matan Azrad <ma...@mellanox.com>;
> > >>> Raslan Darawsheh <rasl...@mellanox.com>; Thomas Monjalon
> > >>> <tho...@monjalon.net>; Ferruh Yigit <ferruh.yi...@intel.com>;
> > >>> Stephen Hemminger <step...@networkplumber.org>; Andrew
> Rybchenko
> > >>> <arybche...@solarflare.com>; Ajit Khaparde
> > >>> <ajit.khapa...@broadcom.com>; Maxime Coquelin
> > >>> <maxime.coque...@redhat.com>; Olivier Matz
> > >>> <olivier.m...@6wind.com>; David Marchand
> > >>> <david.march...@redhat.com>
> > >>> Subject: Re: [PATCH] doc: announce changes to ethdev rxconf
> > >>> structure
> > >>>
> > >>> On Mon, Aug 3, 2020 at 4:28 PM Viacheslav Ovsiienko
> > >>> <viachesl...@mellanox.com> wrote:
> > >>>>
> > >>>> The DPDK datapath in the transmit direction is very flexible.
> > >>>> The applications can build multisegment packets and manage
> > >>>> almost all data aspects - the memory pools where segments are
> > >>>> allocated from, the segment lengths, the memory attributes like
> > >>>> external, registered, etc.
> > >>>>
> > >>>> In the receiving direction, the datapath is much less flexible:
> > >>>> the applications can only specify the memory pool to configure
> > >>>> the receiving queue and nothing more. In order to extend the
> > >>>> receiving datapath capabilities it is proposed to add new
> > >>>> fields to the rte_eth_rxconf structure:
> > >>>>
> > >>>> struct rte_eth_rxconf {
> > >>>>     ...
> > >>>>     uint16_t rx_split_num; /* number of segments to split */
> > >>>>     uint16_t *rx_split_len; /* array of segment lengths */
> > >>>>     struct rte_mempool **mp; /* array of segment memory pools */
> > >>>
> > >>> The pool has the packet length it's been configured for.
> > >>> So I think rx_split_len can be removed.
> > >>
> > >> Yes, it is one of the possible options - if the pointer to the array of
> > >> segment lengths is NULL, the queue_setup() could use the lengths from
> > >> the pool's properties.
> > >> But we are talking about packet split; in general, it should not
> > >> depend on pool properties. What if the application provides a single
> > >> pool and just wants to have the tunnel header in the first dedicated
> > >> mbuf?
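
A rough sketch of how an application might fill the proposed fields for
that use case, assuming the field names from the structure above, a
64-byte tunnel header, and application-provided hdr_pool, data_pool,
dev_info, port_id, queue_id and nb_rxd; the exact API may differ:

  /* Split every received packet: the first 64 bytes (tunnel header)
   * go into a dedicated mbuf from hdr_pool, the rest of the packet
   * into an mbuf from data_pool. Field names follow the proposal
   * above and are not final.
   */
  static uint16_t seg_lens[2] = { 64, 2048 };
  static struct rte_mempool *seg_pools[2];
  struct rte_eth_rxconf rxconf = dev_info.default_rxconf;
  int ret;

  seg_pools[0] = hdr_pool;
  seg_pools[1] = data_pool;
  rxconf.rx_split_num = 2;
  rxconf.rx_split_len = seg_lens;
  rxconf.mp = seg_pools;

  /* Whether the mb_pool argument is then ignored or used as a
   * fallback is one of the details still to be worked out.
   */
  ret = rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd,
                               rte_eth_dev_socket_id(port_id),
                               &rxconf, data_pool);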
> > >>
> > >>>
> > >>> This feature is also available in Marvell HW, so it is not specific
> > >>> to one vendor.
> > >>> Maybe we could just mention the use case in the deprecation notice
> > >>> and the tentative change in rte_eth_rxconf, and the exact details
> > >>> can be worked out at the time of implementation.
> > >>>
> > >> So, if I understand correctly, the struct changes in the commit
> > >> message should be marked as just a possible implementation?
> > >
> > > Yes.
> > >
> > > We may need to have a detailed discussion on the correct abstraction
> > > for the various HW that is available with this feature.
> > >
> > > On Marvell HW, we can configure TWO pools for a given eth Rx queue.
> > > One pool can be configured as a small-packet pool and the other one
> > > as a large-packet pool, and there is a threshold value to decide
> > > between the small and the large pool.
> > > For example:
> > > - The small pool is configured with 2k
> > > - The large pool is configured with 10k
> > > - The threshold value is configured as 2k
> > > Any packet of size <=2k will land in the small pool and larger
> > > packets in the large pool.
> > > The use case we are targeting is to save memory space for jumbo
> > > frames.
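
For reference, a rough sketch of that two-pool arrangement using the
existing mempool API (pool sizes and counts here are illustrative, and
the snippet belongs in the application's port init code); how the 2k
threshold itself would be expressed in rte_eth_rxconf is exactly the
open question in this thread:

  /* Small-packet pool: 2k data room, many buffers. */
  struct rte_mempool *small_pool = rte_pktmbuf_pool_create(
          "rx_small", 8192, 256, 0,
          2048 + RTE_PKTMBUF_HEADROOM, rte_socket_id());

  /* Large-packet pool: 10k data room, fewer buffers, since only
   * jumbo frames are completed from it.
   */
  struct rte_mempool *large_pool = rte_pktmbuf_pool_create(
          "rx_large", 1024, 64, 0,
          10240 + RTE_PKTMBUF_HEADROOM, rte_socket_id());

  /* The HW (or driver) then picks the pool per packet:
   * packet size <= 2k -> small_pool, otherwise -> large_pool.
   */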
> >
> > Out of curiosity, do you provide two different buffer addresses in the
> > descriptor and the HW automatically uses one based on the size, or does
> > the driver use one of the pools based on the configuration and the
> > largest possible packet size?
> 
> I am all for allowing more configuration of the buffer pool.
> But I don't want that to be exposed as a hardware-specific requirement in the
> API for applications. The worst case would be if your API changes required:
> 
>   if (strcmp(dev->driver_name, "marvell") == 0) {
>      // make another mempool for this driver
>   }
> 
I thought about adding some other, vendor-specific segment attributes.
We could describe the segments with some descriptor structure (size, pool)
and add a flags field to it. Proposals from other vendors are welcome.
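
Purely as an illustration, such a per-segment descriptor could look
like this (all names below are placeholders, not part of any patch):

  struct rte_eth_rxseg {
          struct rte_mempool *mp; /* pool to allocate the segment from */
          uint16_t length;        /* segment data length */
          uint16_t offset;        /* data offset in the mbuf, if needed */
          uint32_t flags;         /* vendor/feature-specific hints */
  };

  struct rte_eth_rxconf {
          ...
          uint16_t rx_nseg;                   /* number of descriptors */
          const struct rte_eth_rxseg *rx_seg; /* array of rx_nseg entries */
          ...
  };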
