> From: David Marchand [mailto:david.march...@redhat.com]
> Sent: Monday, 2 October 2023 09.34
> 
> On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger
> <step...@networkplumber.org> wrote:
> >
> > The ring used to store mbufs needs to be multiple producer,
> > multiple consumer because multiple queues on multiple cores
> > might be allocating at the same time (consume), and in case of
> > ring full, the mbufs will be returned (multiple producer).
> 
> I think I get the idea, but can you rephrase please?
> 
> 
> >
> > Bugzilla ID: 1271
> > Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
> 
> This Fixes: tag looks wrong.
> 
> 
> > Signed-off-by: Stephen Hemminger <step...@networkplumber.org>
> > ---
> >  app/dumpcap/main.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> > index 64294bbfb3e6..991174e95022 100644
> > --- a/app/dumpcap/main.c
> > +++ b/app/dumpcap/main.c
> > @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
> >                         data_size = mbuf_size;
> >         }
> >
> > -       mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> > -                                           MBUF_POOL_CACHE_SIZE, 0,
> > -                                           data_size,
> > -                                           rte_socket_id(), "ring_mp_sc");
> > +       mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> > +                                    MBUF_POOL_CACHE_SIZE, 0,
> > +                                    data_size, rte_socket_id());
> 
> Switching to rte_pktmbuf_pool_create() still leaves the user with the
> possibility of shooting themselves in the foot (I was thinking of
> setting the --mbuf-pool-ops-name EAL option).
> 
> This application has explicit requirements in terms of concurrent
> access (and I don't think the mempool library exposes per-driver
> capabilities in that regard).
> So far, the application has enforced the use of mempool/ring.
> 
> I think it is safer to go with an explicit
> rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> WDYT?
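
For concreteness, David's suggestion would amount to something like this in
create_mempool() (a sketch based on the quoted diff, not a tested patch):

    mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
                                        MBUF_POOL_CACHE_SIZE, 0,
                                        data_size, rte_socket_id(),
                                        "ring_mp_mc");

That keeps an explicit, thread-safe ops name regardless of any
--mbuf-pool-ops-name override.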

<feature creep>
Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if either of those mbuf pool
drivers is specified on the command line; otherwise fall back to "ring_mp_mc".
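
A minimal sketch of that selection logic (my code, not part of the patch; it
assumes rte_mbuf_best_mempool_ops() from rte_mbuf_pool_ops.h to obtain the
ops name configured via EAL, or the default one):

    #include <string.h>
    #include <rte_mbuf_pool_ops.h>

    /* Pick the configured mbuf pool ops if it is a ring variant known to
     * be multi-thread safe; otherwise fall back to "ring_mp_mc".
     */
    static const char *
    select_pool_ops(void)
    {
            const char *ops = rte_mbuf_best_mempool_ops();

            if (strcmp(ops, "ring_mt_rts") == 0 ||
                strcmp(ops, "ring_mt_hts") == 0 ||
                strcmp(ops, "ring_mp_mc") == 0)
                    return ops;

            return "ring_mp_mc";
    }

create_mempool() would then pass select_pool_ops() as the ops_name argument
to rte_pktmbuf_pool_create_by_ops().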

Actually, I prefer Stephen's suggestion of using the default mbuf pool driver. 
The option is there for a reason.

However, David is right: we want to prevent the user from using a thread-unsafe
mempool driver in this use case.

And I guess there might be use cases other than this one where a thread-safe
mempool driver is required. So adding a generalized function to get the
"upgraded" (i.e. thread-safe) variant of a mempool driver would be nice; see
the sketch after this aside.
</feature creep>
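
A sketch of what such an "upgrade" helper could look like (the function name
and the mapping table are mine; nothing like this exists in the mempool
library today):

    /* Return the thread-safe variant of a ring ops name, or NULL if the
     * driver is unknown and the caller must decide for itself.
     */
    static const char *
    mempool_ops_thread_safe_variant(const char *ops)
    {
            /* Any single-producer and/or single-consumer ring flavor is
             * upgraded to the full multi-producer/multi-consumer one.
             */
            if (strcmp(ops, "ring_sp_sc") == 0 ||
                strcmp(ops, "ring_mp_sc") == 0 ||
                strcmp(ops, "ring_sp_mc") == 0 ||
                strcmp(ops, "ring_mp_mc") == 0)
                    return "ring_mp_mc";

            /* The RTS and HTS ring flavors are already MT-safe. */
            if (strcmp(ops, "ring_mt_rts") == 0 ||
                strcmp(ops, "ring_mt_hts") == 0)
                    return ops;

            return NULL;
    }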

Feel free to ignore my suggested feature creep, and go ahead with David's 
suggestion instead.

> 
> 
> >         if (mp == NULL)
> >                 rte_exit(EXIT_FAILURE,
> >                          "Mempool (%s) creation failed: %s\n", pool_name,
> > --
> > 2.39.2
> >
> 
> Thanks.
> 
> --
> David Marchand
