On Fri, Jul 05, 2019 at 03:48:01PM +0200, Olivier Matz wrote:
> Hi,
> 
> On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> > 15/03/2019 16:27, Harman Kalra:
> > > Since pdump uses SW rings to manage packets hence
> > > pdump should use SW ring mempool for managing its
> > > own copy of packets.
> > 
> > I'm not sure to understand the reasoning.
> > Reshma, Olivier, Andrew, any opinion?
> > 
> > Let's take a decision for this very old patch.
> 
> From what I understand, many mempools of packets are created, to
> store the copy of dumped packets. I suppose that it may not be
> possible to create as many mempools by using the "best" mbuf pool
> (from rte_mbuf_best_mempool_ops()).
> 
> Using a "ring_mp_mc" as mempool ops should always be possible.
> I think it would be safer to use "ring_mp_mc" instead of
> CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
> overridden on a specific platform.
> 
> Olivier

Here are the reasons for this patch:
1. The dpdk-pdump app creates a mempool for receiving packets (from the
primary process) into mbufs, which are then tx'ed to the pcap device and
freed. Creating this mempool on top of a HW mempool was generating a
segmentation fault, because the HW mempool's VFIO device is set up by the
primary process and the secondary process has no access to its BAR regions
(see the first sketch at the end of this mail).

2. Setting up a separate HW mempool VFIO device for the secondary process
generates the following error:
"cannot find TAILQ entry for PCI device!"
http://git.dpdk.org/dpdk/tree/drivers/bus/pci/linux/pci_vfio.c#n823
which means the secondary process cannot set up a new device that was not
set up by the primary.

3. Since pdump creates this mempool only for its own local copies of the
packets, we see no need for a HW mempool here; in our opinion a SW mempool
is capable enough to work in all conditions. Forcing SW ring mempool ops,
as Olivier suggests, would look like the second sketch below.
