Hi,

On Fri, 17 Mar 2017 13:09:17 +0800, Yuanhan Liu <yuanhan....@linux.intel.com> 
wrote:
> On Fri, Mar 17, 2017 at 03:46:53AM +0000, Dey, Souvik wrote:
> > Hi,
> > I am trying to call rte_pktmbuf_alloc() from a mempool within a secondary
> > process after doing an rte_mempool_lookup() for the same mempool, but
> > rte_pktmbuf_alloc() crashes with the backtrace below.
> 
> I believe it's yet another "accessing a local process pointer in shared
> memory" issue in the multi-process model. Here is a similar issue I just
> fixed for the virtio PMD in the last release.
> 
>     commit 6d890f8ab51295045a53f41c4d2654bb1f01cf38
>     Author: Yuanhan Liu <yuanhan....@linux.intel.com>
>     Date:   Fri Jan 6 18:16:19 2017 +0800
>     
>         net/virtio: fix multiple process support
>     

Another idea is that your 2 processes (primary and secondary) do not
have the same configuration or build system. This was discussed a bit
here:

http://dpdk.org/dev/patchwork/patch/16868/
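
To expand on that a little: since 16.07 the mempool no longer stores the
handler as function pointers in the (shared) rte_mempool structure; it only
keeps an index (mp->ops_index) into a per-process rte_mempool_ops_table,
which is filled by constructors at startup. If the primary and the secondary
do not end up with the same handlers registered at the same indexes (for
example because of different build options or a different set of linked
handlers), the dequeue callback can resolve to the wrong slot or to NULL,
which would match the call to address 0x0 in your frame #0.

As a quick check, you could print what each process resolves for the pool.
A rough sketch (the helper name and the pool name "mbuf_pool" are
placeholders; it only uses mp->ops_index, rte_mempool_get_ops() and the ops
name/dequeue fields, which exist in 16.07/16.11):

#include <stdio.h>
#include <rte_mempool.h>

/* Print the ops slot this process resolves for a pool; call it in both
 * the primary and the secondary and compare the output. */
static void
dump_pool_ops(const char *pool_name)
{
	struct rte_mempool *mp = rte_mempool_lookup(pool_name);
	struct rte_mempool_ops *ops;

	if (mp == NULL) {
		printf("pool %s not found\n", pool_name);
		return;
	}
	ops = rte_mempool_get_ops(mp->ops_index);
	printf("pool %s: ops_index=%d ops_name=%s dequeue=%p\n",
	       pool_name, mp->ops_index, ops->name, (void *)ops->dequeue);
}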

Can you provide a minimal example application that reproduces the
issue?
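
Something along these lines for the secondary side would be enough (this is
only a sketch of what I have in mind; the pool name "mbuf_pool" and the EAL
arguments are placeholders, and it must be started with --proc-type=secondary
after the primary has created the pool):

#include <rte_eal.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;
	struct rte_mbuf *m;

	/* EAL options must match the primary (same --file-prefix, etc.)
	 * plus --proc-type=secondary. */
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Look up the pool created by the primary with rte_mempool_create(). */
	mp = rte_mempool_lookup("mbuf_pool");
	if (mp == NULL)
		return -1;

	/* This is the call that crashes in your backtrace. */
	m = rte_pktmbuf_alloc(mp);
	if (m != NULL)
		rte_pktmbuf_free(m);

	return 0;
}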

Regards,
Olivier


> 
>       --yliu
> > 
> > #0  0x0000000000000000 in ?? ()
> > #1  0x0000000000423da2 in rte_mempool_ops_dequeue_bulk (n=1, 
> > obj_table=0x7fffffffd8e0, mp=0x7fe910fbd540) at 
> > /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/dist
> > #2  __mempool_generic_get (flags=<optimized out>, cache=<optimized out>, 
> > n=<optimized out>, obj_table=<optimized out>, mp=<optimized out>)
> >     at 
> > /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1296
> > #3  rte_mempool_generic_get (flags=<optimized out>, cache=<optimized out>, 
> > n=<optimized out>, obj_table=<optimized out>, mp=<optimized out>)
> >     at 
> > /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1334
> > #4  rte_mempool_get_bulk (n=1, obj_table=0x7fffffffd8e0, mp=0x7fe910fbd540) 
> > at 
> > /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp
> > #5  rte_mempool_get (obj_p=0x7fffffffd8e0, mp=0x7fe910fbd540) at 
> > /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/r
> > #6  rte_mbuf_raw_alloc (mp=0x7fe910fbd540) at 
> > /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:761
> > #7  rte_pktmbuf_alloc (mp=0x7fe910fbd540) at 
> > /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:1046
> >   
> > From the trace it looks like ops->dequeue is failing as the ops is not set
> > properly.
> > In the primary process I have done rte_mempool_create() with the flags
> > passed as 0 (indicating the mp_mc option). This should have taken care of
> > setting the ops properly. Also, the rte_pktmbuf_alloc() calls in the
> > primary do not give any issues.
> > Both the primary and secondary DPDK app code were working fine with DPDK
> > 2.1, but now that I am linking against newer DPDK versions such as
> > 16.07/16.11, it crashes. No changes have been made to the app code.
> > I do see that the rte_mempool code has changed completely between 2.1 and
> > 16.07, but I could not find any obvious reason for the crash. Is my usage
> > wrong, or do we need to pass a new flag to make this work?
> > 
> > Has anyone faced a similar issue? Any help with this would be great for my
> > debugging. Thanks in advance for the help.
> > 
> > --
> > Regards,
> > Souvik  
