I changed it to rte_pktmbuf_free and it still happens; the updated loop is sketched after the quoted message below. Note that the segmentation fault happens before the free call anyway.
I am using 1.7.1 at the moment. I can try using a newer version.

On 23 Mar 2015 17:00, "Bruce Richardson" <bruce.richardson at intel.com> wrote:
> On Mon, Mar 23, 2015 at 04:24:18PM +0200, Dor Green wrote:
> > I'm running a small app which captures packets on a single lcore and
> > then passes them to other workers for processing.
> >
> > Before even sending them for processing, I get a segfault when checking
> > some minor information in the packet mbuf's data.
> >
> > This code, for example, gets a segfault:
> >
> > struct rte_mbuf *pkts[PKTS_BURST_SIZE];
> >
> > for (p = 0; p < portnb; ++p) {
> >     nbrx = rte_eth_rx_burst(p, 0, pkts, PKTS_BURST_SIZE);
> >
> >     if (unlikely(nbrx == 0)) {
> >         continue;
> >     }
> >
> >     for (i = 0; likely(i < nbrx); i++) {
> >         printf("Pkt %c\n", pkts[i]->pkt->data[0]);
> >         rte_mempool_put(pktmbuf_pool, (void *const)pkts[i]);
> >     }
> > }
> >
> > This doesn't happen on most packets, but when I used packets from a
> > certain cap it happened often (SSL traffic). In gdb the packet objects
> > looked like this:
> >
> > {next = 0x0, data = 0x62132136406a6f6, data_len = 263, nb_segs = 1 '\001',
> >  in_port = 0 '\000', pkt_len = 263, vlan_macip = {data = 55111,
> >  f = {l3_len = 327, l2_len = 107, vlan_tci = 0}}, hash = {rss = 311317915,
> >  fdir = {hash = 21915, id = 4750}, sched = 311317915}} (Invalid)
> >
> > {next = 0x0, data = 0x7ffe43d8f640, data_len = 73, nb_segs = 1 '\001',
> >  in_port = 0 '\000', pkt_len = 73, vlan_macip = {data = 0,
> >  f = {l3_len = 0, l2_len = 0, vlan_tci = 0}}, hash = {rss = 311317915,
> >  fdir = {hash = 21915, id = 4750}, sched = 311317915}} (Valid)
> >
> > {next = 0x0, data = 0x7ffe43d7fa40, data_len = 74, nb_segs = 1 '\001',
> >  in_port = 0 '\000', pkt_len = 74, vlan_macip = {data = 0,
> >  f = {l3_len = 0, l2_len = 0, vlan_tci = 0}}, hash = {rss = 311317915,
> >  fdir = {hash = 21915, id = 4750}, sched = 311317915}} (Valid)
> >
> > {next = 0x0, data = 0x7ffe43d7ff80, data_len = 66, nb_segs = 1 '\001',
> >  in_port = 0 '\000', pkt_len = 66, vlan_macip = {data = 0,
> >  f = {l3_len = 0, l2_len = 0, vlan_tci = 0}}, hash = {rss = 311317915,
> >  fdir = {hash = 21915, id = 4750}, sched = 311317915}} (Valid)
> >
> > {next = 0x0, data = 0x28153a8e63b3afc4, data_len = 263, nb_segs = 1 '\001',
> >  in_port = 0 '\000', pkt_len = 263, vlan_macip = {data = 59535,
> >  f = {l3_len = 143, l2_len = 116, vlan_tci = 0}}, hash = {rss = 311317915,
> >  fdir = {hash = 21915, id = 4750}, sched = 311317915}} (Invalid)
> >
> > Note that in the first packet, the length does not match the actual
> > packet length (it does in the last one, though). The rest of the packets
> > are placed in the hugemem range as they should be.
> >
> > I'm running on Linux 3.2.0-77, the NIC is a "10G 2P X520", and I have
> > 4 x 1GB huge pages.
> >
> > Any ideas would be appreciated.
>
> What version of DPDK are you using? If you update the code to work with the
> latest code (or 2.0.0-rc2), does the problem still occur? Also, does it make
> any difference calling rte_pktmbuf_free rather than calling mempool_put
> directly?
>
> /Bruce
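
For reference, the change amounts to roughly the loop below. This is only a
sketch against the 1.7.x API: rte_eth_rx_burst, rte_pktmbuf_mtod and
rte_pktmbuf_free are the actual DPDK calls, while the function name
rx_and_drop and the PKTS_BURST_SIZE value are placeholders.

#include <stdio.h>

#include <rte_branch_prediction.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define PKTS_BURST_SIZE 32  /* assumed value; use whatever the app defines */

/* Same structure as the original loop, but packet data is read through
 * rte_pktmbuf_mtod() and mbufs are returned with rte_pktmbuf_free(). */
static void
rx_and_drop(uint8_t portnb)
{
    struct rte_mbuf *pkts[PKTS_BURST_SIZE];
    uint16_t nbrx;
    uint16_t i;
    uint8_t p;

    for (p = 0; p < portnb; ++p) {
        nbrx = rte_eth_rx_burst(p, 0, pkts, PKTS_BURST_SIZE);
        if (unlikely(nbrx == 0))
            continue;

        for (i = 0; i < nbrx; i++) {
            /* Pointer to the first byte of packet data, independent of
             * the internal mbuf layout of the DPDK version in use. */
            const char *data = rte_pktmbuf_mtod(pkts[i], const char *);

            printf("Pkt %c\n", data[0]);
            rte_pktmbuf_free(pkts[i]);
        }
    }
}

rte_pktmbuf_mtod() avoids touching the internal data fields directly (their
layout has changed between DPDK releases), and rte_pktmbuf_free() returns
every segment of a chained mbuf to its pool, which a raw rte_mempool_put()
does not.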