Hi Ewan, 

First of all, that’s a rather old release, and the udp code has changed quite a 
bit recently, so I’d recommend you update to 17.10 or, soon, to 18.01. Second, you 
seem to be crashing while gleaning an entry, so if there’s still a bug, it’s 
probably there. 
From the info provided, most probably you have no handler registered for the 
destination udp port, so udp_local generates an ICMP port-unreachable error. 
That, I suspect, is what eventually leads to the glean node being hit and to the crash. 
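
For reference, a consumer normally claims the port with udp_register_dst_port() 
in its init function, roughly like the sketch below (the port number, the init 
function name and the my_udp_input_node node are placeholders, not anything 
from your setup): 

  #include <vlib/vlib.h>
  #include <vnet/udp/udp.h>

  /* Assumes a graph node my_udp_input_node exists to consume the packets. */
  static clib_error_t *
  my_udp_init (vlib_main_t * vm)
  {
    /* Hand udp packets for this port to our node instead of letting
       udp_local raise ICMP port unreachable. Port 5000 is a placeholder. */
    udp_register_dst_port (vm, 5000, my_udp_input_node.index, 1 /* is_ip4 */);
    return 0;
  }

  VLIB_INIT_FUNCTION (my_udp_init);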

I seem to remember that at one point we had a bug whereby a burst of packets 
would deplete the buffer pool in the glean node. To verify this, set a 
breakpoint at “h0 = vlib_packet_template_get_packet(..)” and check whether h0 is 
null after the call. If it is, the code should catch that with a PREDICT_FALSE 
check and continue, rather than trying to rewrite a null packet. 
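
For context, what I’d expect around that call is something along these lines (a 
sketch based on the frames in your backtrace, with the variable names guessed 
from them, not the exact 17.04 source): 

  h0 = vlib_packet_template_get_packet
    (vm, &im->ip4_arp_request_packet_template, &bi0);

  /* If the buffer pool is exhausted the template call returns 0; bail out
     here instead of handing a null packet to vnet_rewrite_one_header()
     further down. */
  if (PREDICT_FALSE (!h0))
    continue;

A missing guard like that would also line up with frame #1 of your backtrace, 
where packet0=0x0 and the memcpy destination is (void *)0 - 14, i.e. the 
rewrite being written in front of a null packet. 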

Florin

> On Dec 22, 2017, at 11:39 PM, yug...@telincn.com wrote:
> 
> 
> Hi all, 
> If I continuously send local udp packets to vpp (17.02; the dst ip is local, but 
> there is no upper-layer handler to process them), then vpp goes down like this. 
> If I change the next node of "ip4-udp-lookup" from "ip4-icmp-error" to 
> "error_drop", then everything is ok. 
> I'm wondering whether there is a bug. Any clue here? 
> 
> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
> clib_memcpy (n=14, src=0x7fffb7f88be2, dst=0xfffffffffffffff2) at 
> /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vppinfra/memcpy_sse3.h:207
> 207             *(u16 *) dstu = *(const u16 *) srcu;
> (gdb) bt
> #0  clib_memcpy (n=14, src=0x7fffb7f88be2, dst=0xfffffffffffffff2) at 
> /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vppinfra/memcpy_sse3.h:207
> #1  _vnet_rewrite_one_header (most_likely_size=14, max_size=112, packet0=0x0, 
> h0=0x7fffb7f88b70)
>     at 
> /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vnet/rewrite.h:195
> #2  ip4_arp_inline (is_glean=1, frame=<optimized out>, node=0x7fffb5301840, 
> vm=<optimized out>)
>     at 
> /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vnet/ip/ip4_forward.c:2289
> #3  ip4_glean (vm=0x7ffff79aa2a0 <vlib_global_main>, node=0x7fffb5301840, 
> frame=0x7fffb579b200)
>     at 
> /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vnet/ip/ip4_forward.c:2357
> #4  0x00007ffff7757119 in dispatch_node (vm=0x7ffff79aa2a0 
> <vlib_global_main>, node=0x7fffb5301840, type=<optimized out>, 
>     dispatch_state=VLIB_NODE_STATE_POLLING, frame=<optimized out>, 
> last_time_stamp=67522360841383)
>     at /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:998
> #5  0x00007ffff775740d in dispatch_pending_node (vm=vm@entry=0x7ffff79aa2a0 
> <vlib_global_main>, p=0x7fffb8100f70, last_time_stamp=<optimized out>)
>     at /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1144
> #6  0x00007ffff7757e7d in vlib_main_or_worker_loop (is_main=1, 
> vm=0x7ffff79aa2a0 <vlib_global_main>)
>     at /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1588
> #7  vlib_main_loop (vm=0x7ffff79aa2a0 <vlib_global_main>) at 
> /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1608
> #8  vlib_main (vm=vm@entry=0x7ffff79aa2a0 <vlib_global_main>, 
> input=input@entry=0x7fffb4cf8fa0)
>     at /home/vbras/44/VBRASV100R001/vpp1704/build-data/../src/vlib/main.c:1736
> #9  0x00007ffff7790f23 in thread0 (arg=140737347494560) at /home/vbr
> 
> 
> Regards
> Ewan
> yug...@telincn.com 
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
