Re: [vpp-dev] vpp crashes on deleting route 0.0.0.0/0 via interface #vpp

2020-01-15 Thread Aleksander Djuric
Hi!

This patch can probably help:
https://gerrit.fd.io/r/c/vpp/+/24341

Regards
---
Aleksander


Re: [vpp-dev] Arm verify job broken

2020-01-15 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
> accidentally separated two scripts that had to stay together.

My fault, I had no idea this kind of coupling existed.
Adding comments [1] to prevent similar errors in the future.

Vratko.

[1] https://gerrit.fd.io/r/c/ci-management/+/24348

From: vpp-dev@lists.fd.io  On Behalf Of Christian Hopps
Sent: Wednesday, January 15, 2020 1:10 AM
To: Ed Kern (ejk) 
Cc: Christian Hopps; Andrew 👽 Yourtchenko; Florin Coras; vpp-dev

Subject: Re: [vpp-dev] Arm verify job broken



> On Jan 14, 2020, at 6:27 PM, Ed Kern via Lists.Fd.Io <ejk=cisco@lists.fd.io> wrote:
>
> short answer: fixed..
>
>
> longer answer:  a ci-management patch which did a whole bunch of cleanup and
> readability improvements accidentally separated two scripts that had to stay
> together.  Having the scripts apart meant arm was using only 1 core instead
> of the correct 16 cores.
>
> This led to all arm jobs answering the question ‘what does a yellow light
> mean’  (apologies to those that miss the taxi reference)

Now that's a blast from the past... :)

Thanks,
Chris.


>
> ci-man patch being merged now…all subsequent jobs should be back to normal 
> timing.
>
> Ed
>
>
>
>> On Jan 14, 2020, at 1:54 PM, Andrew 👽 Yourtchenko <ayour...@gmail.com> wrote:
>>
>> better to debug some before changing anything.
>>
>> With my RM hat on I would rather understand what is going on, if it is 24 
>> hours before the RC1 milestone ;-)
>>
>> --a
>>
>>> On 14 Jan 2020, at 21:28, Ed Kern via Lists.Fd.Io <ejk=cisco@lists.fd.io> wrote:
>>>
>>>  ok FOR THE MOST PART…  build times haven’t changed  (@20 min)
>>>
>>> the make test portion has gone off the chain and is often hitting the
>>> 120 min build timeout threshold.
>>>
>>> I've currently got several jobs running on sandbox, testing:
>>>
>>> a. without any timeouts (just to see if we are still passing on master)
>>> b. three different memory and disk profiles (just in case someone/thing got 
>>> messy and we have had another memory spike)
>>>
>>> the bad news is that it will more than likely be at least a couple of
>>> hours before I get any results that I feel are actionable.
>>>
>>>
>>> Having said that, I'm happy to shoot the build timeout for the time being
>>> if committers feel that is warranted to ‘keep the train rolling’.
>>>
>>> Ed
>>>
>>>
>>>
>>>
 On Jan 14, 2020, at 1:07 PM, Florin Coras <fcoras.li...@gmail.com> wrote:

 Thanks, Ed!

 Florin

> On Jan 14, 2020, at 11:53 AM, Ed Kern (ejk) <e...@cisco.com> wrote:
>
> looking into it now…..
>
> it's a strange one, I'll tell you that up front… failures are all over the
> place inside the build, and some are even hitting the 120 min timeout…
>
> more as i dig hopefully
>
> thanks for the ping
>
> Ed
>
>
>
>> On Jan 14, 2020, at 11:40 AM, Florin Coras <fcoras.li...@gmail.com> wrote:
>>
>> Hi,
>>
>> Jobs have been failing since yesterday. Did anybody try to look into it?
>>
>> Regards,
>> Florin
>>
>

>>>
>>>
>
>



[vpp-dev] Coverity run FAILED as of 2020-01-15 14:00:24 UTC

2020-01-15 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues: 2
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects


Re: [vpp-dev] vpp crashes on deleting route 0.0.0.0/0 via interface #vpp

2020-01-15 Thread Segey Yelantsev
Hi!

2Neale: my scenario is taking MPLS traffic and routing it to a specific ip
table based on the MPLS label. Then IP packets should be routed within that
specific table with some routes pointing to virtual devices, including a
default. The backtrace and the reason for firing the assert are the same as
in this gre scenario. And it works for table 0, but I wonder why it doesn't
work for a non-zero ip table.

Unfortunately, Aleksander's patch did not help:
```
DBGvpp# ip table add 10
DBGvpp# create gre tunnel src 1.1.1.1 dst 2.2.2.2
gre0
DBGvpp# ip route add 0.0.0.0/0 table 10 via gre0
DBGvpp# sh ip fib table 10 0.0.0.0/0
ipv4-VRF:10, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0
flags:none locks:[CLI:1, ]
0.0.0.0/0 fib:1 index:7 locks:3
  CLI refs:1 entry-flags:attached,import, src-flags:added,contributing,active,
    path-list:[16] locks:2 flags:shared, uPRF-list:12 len:1 itfs:[1, ]
      path:[16] pl-index:16 ip4 weight=1 pref=0 attached: gre0
  default-route refs:1 entry-flags:drop, src-flags:added,
    path-list:[11] locks:1 flags:drop, uPRF-list:7 len:0 itfs:[]
      path:[11] pl-index:11 ip4 weight=1 pref=0 special: cfg-flags:drop,
        [@0]: dpo-drop ip4
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:12 to:[0:0]]
    [0] [@0]: dpo-drop ip4
DBGvpp# ip route del 0.0.0.0/0 table 10 via gre0
/home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367
(fib_attached_export_purge) assertion `NULL != fed' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x75ac9801 in __GI_abort () at abort.c:79
#2 0xbe0b in os_panic () at /home/elantsev/vpp/src/vpp/vnet/main.c:355
#3 0x75eacde9 in debugger () at /home/elantsev/vpp/src/vppinfra/error.c:84
#4 0x75ead1b8 in _clib_error (how_to_die=2, function_name=0x0,
line_number=0, fmt=0x77743bb8 "%s:%d (%s) assertion `%s' fails") at
/home/elantsev/vpp/src/vppinfra/error.c:143
#5 0x774cbbe9 in fib_attached_export_purge (fib_entry=0x7fffb83886b0)
at /home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367
#6 0x774919de in fib_entry_post_flag_update_actions
(fib_entry=0x7fffb83886b0, old_flags=(FIB_ENTRY_FLAG_ATTACHED |
FIB_ENTRY_FLAG_IMPORT)) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:674
#7 0x77491a3c in fib_entry_post_install_actions
(fib_entry=0x7fffb83886b0, source=FIB_SOURCE_DEFAULT_ROUTE,
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT))
at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:709
#8 0x77491d78 in fib_entry_post_update_actions
(fib_entry=0x7fffb83886b0, source=FIB_SOURCE_DEFAULT_ROUTE,
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT))
at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:804
#9 0x774923f9 in fib_entry_source_removed (fib_entry=0x7fffb83886b0,
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT)) at
/home/elantsev/vpp/src/vnet/fib/fib_entry.c:992
#10 0x774925e7 in fib_entry_path_remove (fib_entry_index=7,
source=FIB_SOURCE_CLI, rpaths=0x7fffb83a34b0) at
/home/elantsev/vpp/src/vnet/fib/fib_entry.c:1072
#11 0x7747980b in fib_table_entry_path_remove2 (fib_index=1,
prefix=0x7fffb8382b00, source=FIB_SOURCE_CLI, rpaths=0x7fffb83a34b0) at
/home/elantsev/vpp/src/vnet/fib/fib_table.c:680
#12 0x76fb870c in vnet_ip_route_cmd (vm=0x766b6680,
main_input=0x7fffb8382f00, cmd=0x7fffb50373b8) at
/home/elantsev/vpp/src/vnet/ip/lookup.c:449
#13 0x763d402f in vlib_cli_dispatch_sub_commands (vm=0x766b6680,
cm=0x766b68b0, input=0x7fffb8382f00, parent_command_index=431)
at /home/elantsev/vpp/src/vlib/cli.c:568
#14 0x763d3ead in vlib_cli_dispatch_sub_commands (vm=0x766b6680,
cm=0x766b68b0, input=0x7fffb8382f00, parent_command_index=0)
at /home/elantsev/vpp/src/vlib/cli.c:528
#15 0x763d4434 in vlib_cli_input (vm=0x766b6680,
input=0x7fffb8382f00, function=0x7646dc89, function_arg=0) at
/home/elantsev/vpp/src/vlib/cli.c:667
#16 0x7647476d in unix_cli_process_input (cm=0x766b7020,
cli_file_index=0) at /home/elantsev/vpp/src/vlib/unix/cli.c:2572
#17 0x7647540e in unix_cli_process (vm=0x766b6680,
rt=0x7fffb8342000, f=0x0) at /home/elantsev/vpp/src/vlib/unix/cli.c:2688
#18 0x764161d4 in vlib_process_bootstrap (_a=140736272894320) at
/home/elantsev/vpp/src/vlib/main.c:1475
#19 0x75eccfbc in clib_calljmp () at
/home/elantsev/vpp/src/vppinfra/longjmp.S:123
#20 0x7fffb78d8940 in ?? ()
#21 0x764162dc in vlib_process_startup (vm=0x0, p=0x8,
f=0x766b6680) at /home/elantsev/vpp/src/vlib/main.c:1497
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb)
```

15.01.2020, 01:54, "Neale Ranns via Lists.Fd.Io" :
Hi,

Thanks for the bug report, I’ll fix the crash.

A question for you.
DBGvpp# ip route add 0.0.0.0/0 table 10 via gre0
Says “all destinati

[vpp-dev] Issues in VNET infra

2020-01-15 Thread Rajith PR
Hello Team,

During our integration with the VPP stack we have found a couple of problems
in the VNET infra and would like to seek your help in resolving them:

1. Is there any way to disable a hardware interface (e.g. a memif interface
or a host interface)? Neither vnet_hw_interface_t nor
vnet_hw_interface_flags_t seems to have an attribute or state for admin
enable/disable.
2. Is there any way to disable an untagged software interface? It seems that
in VPP the untagged software interface that gets created is also the parent
port, and disabling it has the implication of disabling all the
sub-interfaces under that hardware interface.
3. With regard to memif interfaces, in a single vpp instance we are not able
to create two memif interfaces with one as master and another as slave. Can
someone let us know how this can be done? The configuration we are after is
sketched below.
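
The kind of configuration we are after, as an illustrative sketch (CLI syntax
from the memif plugin as we understand it; socket ids and file names are
placeholders, not from a working setup): each role gets its own socket file
within the one instance:

vpp# create memif socket id 1 filename /run/vpp/memif-master.sock
vpp# create interface memif socket-id 1 id 0 master
vpp# create memif socket id 2 filename /run/vpp/memif-slave.sock
vpp# create interface memif socket-id 2 id 0 slave

Here the slave would connect to whatever listener owns
/run/vpp/memif-slave.sock, typically a master in another process.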

Thanks,
Rajith


[vpp-dev] vpp crashes on deleting route 0.0.0.0/0 via interface

2020-01-15 Thread Segey Yelantsev
Hello Everyone!

I've encountered an issue with deleting a route to 0.0.0.0/0 via a virtual
interface: vpp crashes with a SIGABRT. The issue can be reproduced with a gre
interface on the current master (1c6486f7b8a00a1358d5c8f4ea1d874073bbcd6c):

DBGvpp# ip table add 10
DBGvpp# create gre tunnel src 1.1.1.1 dst 2.2.2.2
gre0

DBGvpp# ip route add 0.0.0.0/0 table 10 via gre0
DBGvpp# sh ip fib table 10
ipv4-VRF:10, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 
flags:none locks:[CLI:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:12 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:8 to:[0:0]]
[0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:10 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:9 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:11 to:[0:0]]
[0] [@0]: dpo-drop ip4

/home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367 
(fib_attached_export_purge) assertion `NULL != fed' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.

(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x75ac9801 in __GI_abort () at abort.c:79
#2  0xbe0b in os_panic () at 
/home/elantsev/vpp/src/vpp/vnet/main.c:355
#3  0x75eacde9 in debugger () at 
/home/elantsev/vpp/src/vppinfra/error.c:84
#4  0x75ead1b8 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x77743bb8 "%s:%d (%s) assertion `%s' fails") at 
/home/elantsev/vpp/src/vppinfra/error.c:143
#5  0x774cbbe0 in fib_attached_export_purge (fib_entry=0x7fffb4bd2dd0) 
at /home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367
#6  0x774919de in fib_entry_post_flag_update_actions 
(fib_entry=0x7fffb4bd2dd0, old_flags=(FIB_ENTRY_FLAG_ATTACHED | 
FIB_ENTRY_FLAG_IMPORT)) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:674
#7  0x77491a3c in fib_entry_post_install_actions 
(fib_entry=0x7fffb4bd2dd0, source=FIB_SOURCE_DEFAULT_ROUTE, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT))
at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:709
#8  0x77491d78 in fib_entry_post_update_actions 
(fib_entry=0x7fffb4bd2dd0, source=FIB_SOURCE_DEFAULT_ROUTE, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT))
at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:804
#9  0x774923f9 in fib_entry_source_removed (fib_entry=0x7fffb4bd2dd0, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT)) at 
/home/elantsev/vpp/src/vnet/fib/fib_entry.c:992
#10 0x774925e7 in fib_entry_path_remove (fib_entry_index=7, 
source=FIB_SOURCE_CLI, rpaths=0x7fffb83a3520) at 
/home/elantsev/vpp/src/vnet/fib/fib_entry.c:1072
#11 0x7747980b in fib_table_entry_path_remove2 (fib_index=1, 
prefix=0x7fffb8382b00, source=FIB_SOURCE_CLI, rpaths=0x7fffb83a3520) at 
/home/elantsev/vpp/src/vnet/fib/fib_table.c:680
#12 0x76fb870c in vnet_ip_route_cmd (vm=0x766b6680 
, main_input=0x7fffb8382f00, cmd=0x7fffb50373b8) at 
/home/elantsev/vpp/src/vnet/ip/lookup.c:449
#13 0x763d402f in vlib_cli_dispatch_sub_commands (vm=0x766b6680 
, cm=0x766b68b0 , 
input=0x7fffb8382f00, parent_command_index=431)
at /home/elantsev/vpp/src/vlib/cli.c:568
#14 0x763d3ead in vlib_cli_dispatch_sub_commands (vm=0x766b6680 
, cm=0x766b68b0 , 
input=0x7fffb8382f00, parent_command_index=0)
at /home/elantsev/vpp/src/vlib/cli.c:528
#15 0x763d4434 in vlib_cli_input (vm=0x766b6680 , 
input=0x7fffb8382f00, function=0x7646dc89 , 
function_arg=0) at /home/elantsev/vpp/src/vlib/cli.c:667
#16 0x7647476d in unix_cli_process_input (cm=0x766b7020 
, cli_file_index=0) at 
/home/elantsev/vpp/src/vlib/unix/cli.c:2572
#17 0x7647540e in unix_cli_process (vm=0x766b6680 
, rt=0x7fffb8342000, f=0x0) at 
/home/elantsev/vpp/src/vlib/unix/cli.c:2688
#18 0x764161d4 in vlib_process_bootstrap (_a=140736272894320) at 
/home/elantsev/vpp/src/vlib/main.c:1475
#19 0x75eccfbc in clib_calljmp () at 
/home/elantsev/vpp/src/vppinfra/longjmp.S:123
#20 0x7fffb78d8940 in ?? ()
#21 0x764162dc in vlib_process_startup (vm=0x0, p=0x8, f=0x766b6680 
) at /home/elantsev/vpp/src/vlib/main.c:1497
Backtrace stopped: previous frame inner to this frame (corrupt stack?)


#4  0x75ead1b8 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x77743bb8 "%s:%d (%s) assertion `%s' fails") at 
/home/elantsev/vpp/src/vppinfra/error.c:143
msg = 0x0
va = {{gp_o

Re: [vpp-dev] vpp crashes on deleting route 0.0.0.0/0 via interface #vpp

2020-01-15 Thread Neale Ranns via Lists.Fd.Io

Hi Sergey,

It would work if your gre interface were also in the non-default table, i.e.
  set int ip table 10 gre0

/neale
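
In full, the working sequence would look like this (a sketch assembled from
the reproduction earlier in the thread, untested here; the exact argument
order of 'set int ip table' may vary by release):

DBGvpp# ip table add 10
DBGvpp# create gre tunnel src 1.1.1.1 dst 2.2.2.2
gre0
DBGvpp# set int ip table 10 gre0
DBGvpp# ip route add 0.0.0.0/0 table 10 via gre0

With gre0 bound to table 10, the attached default route and its interface
resolve in the same FIB, which is presumably what the attached-export code
expects.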

From: Segey Yelantsev 
Date: Thursday 16 January 2020 at 02:16
To: "Neale Ranns (nranns)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] vpp crashes on deleting route 0.0.0.0/0 via interface 
#vpp

Hi!

2Neale: my scenario is taking MPLS traffic and routing it to a specific ip
table based on the MPLS label. Then IP packets should be routed within that
specific table with some routes pointing to virtual devices, including a
default. The backtrace and the reason for firing the assert are the same as
in this gre scenario. And it works for table 0, but I wonder why it doesn't
work for a non-zero ip table.
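
The label-to-table step itself looks roughly like this (a sketch only; the
exact 'mpls local-label' grammar may vary by release, and the interface name
is a placeholder):

DBGvpp# set interface mpls GigabitEthernet0/8/0 enable
DBGvpp# mpls local-label add eos 100 via ip4-lookup-in-table 10

so that packets arriving with label 100 are looked up in ip4 table 10, where
the routes to the virtual devices (and the default) live.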

Unfortunately, Aleksander's patch did not help:
```
DBGvpp# ip table add 10
DBGvpp# create gre tunnel src 1.1.1.1 dst 2.2.2.2
gre0
DBGvpp# ip route add 0.0.0.0/0 table 10 via gre0
DBGvpp# sh ip fib table 10 0.0.0.0/0
ipv4-VRF:10, fib_index:1, flow hash:[src dst sport dport proto ] epoch:0 
flags:none locks:[CLI:1, ]
0.0.0.0/0 fib:1 index:7 locks:3
CLI refs:1 entry-flags:attached,import, src-flags:added,contributing,active,
path-list:[16] locks:2 flags:shared, uPRF-list:12 len:1 itfs:[1, ]
path:[16] pl-index:16 ip4 weight=1 pref=0 attached: gre0

default-route refs:1 entry-flags:drop, src-flags:added,
path-list:[11] locks:1 flags:drop, uPRF-list:7 len:0 itfs:[]
path:[11] pl-index:11 ip4 weight=1 pref=0 special: cfg-flags:drop,
[@0]: dpo-drop ip4

forwarding: unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:12 to:[0:0]]
[0] [@0]: dpo-drop ip4
DBGvpp# ip route del 0.0.0.0/0 table 10 via gre0
/home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367 
(fib_attached_export_purge) assertion `NULL != fed' fails

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x75ac9801 in __GI_abort () at abort.c:79
#2 0xbe0b in os_panic () at 
/home/elantsev/vpp/src/vpp/vnet/main.c:355
#3 0x75eacde9 in debugger () at 
/home/elantsev/vpp/src/vppinfra/error.c:84
#4 0x75ead1b8 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x77743bb8 "%s:%d (%s) assertion `%s' fails") at 
/home/elantsev/vpp/src/vppinfra/error.c:143
#5 0x774cbbe9 in fib_attached_export_purge (fib_entry=0x7fffb83886b0) 
at /home/elantsev/vpp/src/vnet/fib/fib_attached_export.c:367
#6 0x774919de in fib_entry_post_flag_update_actions 
(fib_entry=0x7fffb83886b0, old_flags=(FIB_ENTRY_FLAG_ATTACHED | 
FIB_ENTRY_FLAG_IMPORT)) at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:674
#7 0x77491a3c in fib_entry_post_install_actions 
(fib_entry=0x7fffb83886b0, source=FIB_SOURCE_DEFAULT_ROUTE, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT))
at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:709
#8 0x77491d78 in fib_entry_post_update_actions 
(fib_entry=0x7fffb83886b0, source=FIB_SOURCE_DEFAULT_ROUTE, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT))
at /home/elantsev/vpp/src/vnet/fib/fib_entry.c:804
#9 0x774923f9 in fib_entry_source_removed (fib_entry=0x7fffb83886b0, 
old_flags=(FIB_ENTRY_FLAG_ATTACHED | FIB_ENTRY_FLAG_IMPORT)) at 
/home/elantsev/vpp/src/vnet/fib/fib_entry.c:992
#10 0x774925e7 in fib_entry_path_remove (fib_entry_index=7, 
source=FIB_SOURCE_CLI, rpaths=0x7fffb83a34b0) at 
/home/elantsev/vpp/src/vnet/fib/fib_entry.c:1072
#11 0x7747980b in fib_table_entry_path_remove2 (fib_index=1, 
prefix=0x7fffb8382b00, source=FIB_SOURCE_CLI, rpaths=0x7fffb83a34b0) at 
/home/elantsev/vpp/src/vnet/fib/fib_table.c:680
#12 0x76fb870c in vnet_ip_route_cmd (vm=0x766b6680 
, main_input=0x7fffb8382f00, cmd=0x7fffb50373b8) at 
/home/elantsev/vpp/src/vnet/ip/lookup.c:449
#13 0x763d402f in vlib_cli_dispatch_sub_commands (vm=0x766b6680 
, cm=0x766b68b0 , 
input=0x7fffb8382f00, parent_command_index=431)
at /home/elantsev/vpp/src/vlib/cli.c:568
#14 0x763d3ead in vlib_cli_dispatch_sub_commands (vm=0x766b6680 
, cm=0x766b68b0 , 
input=0x7fffb8382f00, parent_command_index=0)
at /home/elantsev/vpp/src/vlib/cli.c:528
#15 0x763d4434 in vlib_cli_input (vm=0x766b6680 , 
input=0x7fffb8382f00, function=0x7646dc89 , 
function_arg=0) at /home/elantsev/vpp/src/vlib/cli.c:667
#16 0x7647476d in unix_cli_process_input (cm=0x766b7020 
, cli_file_index=0) at 
/home/elantsev/vpp/src/vlib/unix/cli.c:2572
#17 0x7647540e in unix_cli_process (vm=0x766b6680 
, rt=0x7fffb8342000, f=0x0) at 
/home/elantsev/vpp/src/vlib/unix/cli.c:2688
#18 0x764161d4 in vlib_process_bootstrap (_a=140736272894320) at 
/home/elantsev/vpp/src/vlib/main.c:1475
#19 0x75eccfbc in clib_calljmp () at 
/home/elantsev/vpp/src/vppinfra/longjmp.S:123
#20 0x0

Re: [vpp-dev] vnet_gso_header_offset_parser error if vlib_buffer_t without ethernet_header_t

2020-01-15 Thread Mohsin Kazmi via Lists.Fd.Io
Hello,

Can you please provide details about your setup, configuration, traffic flow 
and interfaces you are using?

-br
Mohsin

From: vpp-dev@lists.fd.io on behalf of "jiangxiaom...@outlook.com"

Date: Wednesday, January 15, 2020 at 3:01 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vnet_gso_header_offset_parser error if vlib_buffer_t without 
ethernet_header_t

If a vlib_buffer_t has no ethernet_header_t, vnet_gso_header_offset_parser
will fail. I think that if a vlib_buffer_t carries no valid L2 header,
vnet_gso_header_offset_parser should skip the ethernet_header_t parse; a
standalone sketch of the idea follows.
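
The proposed guard, as a standalone toy model (none of the names below are
real VPP symbols; the real parser is vnet_gso_header_offset_parser and the
closest real flag is VNET_BUFFER_F_L2_HDR_OFFSET_VALID, so treat this purely
as an illustration of the idea, not a patch):

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for VNET_BUFFER_F_L2_HDR_OFFSET_VALID. */
#define L2_HDR_VALID (1 << 0)

typedef struct
{
  uint32_t flags;
  int16_t l2_hdr_offset;	/* meaningful only when L2_HDR_VALID is set */
  int16_t l3_hdr_offset;	/* where the IP header actually starts */
} toy_buf_t;

/* Only parse an ethernet header when the buffer actually carries one;
 * otherwise trust the recorded L3 offset instead of deriving
 * ip_header_size = 0 and tripping the assert shown below. */
static int16_t
toy_l3_offset (const toy_buf_t * b)
{
  if (b->flags & L2_HDR_VALID)
    return b->l2_hdr_offset + 14;	/* untagged ethernet header */
  return b->l3_hdr_offset;
}

int
main (void)
{
  toy_buf_t ip_only = { .flags = 0, .l3_hdr_offset = 0 };
  printf ("l3 starts at offset %d\n", toy_l3_offset (&ip_only));
  return 0;
}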
Below is the VPP crash message:

0: /home/dev/code/net-base/build/vpp/src/vnet/ip/ip.h:205 
(ip_calculate_l4_checksum) assertion `ip_header_size' fails

Program received signal SIGABRT, Aborted.
0x74a1d337 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:55
55        return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) bt
#0  0x74a1d337 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:55
#1  0x74a1ea28 in __GI_abort () at abort.c:90
#2  0x00407458 in os_panic () at 
/home/dev/code/net-base/build/vpp/src/vpp/vnet/main.c:355
#3  0x75864d1f in debugger () at 
/home/dev/code/net-base/build/vpp/src/vppinfra/error.c:84
#4  0x758650ee in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x776abc88 "%s:%d (%s) assertion `%s' fails") at 
/home/dev/code/net-base/build/vpp/src/vppinfra/error.c:143
#5  0x76ed5472 in ip_calculate_l4_checksum (vm=0x766aa640 
, p0=0x103fdada00, sum0=189151184350281389, 
payload_length=2560, iph=0x103fdadb5e "", ip_header_size=0, l4h=0x0) at 
/home/dev/code/net-base/build/vpp/src/vnet/ip/ip.h:205
#6  0x76edd92f in ip4_tcp_udp_compute_checksum (vm=0x766aa640 
, p0=0x103fdada00, ip0=0x103fdadb5e) at 
/home/dev/code/net-base/build/vpp/src/vnet/ip/ip4_forward.c:1328
#7  0x76ca5772 in calc_checksums (vm=0x766aa640 , 
b=0x103fdada00) at 
/home/dev/code/net-base/build/vpp/src/vnet/interface_output.c:189
#8  0x76ca64fc in vnet_interface_output_node_inline (vm=0x766aa640 
, node=0x7fffd71a3cc0, frame=0x7fffd7936b40, 
vnm=0x77b6a9c0 , hi=0x7fffd78848c0, do_tx_offloads=1) at 
/home/dev/code/net-base/build/vpp/src/vnet/interface_output.c:450
#9  0x76ca67ff in vnet_interface_output_node (vm=0x766aa640 
, node=0x7fffd71a3cc0, frame=0x7fffd7936b40) at 
/home/dev/code/net-base/build/vpp/src/vnet/interface_output.c:542
#10 0x76408776 in dispatch_node (vm=0x766aa640 , 
node=0x7fffd71a3cc0, type=VLIB_NODE_TYPE_INTERNAL, 
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffd7936b40, 
last_time_stamp=64570511102008) at 
/home/dev/code/net-base/build/vpp/src/vlib/main.c:1208
#11 0x76408f37 in dispatch_pending_node (vm=0x766aa640 
, pending_frame_index=2, last_time_stamp=64570511102008) at 
/home/dev/code/net-base/build/vpp/src/vlib/main.c:1376
#12 0x7640ab9a in vlib_main_or_worker_loop (vm=0x766aa640 
, is_main=1) at 
/home/dev/code/net-base/build/vpp/src/vlib/main.c:1833
#13 0x7640b3df in vlib_main_loop (vm=0x766aa640 ) 
at /home/dev/code/net-base/build/vpp/src/vlib/main.c:1934
#14 0x7640c0a7 in vlib_main (vm=0x766aa640 , 
input=0x7fffd6c8afb0) at /home/dev/code/net-base/build/vpp/src/vlib/main.c:2151
#15 0x76471bdc in thread0 (arg=140737327572544) at 
/home/dev/code/net-base/build/vpp/src/vlib/unix/main.c:650
#16 0x75884ef4 in clib_calljmp () at 
/home/dev/code/net-base/build/vpp/src/vppinfra/longjmp.S:123
#17 0x7fffd070 in ?? ()
#18 0x76472152 in vlib_unix_main (argc=181, argv=0x700fa0) at 
/home/dev/code/net-base/build/vpp/src/vlib/unix/main.c:720
#19 0x00406dcc in main (argc=181, argv=0x700fa0) at 
/home/dev/code/net-base/build/vpp/src/vpp/vnet/main.c:280
(gdb) up 7
(gdb) l
194   else if (is_ip6)
195 {
196   int bogus;
197   ip6_header_t *ip6;
198
199   ip6 =
200 (ip6_header_t *) (vlib_buffer_get_current (b) + gho.l3_hdr_offset);
201   if (b->flags & VNET_BUFFER_F_OFFLOAD_TCP_CKSUM)
202 {
203   th->checksum = 0;
(gdb)
204   th->checksum =
205 ip6_tcp_udp_icmp_compute_checksum (vm, b, ip6, &bogus);
206 }
207   else if (b->flags & VNET_BUFFER_F_OFFLOAD_UDP_CKSUM)
208 {
209   uh->checksum = 0;
210   uh->checksum =
211 ip6_tcp_udp_icmp_compute_checksum (vm, b, ip6, &bogus);
212 }
213 }
(gdb)
214   b->flags &= ~VNET_BUFFER_F_OFFLOAD_TCP_CKSUM;
215   b->flags &= ~VNET_BUFFER_F_OFFLOAD_UDP_CKSUM;
216   b->flags &= ~VNET_BUFFER_F_OFFLOAD_IP_CKSUM;
217 }
218
219 static_always_inline uword
220 vnet_interface_output_node_inline (vlib_main_t * vm,
221vlib_node_runtime_t * node,
222vlib_frame_t * frame,
223vnet_main_t * vnm,
(gdb) print b[0]
$20 = {{cacheline0 = 0x103fdada00 "P", current_data = 80, current_length = 60, 
flags = 23265280, flow_id = 0, ref_count

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-15 Thread Raj Kumar
Hi Florin,
Yes,  [2] patch resolved the  IPv6/UDP receiver issue.
Thanks! for your help.

thanks,
-Raj

On Tue, Jan 14, 2020 at 9:35 PM Florin Coras  wrote:

> Hi Raj,
>
> First of all, with this [1], the vcl test app/client can establish a udpc
> connection. Note that udp will most probably lose packets, so large
> exchanges with those apps may not work.
>
> As for the second issue, does [2] solve it?
>
> Regards,
> Florin
>
> [1] https://gerrit.fd.io/r/c/vpp/+/24332
> [2] https://gerrit.fd.io/r/c/vpp/+/24334
>
> On Jan 14, 2020, at 12:59 PM, Raj Kumar  wrote:
>
> Hi Florin,
> Thanks! for the reply.
>
> I realized the issue with the non-connected case. For receiving
> datagrams, I was using recvfrom() with the MSG_DONTWAIT flag, and because
> of that the vppcom_session_recvfrom() api was failing. It expects either
> 0 or the MSG_PEEK flag:
>
>   if (flags == 0)
>     rv = vppcom_session_read (session_handle, buffer, buflen);
>   else if (flags & MSG_PEEK)
>     rv = vppcom_session_peek (session_handle, buffer, buflen);
>   else
>     {
>       VDBG (0, "Unsupport flags for recvfrom %d", flags);
>       return VPPCOM_EAFNOSUPPORT;
>     }
>  I changed the flag to 0 in recvfrom() , after that UDP rx is working fine
> but only for IPv4.
>
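> A minimal standalone version of the working receiver, as a sketch (plain
> POSIX sockets; port 8092 as in the traces below; illustrative only):
>
>   #include <arpa/inet.h>
>   #include <stdio.h>
>   #include <string.h>
>   #include <sys/socket.h>
>   #include <sys/types.h>
>   #include <unistd.h>
>
>   int
>   main (void)
>   {
>     int fd = socket (AF_INET, SOCK_DGRAM, 0);
>     struct sockaddr_in addr;
>     char buf[2048];
>     ssize_t n;
>
>     memset (&addr, 0, sizeof (addr));
>     addr.sin_family = AF_INET;
>     addr.sin_port = htons (8092);
>     addr.sin_addr.s_addr = htonl (INADDR_ANY);
>     if (fd < 0 || bind (fd, (struct sockaddr *) &addr, sizeof (addr)) < 0)
>       return 1;
>
>     /* flags = 0 is the supported path under VCL/LDP; MSG_DONTWAIT makes
>      * vppcom_session_recvfrom() fail, as the snippet above shows. */
>     n = recvfrom (fd, buf, sizeof (buf), 0, NULL, NULL);
>     if (n > 0)
>       printf ("received %zd bytes\n", n);
>     close (fd);
>     return 0;
>   }
>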
> I am facing a different issue with the IPv6/UDP receiver: I am getting a
> "no listener for dst port" error.
>
> Please let me know if I am doing something wrong.
> Here are the traces : -
>
> [root@orc01 testcode]# VCL_DEBUG=2 LDP_DEBUG=2
> LD_PRELOAD=/opt/vpp/build-root/install-vpp-native/vpp/lib/libvcl_ldpreload.so
>  VCL_CONFIG=/etc/vpp/vcl.cfg ./udp6_rx
> VCL<1164>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<1164>: allocated VCL heap = 0x7ff877439010, size 268435456 (0x1000)
> VCL<1164>: configured rx_fifo_size 400 (0x3d0900)
> VCL<1164>: configured tx_fifo_size 400 (0x3d0900)
> VCL<1164>: configured app_scope_local (1)
> VCL<1164>: configured app_scope_global (1)
> VCL<1164>: configured api-socket-name (/tmp/vpp-api.sock)
> VCL<1164>: completed parsing vppcom config!
> vppcom_connect_to_vpp:549: vcl<1164:0>: app (ldp-1164-app) is connected to
> VPP!
> vppcom_app_create:1067: vcl<1164:0>: sending session enable
> vppcom_app_create:1075: vcl<1164:0>: sending app attach
> vppcom_app_create:1084: vcl<1164:0>: app_name 'ldp-1164-app',
> my_client_index 0 (0x0)
> ldp_init:209: ldp<1164>: configured LDP debug level (2) from env var
> LDP_DEBUG!
> ldp_init:282: ldp<1164>: LDP initialization: done!
> ldp_constructor:2490: LDP<1164>: LDP constructor: done!
> socket:974: ldp<1164>: calling vls_create: proto 1 (UDP), is_nonblocking 0
> vppcom_session_create:1142: vcl<1164:0>: created session 0
> bind:1086: ldp<1164>: fd 32: calling vls_bind: vlsh 0, addr
> 0x7fff9a93efe0, len 28
> vppcom_session_bind:1280: vcl<1164:0>: session 0 handle 0: binding to
> local IPv6 address :: port 8092, proto UDP
> vppcom_session_listen:1312: vcl<1164:0>: session 0: sending vpp listen
> request...
> vcl_session_bound_handler:610: vcl<1164:0>: session 0 [0x1]: listen
> succeeded!
> bind:1102: ldp<1164>: fd 32: returning 0
>
> vpp# sh app server
> Connection  App  Wrk
> [0:0][CT:U] :::8092->:::0   ldp-1164-app[shm] 0
> [#0][U] :::8092->:::0   ldp-1164-app[shm] 0
>
> vpp# sh err
>CountNode  Reason
>  7   dpdk-input   no error
>   2606 ip6-udp-lookup no listener for dst port
>  8arp-reply   ARP replies sent
>  1  arp-disabled  ARP Disabled on this
> interface
> 13ip6-glean   neighbor solicitations
> sent
>   2606ip6-input   valid ip6 packets
>  4  ip6-local-hop-by-hop  Unknown protocol ip6
> local h-b-h packets dropped
>   2606 ip6-icmp-error destination unreachable
> response sent
> 40 ip6-icmp-input valid packets
>  1 ip6-icmp-input neighbor solicitations
> from source not on link
> 12 ip6-icmp-input neighbor solicitations
> for unknown targets
>  1 ip6-icmp-input neighbor advertisements
> sent
>  1 ip6-icmp-input neighbor advertisements
> received
> 40 ip6-icmp-input router advertisements
> sent
> 40 ip6-icmp-input router advertisements
> received
>  1 ip4-icmp-input echo replies sent
> 89   lldp-input   lldp packets received on
> disabled interfaces
>   1328llc-input   unknown llc ssap/dsap
> vpp#
>
> vpp# show trace
> --- Start of thread 0 vpp_main ---
> Packet 1

Re: [vpp-dev] vnet_gso_header_offset_parser error if vlib_buffer_t without ethernet_header_t

2020-01-15 Thread jiangxiaoming
I used demo.c to create an interface demo0 with ip 10.0.0.1/29, then used
vcl + iperf3 to connect to the server at ip 10.0.0.11, and the ASSERT crash
happened.

VPP start command: vpp -c as-startup-de192.conf
iperf3 start command:
sudo env \
LD_LIBRARY_PATH=/home/dev/code/net-base/dist/script/../lib: \
VCL_CONFIG=vcl-as-e18f7.conf \
LD_PRELOAD=/home/dev/code/net-base/dist/script/../lib/libvcl_ldpreload.so  \
iperf3 -4 -c 10.0.0.11 -p 1000 -t 30


Attachments: demo.c, as.cli, as-startup-de192.conf, vcl-as-e18f7.conf


[vpp-dev] VPP 20.01 RC1 milestone is complete - master is open, stable/2001 CLOSED for now

2020-01-15 Thread Andrew Yourtchenko
Hello all,

this is to announce the stable/2001 branch has been created, and
master is open for all of your commits.

The packages for 20.01rc1 are available at the usual location; however, I
would like to keep stable/2001 closed for now, to clean up the extra
packages in the repository. I will send you an email when it is ready for
your fixes.

thanks a lot!

Thanks to Ole Troan for uncovering a potential process optimization
opportunity to make future RC1 milestones smoother for everyone.

--a (Your Friendly 20.01 release manager)