Hello VPP developers,

Thanks to Neale’s patch, the reset of the IPv6 FIB now works well. Unfortunately,
I still have doubts about the reset of the IPv4 FIB: some routes for unicast IPv4
addresses remain in the VRF after its reset (tested on an Ubuntu 16.04 VM with
vpp_lite). Could one of the VPP developers let me know whether the current
behaviour is correct, please?
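
For reference, this is roughly the check I do (a minimal sketch in the style of
the make test framework; the reset_fib parameters match the API trace quoted
further below, while the helper name and logging are just illustrative):

    # Sketch only: reset the IPv4 FIB of the given VRF and dump it again.
    # Assumes the test framework's vapi wrapper exposes reset_fib with the
    # same parameters as the binary API call ({'vrf_id': ..., 'is_ipv6': ...}).
    def reset_and_dump_ip4_vrf(self, vrf_id=1):
        # is_ipv6=0 selects the IPv4 FIB of the VRF
        self.vapi.reset_fib(vrf_id=vrf_id, is_ipv6=0)
        # Dump the IPv4 FIB over the CLI; I would expect the /32 routes
        # learned on the pg interfaces to be gone after the reset.
        fib_dump = self.vapi.cli("show ip fib")
        self.logger.info(fib_dump)
        return fib_dump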

Thanks,
Jan

IPv4 VRF1 before reset:
ipv4-VRF:1, fib_index 1, flow hash: src dst sport dport proto
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
    [0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:14 to:[0:0]]
    [0] [@0]: dpo-drop ip4
172.16.1.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:22 buckets:1 uRPF:20 to:[0:0]]
    [0] [@2]: dpo-receive: 172.16.1.1 on pg0
172.16.1.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:21 buckets:1 uRPF:19 to:[0:0]]
    [0] [@4]: ipv4-glean: pg0
172.16.1.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:23 buckets:1 uRPF:21 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.1.2 pg0: IP4: 02:fe:5e:14:60:d7 -> 02:01:00:00:ff:02
172.16.1.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:24 buckets:1 uRPF:22 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.1.3 pg0: IP4: 02:fe:5e:14:60:d7 -> 02:01:00:00:ff:03
172.16.1.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:25 buckets:1 uRPF:23 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.1.4 pg0: IP4: 02:fe:5e:14:60:d7 -> 02:01:00:00:ff:04
172.16.1.5/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:26 buckets:1 uRPF:24 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.1.5 pg0: IP4: 02:fe:5e:14:60:d7 -> 02:01:00:00:ff:05
172.16.1.6/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:27 buckets:1 uRPF:25 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.1.6 pg0: IP4: 02:fe:5e:14:60:d7 -> 02:01:00:00:ff:06
172.16.2.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:28 buckets:1 uRPF:26 to:[0:0]]
    [0] [@4]: ipv4-glean: pg1
172.16.2.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:29 buckets:1 uRPF:27 to:[0:0]]
    [0] [@2]: dpo-receive: 172.16.2.1 on pg1
172.16.2.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:30 buckets:1 uRPF:28 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.2.2 pg1: IP4: 02:fe:f6:5c:24:b7 -> 02:02:00:00:ff:02
172.16.2.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:31 buckets:1 uRPF:29 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.2.3 pg1: IP4: 02:fe:f6:5c:24:b7 -> 02:02:00:00:ff:03
172.16.2.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:32 buckets:1 uRPF:30 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.2.4 pg1: IP4: 02:fe:f6:5c:24:b7 -> 02:02:00:00:ff:04
172.16.2.5/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:33 buckets:1 uRPF:31 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.2.5 pg1: IP4: 02:fe:f6:5c:24:b7 -> 02:02:00:00:ff:05
172.16.2.6/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:34 buckets:1 uRPF:32 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.2.6 pg1: IP4: 02:fe:f6:5c:24:b7 -> 02:02:00:00:ff:06
172.16.3.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:35 buckets:1 uRPF:33 to:[0:0]]
    [0] [@4]: ipv4-glean: pg2
172.16.3.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:36 buckets:1 uRPF:34 to:[0:0]]
    [0] [@2]: dpo-receive: 172.16.3.1 on pg2
172.16.3.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:37 buckets:1 uRPF:35 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.3.2 pg2: IP4: 02:fe:98:fe:c8:77 -> 02:03:00:00:ff:02
172.16.3.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:38 buckets:1 uRPF:36 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.3.3 pg2: IP4: 02:fe:98:fe:c8:77 -> 02:03:00:00:ff:03
172.16.3.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:39 buckets:1 uRPF:37 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.3.4 pg2: IP4: 02:fe:98:fe:c8:77 -> 02:03:00:00:ff:04
172.16.3.5/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:40 buckets:1 uRPF:38 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.3.5 pg2: IP4: 02:fe:98:fe:c8:77 -> 02:03:00:00:ff:05
172.16.3.6/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:41 buckets:1 uRPF:39 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.3.6 pg2: IP4: 02:fe:98:fe:c8:77 -> 02:03:00:00:ff:06
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:18 buckets:1 uRPF:16 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:17 buckets:1 uRPF:15 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:19 buckets:1 uRPF:17 to:[0:0]]
    [0] [@0]: dpo-drop ip4

IPv4 VRF1 after reset:
ipv4-VRF:1, fib_index 1, flow hash: src dst sport dport proto
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
    [0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:14 to:[0:0]]
    [0] [@0]: dpo-drop ip4
172.16.1.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:22 buckets:1 uRPF:20 to:[0:0]]
    [0] [@2]: dpo-receive: 172.16.1.1 on pg0
172.16.1.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:21 buckets:1 uRPF:19 to:[0:0]]
    [0] [@0]: dpo-drop ip4
172.16.2.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:28 buckets:1 uRPF:26 to:[0:0]]
    [0] [@0]: dpo-drop ip4
172.16.2.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:29 buckets:1 uRPF:27 to:[0:0]]
    [0] [@2]: dpo-receive: 172.16.2.1 on pg1
172.16.3.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:35 buckets:1 uRPF:33 to:[0:0]]
    [0] [@0]: dpo-drop ip4
172.16.3.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:36 buckets:1 uRPF:34 to:[0:0]]
    [0] [@2]: dpo-receive: 172.16.3.1 on pg2
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:18 buckets:1 uRPF:16 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:17 buckets:1 uRPF:15 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:19 buckets:1 uRPF:17 to:[0:0]]
    [0] [@0]: dpo-drop ip4

From: Neale Ranns (nranns)
Sent: Monday, February 20, 2017 18:35
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>; vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

Thanks for the test code.
I have fixed the crash with:
  https://gerrit.fd.io/r/#/c/5438/

The tests still don’t pass, but now it’s because of those pesky IP6 ND packets.
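
For illustration only: the verify step could simply drop ND traffic from the
capture before checking payloads. A sketch using scapy’s standard ICMPv6 ND
layers (the helper name is hypothetical):

    from scapy.layers.inet6 import ICMPv6ND_NS, ICMPv6ND_NA, ICMPv6ND_RA

    def drop_ip6_nd(capture):
        # Keep only packets that are not IPv6 Neighbor Discovery traffic.
        return [p for p in capture
                if not (p.haslayer(ICMPv6ND_NS) or
                        p.haslayer(ICMPv6ND_NA) or
                        p.haslayer(ICMPv6ND_RA))]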

Regards,
neale

From: "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>
Date: Monday, 20 February 2017 at 11:51
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>, 
"vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Cc: "csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>" 
<csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>>
Subject: RE: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello Neale,

It’s in review: https://gerrit.fd.io/r/#/c/4433/

Affected tests are skipped there at the moment.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Monday, February 20, 2017 12:49
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>; vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

Can you please share the test code so that I can reproduce the problem and debug
it? Maybe push it as a draft to gerrit and add me as a reviewer.

Thanks,
neale

From: "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" 
<jgel...@cisco.com<mailto:jgel...@cisco.com>>
Date: Monday, 20 February 2017 at 09:41
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>, 
"vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Cc: "csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>" 
<csit-...@lists.fd.io<mailto:csit-...@lists.fd.io>>
Subject: RE: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello Neale,

I tested it with vpp_lite built from the master branch. I rebased to the current
head (my parent commit is now 90c55724b583434957cf83555a084770f2efdd7a), but the
issue is still the same.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Friday, February 17, 2017 17:19
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) <jgel...@cisco.com>; vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

What version of VPP are you testing?

Thanks,
neale

From: <csit-dev-boun...@lists.fd.io> on behalf of "Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)" <jgel...@cisco.com>
Date: Friday, 17 February 2017 at 14:48
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "csit-...@lists.fd.io" <csit-...@lists.fd.io>
Subject: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello VPP dev team,

Using the reset_fib API command to reset an IPv6 FIB leads to an incorrect entry
in the FIB and to a crash of VPP.
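
In test terms the failing sequence is roughly the following (sketch only; the
reset_fib parameters are copied from the API trace in the log below, the rest
is illustrative):

    # Reset the IPv6 FIB of VRF 1, then dump it and restart traffic.
    self.vapi.reset_fib(vrf_id=1, is_ipv6=1)
    self.logger.info(self.vapi.cli("show ip6 fib"))
    # The subsequent CLI call is where the reply times out and VPP dies.
    self.vapi.cli("packet-generator enable")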

Could somebody have a look at the Jira ticket https://jira.fd.io/browse/VPP-643,
please?

Thanks,
Jan

From make test log:

12:14:51,710 API: reset_fib ({'vrf_id': 1, 'is_ipv6': 1})
12:14:51,712 IPv6 VRF ID 1 reset
12:14:51,712 CLI: show ip6 fib
12:14:51,714 show ip6 fib
ipv6-VRF:0, fib_index 0, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[30:15175]]
    [0] [@0]: dpo-drop ip6
fd01:4::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:44 buckets:1 uRPF:5 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fd01:7::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:71 buckets:1 uRPF:5 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fd01:a::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:98 buckets:1 uRPF:5 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
    [0] [@2]: dpo-receive
ipv6-VRF:1, fib_index 1, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
    [0] [@0]: dpo-drop ip6
fd01:1::/64
  UNRESOLVED
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:14 to:[0:0]]
    [0] [@2]: dpo-receive

And later:

12:14:52,170 CLI: packet-generator enable
12:14:57,171 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  Multi-instance test 2 - delete 2 VRFs
        ) called, err is (<type 'exceptions.IOError'>, IOError(3, 'Waiting for reply timed out'), <traceback object at 0x2abab83db5a8>)
12:14:57,172 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 331, in run
    testMethod()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 365, 
in test_ip6_vrf_02
    self.run_verify_test()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 322, 
in run_verify_test
    self.pg_start()
  File "/home/vpp/Documents/vpp/test/framework.py", line 398, in pg_start
    cls.vapi.cli('packet-generator enable')
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 169, in cli
    r = self.papi.cli_inband(length=len(cli), cmd=cli)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 305, in 
<lambda>
    f = lambda **kwargs: (self._call_vpp(i, msgdef, multipart, **kwargs))
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 547, in 
_call_vpp
    r = self.results_wait(context)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 395, in 
results_wait
    raise IOError(3, 'Waiting for reply timed out')
IOError: [Errno 3] Waiting for reply timed out

12:14:57,172 --- tearDown() for TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  Multi-instance test 2 - delete 2 VRFs
        ) called ---
12:14:57,172 CLI: show trace
12:14:57,172 VPP subprocess died unexpectedly with returncode -6 [unknown]
12:14:57,172 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  Multi-instance test 2 - delete 2 VRFs
        ) called, err is (<class 'hook.VppDiedError'>, VppDiedError('VPP subprocess died unexpectedly with returncode -6 [unknown]',), <traceback object at 0x2abab8427098>)
12:14:57,173 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 360, in run
    self.tearDown()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 148, 
in tearDown
    super(TestIP6VrfMultiInst, self).tearDown()
  File "/home/vpp/Documents/vpp/test/framework.py", line 333, in tearDown
    self.logger.debug(self.vapi.cli("show trace"))
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 167, in cli
    self.hook.before_cli(cli)
  File "/home/vpp/Documents/vpp/test/hook.py", line 138, in before_cli
    self.poll_vpp()
  File "/home/vpp/Documents/vpp/test/hook.py", line 115, in poll_vpp
    raise VppDiedError(msg)
VppDiedError: VPP subprocess died unexpectedly with returncode -6 [unknown]