Yes. Specifically, at the head the mroute has only an accepting interface; at the tail it has only an RPF-ID.
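In code terms, the acceptance decision in the mfib_forward_rpf node amounts to something like this simplified sketch (not the actual VPP source; the struct and parameter names are illustrative):

--
#include <stdint.h>

/* Simplified sketch only, not the literal VPP source. */
typedef struct
{
  uint32_t mfe_rpf_id;        /* non-zero only on tail-end entries */
} mfib_entry_sketch_t;

static int
mfib_rpf_accept (const mfib_entry_sketch_t * mfe,
                 int rx_itf_is_accepting, /* head end: did the packet arrive on
                                             an interface the entry marks Accept? */
                 uint32_t buf_rpf_id)     /* tail end: the value stamped into
                                             vnet_buffer(b0)->ip.rpf_id by the
                                             mpls-disposition node */
{
  if (rx_itf_is_accepting)
    return 1;                 /* head-end case: accepting interface matched */

  /* tail-end case: the RPF-ID stands in for the missing LSP interface */
  return (mfe->mfe_rpf_id != 0 && mfe->mfe_rpf_id == buf_rpf_id);
}
--

The "AA" (accept-all-interfaces) workaround mentioned further down the thread effectively makes the first branch pass regardless of the arriving interface.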

/neale

From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Monday, 10 July 2017 at 05:56
To: "Neale Ranns (nranns)" <nra...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Ok, so:
An mroute should be associated with both an RPF-ID and an accept interface. For head-end ingress IP multicast traffic, the accept interface is used as a check, and for tail-end MPLS-terminating traffic, the rpf_id associated with the mroute should match the rpf_id associated with the label.
Is this correct?
Thanks,
-nagp

On Mon, Jul 10, 2017 at 12:22 AM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:

Hi nagp,

This is the head end. So there should not be an RPF-ID associated with the mfib 
entry. Instead, there should be an accepting path for your phy interface, i.e;

   ip mroute table 1 add 1.2.3.4 229.1.1.2 via FortyGigabitEthernet-6/0/1.1 Accept
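
For completeness: the entry presumably also needs a forwarding path into the tunnel so that accepted packets are replicated onto it. Assuming the Forward flag mirrors the Accept form above (your mfib dump shows mpls-tunnel1 as a Forward interface), that would be something like:

   ip mroute table 1 add 1.2.3.4 229.1.1.2 via mpls-tunnel1 Forward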

regards,
neale

From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Sunday, 9 July 2017 at 18:54

To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale,
Please find the output of "show ip mfib table 1" below:

--
DBGvpp# show ip mfib table 1
ipv4-VRF:1, fib_index 1
(*, 0.0.0.0/0):  flags:D,
  Interfaces:
  RPF-ID:0
  multicast-ip4-chain
  [@0]: dpo-drop ip4
(1.2.3.4, 229.1.1.2/32):  flags:AA,
  Interfaces:
   mpls-tunnel1: Forward,
  RPF-ID:1
  multicast-ip4-chain
  [@1]: dpo-replicate: [index:20 buckets:1 to:[15:1260]]
    [0] [@2]: ipv4-mcast-midchain: DELETED:578217307
        stacked-on:
          [@3]: dpo-replicate: [index:17 buckets:1 to:[15:1260]]
            [0] [@1]: mpls-label:[3]:[201:255:0:neos][501:255:0:eos]
                [@1]: mpls via 20.1.1.2 FortyGigabitEthernet-6/0/0.1: 
000000000105000000040101810000148847
--
Ignore the "AA" flag for now, because that is my work around, apart from this, 
I have rpf-id set as 1 for this route. The ingress interface is a normal sub 
interface - which is FortyGigabitEthernet-6/0/1.1 (different from the above) 
with a vlan tag.
When a packet with the above (S, G) is received on FortyGigabitEthernet-6/0/1.1, I see that from mfib_forward_lookup packets unconditionally go to mfib_forward_rpf, and that is where I am seeing the rpf_id issue. Please let me know if I am missing something on FortyGigabitEthernet-6/0/1.1.

Thanks,
-nagp

On Sun, Jul 9, 2017 at 10:22 PM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:

Hi nagp,

If the packets arrive as IP on an IP interface, then this is not the use-case for RPF-ID.
Show me the mfib route and tell me the ingress interface.

/neale

From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Sunday, 9 July 2017 at 11:13

To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale,
In my case, this is what is happening: IP MC packets arrive on a normal interface and go from ip_input_inline to mfib_forward_lookup to mfib_forward_rpf, where they get dropped because of an rpf_id mismatch. The mpls disposition node does not even figure in the path. What could I be missing?
-nagp

On Sun, Jul 9, 2017 at 2:45 PM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:

Hi nagp,

The RPF-ID should be assigned to an LSP’s path, and to the mroute, at the tail, not at the head.
IP multicast forwarding requires RPF checks to prevent loops, so at the tail we need to RPF - this is done against the interface on which the packet arrives. But in this case the only interface is the physical. Now technically we could use that physical to RPF against, but the physical belongs to the core ‘underlay’ while we are talking about sources, receivers and routes in the VPN ‘overlay’, and to mix the two would be unfortunate. So instead, the scheme is to use the LSP on which the packet arrives as the ‘interface’ against which to RPF. But in this case the LSP has no associated SW interface. We have two choices then: 1) create a SW interface, which would not scale too well, or 2) pretend we have one and call it an RPF-ID. So at the tail, as packets egress the LSP, they are tagged as having ingressed with that RPF-ID, and that tag is checked in the subsequent mfib forwarding.
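
As a hedged sketch (the field name mdd_rpf_id matches the gdb output later in this thread, but the surrounding types are illustrative, not the literal source), the tagging amounts to:

--
#include <stdint.h>

/* Illustrative stand-ins for the disposition DPO and buffer metadata. */
typedef struct
{
  uint32_t mdd_rpf_id;      /* RPF-ID configured on the LSP's path */
} mpls_disp_dpo_sketch_t;

typedef struct
{
  uint32_t rpf_id;          /* stands in for vnet_buffer(b0)->ip.rpf_id */
} buf_meta_sketch_t;

/* As the packet egresses the LSP, the disposition node stamps its
 * RPF-ID into the buffer metadata; the subsequent mfib-forward-rpf
 * node compares that value against the mroute's RPF-ID. */
static void
mpls_disposition_tag (buf_meta_sketch_t * meta,
                      const mpls_disp_dpo_sketch_t * mdd)
{
  meta->rpf_id = mdd->mdd_rpf_id;
}
--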

The RPF-ID value is assigned and used only at the tail, so no head-to-tail 
signalling thereof is required.

Hth,
neale



From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Sunday, 9 July 2017 at 03:55

To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale,
Sure, I will push this and the other multicast options to the CLI shortly. Meanwhile, here is the output from gdb:

--
(gdb) p mpls_disp_dpo_pool[0]
$1 = {mdd_dpo = {dpoi_type = 27, dpoi_proto = DPO_PROTO_IP4, dpoi_next_node = 
1, dpoi_index = 4}, mdd_payload_proto = DPO_PROTO_IP4,
  mdd_rpf_id = 0, mdd_locks = 1}
--
I am still not able to understand what the rpf_id on the tail node has to do with the rpf_id assigned to an interface on the head node. :-\
Thanks,
-nagp

On Sat, Jul 8, 2017 at 11:20 PM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:

Hi nagp,

We need to find out the value of the RPF-ID that’s stored in the mpls-disposition DPO. That’s not displayed below. So, two options:

1) We can tell from the output that it’s index #0, so hook up gdb and do: ‘print mpls_disp_dpo_pool[0]’

2) Modify format_mpls_disp_dpo to also print mdd->mdd_rpf_id if it’s non-zero (a sketch follows below). It would be nice if this patch were up-streamed ☺
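
A sketch of what option 2 might look like as a patch against the VPP tree. The existing body is paraphrased from the "mpls-disposition:[0]:[ip4]" output seen elsewhere in this thread; mpls_disp_dpo_get() and format_dpo_proto are assumed helpers:

--
u8 *
format_mpls_disp_dpo (u8 * s, va_list * args)
{
  /* existing body, paraphrased: fetch the DPO and print its index
   * and payload protocol, e.g. "mpls-disposition:[0]:[ip4]" */
  index_t mddi = va_arg (*args, index_t);
  mpls_disp_dpo_t *mdd = mpls_disp_dpo_get (mddi);

  s = format (s, "mpls-disposition:[%d]:[%U]",
              mddi, format_dpo_proto, mdd->mdd_payload_proto);

  /* the suggested addition: show the RPF-ID when it is set */
  if (mdd->mdd_rpf_id != 0)
    s = format (s, " rpf-id:%d", mdd->mdd_rpf_id);

  return (s);
}
--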

Thanks
/neale



From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Saturday, 8 July 2017 at 17:55
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale,
Here is the output of "show mpls fib 501" on the tail node (FYI, encapsulation happens at the head node, where the rpf_id is set to 0):

--
501:eos/21 fib:0 index:61 locks:2
  src:API  refs:1 flags:attached,multicast,
    index:78 locks:2 flags:shared, uPRF-list:62 len:0 itfs:[]
      index:122 pl-index:78 ipv4 weight=1 deag:  oper-flags:resolved, 
cfg-flags:attached,rpf-id,
       [@0]: dst-address,multicast lookup in ipv4-VRF:1

 forwarding:   mpls-eos-chain
  [@0]: dpo-replicate: [index:16 buckets:1 to:[0:0]]
    [0] [@1]: mpls-disposition:[0]:[ip4]
        [@1]: dst-address,multicast lookup in ipv4-VRF:1
--
I would be glad to provide any other information.

Thanks,
-nagp

On Sat, Jul 8, 2017 at 6:55 PM, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:
Hi nagp,

vnet_buffer(b0)->ip.rpf_id is set in mpls_label_disposition_inline.
Can you show me the MPLS route at the tail again: ‘sh mpls fib 501’

/neale

From: Nagaprabhanjan Bellari <nagp.li...@gmail.com>
Date: Saturday, 8 July 2017 at 14:05

To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] A few questions regarding mcast fib

Hi Neale! Sorry for the late reply.
You are right, the DELETED flag does not seem to have any impact w.r.t. forwarding. Everything goes through fine, i.e. the multicast packets get encapsulated and sent across.
I am not able to see where vnet_buffer(b0)->ip.rpf_id is assigned. The rpf_id associated with the route does not match the incoming packet's vnet_buffer(b0)->ip.rpf_id (which is always zero), and because of that the packets are getting dropped. I have worked around this by setting the "accept all interfaces" flag on the route for now, but I am sure that is not the right way to do it.
Many thanks!
-nagp




