Hi guys,

In the mfib, one ingress interface corresponds to multiple egress interfaces. How can I configure this?
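
For example, to get one accepting (ingress) interface and several forwarding (egress) interfaces, I would expect something like the following to work (only a sketch: host-eth3 and host-eth4 are placeholder interface names, and I am assuming the per-path Accept/Forward flags of the 'ip mroute' CLI):

DBGvpp# ip mroute add 224.0.0.5/32 via host-eth2 Accept
DBGvpp# ip mroute add 224.0.0.5/32 via host-eth3 Forward
DBGvpp# ip mroute add 224.0.0.5/32 via host-eth4 Forward

Is this the right way to do it?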
In addition, a multicast packet's next node will be 'next0 = 
IP4_INPUT_NEXT_LOOKUP_MULTICAST', and the lookup is done in the mfib:

if (PREDICT_FALSE (ip4_address_is_multicast (&ip0->dst_address)))
  {
    /* Matches all of 224.0.0.0/4, so packets to 224.0.0.0/8
     * take the multicast arc as well. */
    arc0 = lm->mcast_feature_arc_index;
    next0 = IP4_INPUT_NEXT_LOOKUP_MULTICAST;
  }
else
  {
    arc0 = lm->ucast_feature_arc_index;
    next0 = IP4_INPUT_NEXT_LOOKUP;
    if (PREDICT_FALSE (ip0->ttl < 1))
      error0 = IP4_ERROR_TIME_EXPIRED;
  }
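
To confirm that these packets really take the multicast arc, they can be traced (here I assume the host-eth interfaces are fed by the af-packet-input node; substitute your actual input node):

DBGvpp# trace add af-packet-input 10
DBGvpp# show trace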

In the fib I can configure a route for 224.0.0.0/8. In the mfib I can also 
configure a route for 224.0.0.5/32.
When I configure the 224.0.0.5/32 route in the mfib, multicast packets whose 
destinations start with 224 are looked up in the mfib, but they are dropped. 
As far as I understand, the 224.0.0.0/8 drop entry in the unicast fib (shown 
below) is not even consulted for them, because these packets already took the 
multicast arc in ip4-input.
Is that a mistake?
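
To double-check the entry, the mfib state can be dumped (I would expect this to list the 224.0.0.5/32 entry together with its Accept/Forward interfaces):

DBGvpp# show ip mfib
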
DBGvpp# show ip fib 
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto 
0.0.0.0/0 
unicast-ip4-chain 
[@0]: dpo-load-balance: [index:0 buckets:1 uRPF:0 to:[0:0]] 
[0] [@0]: dpo-drop ip4 
0.0.0.0/32 
unicast-ip4-chain 
[@0]: dpo-load-balance: [index:1 buckets:1 uRPF:1 to:[0:0]] 
[0] [@0]: dpo-drop ip4 
1.1.1.0/24 
unicast-ip4-chain 
[@0]: dpo-load-balance: [index:12 buckets:1 uRPF:11 to:[0:0]] 
[0] [@4]: ipv4-glean: host-eth2 
1.1.1.1/32 
unicast-ip4-chain 
[@0]: dpo-load-balance: [index:13 buckets:1 uRPF:12 to:[0:0]] 
[0] [@2]: dpo-receive: 1.1.1.1 on host-eth2 
224.0.0.0/8 
unicast-ip4-chain 
[@0]: dpo-load-balance: [index:3 buckets:1 uRPF:3 to:[0:0]] 
[0] [@0]: dpo-drop ip4 
240.0.0.0/8 
unicast-ip4-chain 
[@0]: dpo-load-balance: [index:2 buckets:1 uRPF:2 to:[0:0]] 
[0] [@0]: dpo-drop ip4 
255.255.255.255/32 
unicast-ip4-chain 
[@0]: dpo-load-balance: [index:4 buckets:1 uRPF:4 to:[0:0]] 
[0] [@0]: dpo-drop ip4

In my opinion, the multicast messages whose destinations start with 224 (for 
example 224.0.0.5, used by OSPF) are usually protocol messages, and should not 
get the same processing as the multicast messages starting with 225 through 239.
Please help me point out the problem with my idea.

Thanks,
xyxue

