Was it (NoNeg) that it was showing? ;-) If it was, that means the router initiated the VPNv4 address-family negotiation with its peer, but the peer was not cooperative, i.e. it was not configured for VPNv4. What you most likely didn't do was activate the VPNv4 session on R9's peer. In our topologies, all routers can be used as PEs - when you configure them! :-)
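For reference, activating the VPNv4 address family on the peer looks something like this - a minimal sketch, where the AS number and neighbor address are made up for illustration:

```
router bgp 100                               ! AS number is hypothetical
 no bgp default ipv4-unicast
 neighbor 10.9.9.9 remote-as 100             ! neighbor address is hypothetical
 address-family ipv4
  neighbor 10.9.9.9 activate
 address-family vpnv4
  neighbor 10.9.9.9 activate                 ! without this, capability exchange fails -> (NoNeg)
  neighbor 10.9.9.9 send-community extended
```

If only one side activates the VPNv4 address family, the capability exchange fails and the summary shows (NoNeg) for that neighbor.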
In hub-and-spoke environments, when the source and the client are behind spokes, you *may* want to make sure that the hub is the PIM DR. In some instances (and it's likely you hit one of these) your multicast may not work otherwise. Also, note that when you ping a multicast group from a Cisco router, by default the traffic will be sourced from _all_ interfaces.

--
Marko Milivojevic - CCIE #18427 (SP R&S)
Senior CCIE Instructor - IPexpert

On Tue, Mar 27, 2012 at 16:19, George Leslie <[email protected]> wrote:

> Hi IPexperters,
>
> I was doing a lab on Proctor Labs kit, making up my own MPLS VPN
> scenarios. I wish I'd paid more attention to the exact details now, but I
> think I was using R9 as a PE router. (I cannot swear it was R9, but the
> detail is not important for my query.) Anyway, whatever router it was, it
> happily accepted all the usual BGP VPNv4 unicast address-family commands
> and the LDP commands, but when you looked at the "show vpnv4.....summary"
> output, it showed some weird state in the State/PfxRcd column. It was
> (No<something>). I Googled it, and it said this was down to a problem in
> capabilities exchange between the routers. Both were already running IPv4
> BGP between them, using the "no bgp default ipv4-unicast" command. My
> query is very simple: on Proctor Labs kit, which routers can be used as
> PE routers and which should be avoided as PEs? I think R2, R4, R5 and R6
> are safe bets. Any others? I'm sure Marko will know this off the top of
> his head.
>
> Another odd one I came across, this time on R5. I had sparse-dense mode
> set up on a hub-and-spoke Frame Relay cloud, with R2 as hub and NBMA mode
> configured on the multipoint subinterface. R2 was configured as RP for
> the group in question, and R5 had the proper RP mapping, learned via
> Auto-RP. The other spoke on the frame cloud, R4, had its Ethernet port
> join a group for which it knew R2 as RP.
>
> Basically, traffic should have flowed in and out of the hub, from one
> spoke to another. When I did a "ping <group> source <lan interface>" on
> R5, it would not source packets. Absolutely nothing was received at the
> hub when I did a "debug ip mpacket". However, when I looked on the hub,
> there was multicast state for the group, sourced not from the LAN
> interface but from the Frame Relay IP address on R5. R4 received no
> traffic whatsoever, but it too had correct RP mappings. Frame maps were
> set up for broadcast and OSPF was running across the frame cloud. The SPT
> threshold was set high to stop any RPT-to-SPT switchover.
>
> Is there any restriction on running a multicast ping sourced from a given
> interface? The source interface was known to the OSPF domain. Sorry I
> don't have configs for this lot; just wondered if anyone had come across
> anything similar?
>
> Regards,
> George.
>
> _______________________________________________
> For more information regarding industry leading CCIE Lab training, please
> visit www.ipexpert.com
>
> Are you a CCNP or CCIE and looking for a job? Check out
> www.PlatinumPlacement.com
>
> http://onlinestudylist.com/mailman/listinfo/ccie_rs
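P.S. To make the DR point concrete, forcing the hub to win the DR election on the NBMA segment would look something like this - a sketch only, with the subinterface name and priority value made up for illustration:

```
! On the hub's multipoint subinterface -- interface name is hypothetical
interface Serial0/0.245 multipoint
 ip pim sparse-dense-mode
 ip pim nbma-mode
 ip pim dr-priority 10    ! higher than the spokes' default of 1, so the hub becomes DR
```

The spokes keep the default priority, so the hub wins the election regardless of interface IP addresses.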
