Hi,

We need to scale DNAT address learning and have found GARP (gratuitous
ARP) to be a continual problem for us.

We have some hypervisors that can host 100+ Logical Switch Ports bound
to SR-IOV functions.  Under some conditions (such as large numbers of
VIFs being inserted into the br-int bridge), the resulting GARP burst
can exceed the CoPP (Control Plane Policing) limit on the top-of-rack
switch, which then drops the frames.

I am thinking about moving to a BGP-based approach that triggers the
creation of an EVPN Type 5 route in our data center fabric, using a
local GoBGP agent and injecting the address over a gRPC call.
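As a sketch of what I mean, route injection into the local GoBGP agent
could also be driven through the gobgp CLI (which itself just wraps the
gRPC API).  The RD/RT values, VNI label and next hop below are lab
placeholders, not settled values:

```shell
# Announce the DNAT address as an EVPN Type 5 (IP prefix) route.
# RD 65000:120, RT 65000:120 and label 10120 are lab placeholders.
gobgp global rib add -a evpn prefix 91.106.221.181/32 \
    etag 0 rd 65000:120 rt 65000:120 label 10120 \
    nexthop 192.168.55.0

# Or a plain IPv4 /32, if the TOR originates the Type 5 route itself:
gobgp global rib add -a ipv4 91.106.221.181/32 nexthop 192.168.55.0
```

The production path would use the gRPC AddPath API directly rather
than shelling out, but the CLI shows the intended route shape.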

Today our topology is simple:
We use a localnet port with a VLAN 120 tag in a provider bridge called
br-provider, which extends L2 from the hypervisor to the TOR.  The TOR
has an L3 interface for VLAN 120 with an anycast gateway, 172.16.0.1,
and all logical routers have a 0/0 route pointing to 172.16.0.1.  When
the OVN controller sends a GARP, the EVPN ARP snooping generates a
Type 2 EVPN route and sends a /32 route for the DNAT address to our
fabric / WAN border.  Once the GARP is sent and address learning
completes, everything works well; it's just that sometimes we either
don't see a GARP from the OVN controller or we hit the CoPP limit.
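For context, the current localnet wiring looks roughly like this (the
names provider-ls, ln-provider and physnet-provider are placeholders
for whatever our deployment actually uses):

```shell
# Map the physical network name to br-provider on each hypervisor.
ovs-vsctl set open_vswitch . \
    external-ids:ovn-bridge-mappings=physnet-provider:br-provider

# Localnet port on the provider logical switch, tagged VLAN 120.
ovn-nbctl lsp-add provider-ls ln-provider
ovn-nbctl lsp-set-type ln-provider localnet
ovn-nbctl lsp-set-addresses ln-provider unknown
ovn-nbctl lsp-set-options ln-provider network_name=physnet-provider
ovn-nbctl set logical_switch_port ln-provider tag=120
```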

What I would like to do is as follows:

1. Add a new BGP peering between the hypervisor and the TOR.  This
will be on a new VIF called bgp-peering-if, tagged with VLAN 5; my lab
case will have a p2p peering with the TOR on 192.168.55.0/31.

2. Terminate the L3 interface for VLAN 120 on each hypervisor, i.e.
add an interface called ovn-localnet-if to the provider bridge; this
interface will have 172.16.0.1 (the gateway for VLAN 120).
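On the hypervisor side, I picture the two interfaces above as OVS
internal ports on br-provider, something like the following (the /24
on the gateway address is an assumption; the rest matches my lab
plan):

```shell
# BGP peering interface: OVS internal port on br-provider, VLAN 5.
ovs-vsctl add-port br-provider bgp-peering-if tag=5 \
    -- set interface bgp-peering-if type=internal
ip link set bgp-peering-if up
ip addr add 192.168.55.0/31 dev bgp-peering-if

# Anycast gateway for VLAN 120, moved onto the hypervisor.
ovs-vsctl add-port br-provider ovn-localnet-if tag=120 \
    -- set interface ovn-localnet-if type=internal
ip link set ovn-localnet-if up
ip addr add 172.16.0.1/24 dev ovn-localnet-if
```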

Here is where I'm struggling.

If traffic moves from a customer overlay into the provider bridge,
should it be on a localnet port, a localport, or an external port?
I don't have L3 routing on the br-provider bridge; if I add it, then I
need to make a routing decision to send traffic to 0/0 via the BGP
next hop.  If I do this, will the offload fail and the kernel route
the packet?  (I'm using a CX6 card with HW offload enabled.)

Return traffic: how will traffic returning from the physical network
be handled by the br-provider bridge?  In the packet capture below I
show traffic routing from the internet to my lab host 91.106.221.181;
you can see the traffic come in on the physical interface with VLAN
tag 5, my BGP peering interface:

16:58:38.078660 9c:05:91:22:e2:fd > 72:58:a5:8f:1b:65, ethertype
802.1Q (0x8100), length 102: vlan 5, p 0, ethertype IPv4 (0x0800),
(tos 0x0, ttl 45, id 49114, offset 0, flags [none], proto ICMP (1),
length 84)
    50.175.168.194 > 91.106.221.181: ICMP echo request, id 30241, seq
18, length 64

Somehow I need the provider bridge to send this traffic into a
customer overlay.

Can anyone comment on this approach?  If it seems viable, should I be
writing OpenFlow rules on the provider bridge to handle this scenario?

Thanks

Gav
_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss