On Tue, Mar 16, 2021, at 14:45, Luis Tomas Bolivar wrote:
> Of course we are fully open to redesigning it if there is a better approach! 
> That was indeed the intention when linking to the current efforts: to figure 
> out whether this was a "valid" way of doing it, and how it can be 
> improved/redesigned. The main idea behind the current design was to avoid 
> modifications to core OVN and to minimize complexity, i.e., not having to 
> implement another kind of controller for managing the extra OF flows.
> 
> Regarding the metadata/localport, I have a couple of questions, mainly due to 
> me not knowing enough about OVN/localport:
> 1) Isn't the metadata managed through a namespace? At the end of the day 
> that is also visible from the hypervisor, as are the OVS bridges.

Indeed, that's true - you can reach the tenant's network from the ovnmeta- 
namespace (where the metadata proxy lives); however, from what I remember 
while testing, you can only establish connections to VMs running on the same 
hypervisor. Granted, this is less about "hardening" per se - any takeover of 
the hypervisor probably gives the attacker enough tools to own the entire 
overlay network anyway. Perhaps it just gives me a bad feeling that what 
should be an isolated, public-facing network can be reached from the 
hypervisor without going through the expected network path.
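
For what it's worth, this is roughly how I was probing it (a sketch; the 
namespace UUID and VM address are made up):

    # Sketch: probe a tenant VM from the hypervisor through the ovnmeta-
    # namespace where the metadata proxy lives. The namespace UUID and
    # the VM address below are made up.
    import subprocess

    ns = "ovnmeta-0f1d2c3b-4a5e-6789-abcd-ef0123456789"  # hypothetical
    vm_ip = "10.0.0.5"                                   # hypothetical

    # Equivalent to: ip netns exec <ns> ping -c1 -W1 <vm_ip>
    result = subprocess.run(
        ["ip", "netns", "exec", ns, "ping", "-c1", "-W1", vm_ip],
        capture_output=True, text=True,
    )
    # In my testing this only succeeded for VMs on the same hypervisor.
    print("reachable" if result.returncode == 0 else "unreachable")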

> 2) Another difference is that we are using BGP ECMP and therefore not 
> associating any NIC/bond with br-ex, which is why we require some 
> rules/routes to redirect the traffic to br-ex.
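> 
> To illustrate, the redirection is roughly the following (a simplified 
> sketch; the address is made up, and it assumes a "br-ex" entry already 
> exists in /etc/iproute2/rt_tables):
> 
>     # Sketch: steer traffic for an exposed VM IP into a dedicated
>     # routing table whose routes point at br-ex (address made up).
>     import subprocess
> 
>     vm_ip = "172.24.4.10"  # hypothetical exposed VM/FIP address
> 
>     # ip rule add to <vm_ip> lookup br-ex
>     subprocess.run(["ip", "rule", "add", "to", vm_ip, "lookup", "br-ex"],
>                    check=True)
>     # ip route add <vm_ip> dev br-ex table br-ex
>     subprocess.run(["ip", "route", "add", vm_ip, "dev", "br-ex",
>                     "table", "br-ex"], check=True)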

That's an interesting problem - I wonder if that can even be done in OVS 
today (for example with the multipath action) and how OVS would handle 
incoming traffic (what flows are needed to handle that properly). I guess 
someone with OVS internals knowledge would have to chime in on this one.
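
To make the hand-waving a bit more concrete, this is the kind of thing I had 
in mind for the egress side (entirely untested; the OpenFlow port numbers 
are made up):

    # Untested sketch: hash egress IP traffic across two uplinks with the
    # OVS multipath action. OpenFlow ports 1 and 2 are made up.
    import subprocess

    flows = [
        # Pick one of two links based on a symmetric L3/L4 hash, store
        # the choice in reg0, then continue processing in table 1.
        "table=0,priority=100,ip,"
        "actions=multipath(symmetric_l3l4,64,hrw,2,0,reg0[0..1]),"
        "resubmit(,1)",
        "table=1,reg0=0,actions=output:1",
        "table=1,reg0=1,actions=output:2",
    ]
    for flow in flows:
        subprocess.run(["ovs-ofctl", "add-flow", "br-ex", flow], check=True)

The incoming direction is the part I really have no good answer for.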

> Thanks for your input! Really appreciated!
> 
> Cheers,
> Luis
> 
> On Tue, Mar 16, 2021 at 2:22 PM Krzysztof Klimonda 
> <kklimo...@syntaxhighlighted.com> wrote:
>> __
>> Would it make more sense to reverse this part of the design? I was thinking 
>> of having each chassis use its own IPv4/IPv6 address as the next hop in 
>> announcements, with OF flows installed to direct BGP control packets over to 
>> the host system, similar to how localport is used today for Neutron's 
>> metadata service (although I'll admit that I haven't looked into how this 
>> integrates with DPDK and offload).
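>> 
>> Something along these lines is what I have in mind for getting the BGP 
>> control traffic to the host (a rough, untested sketch; a real version 
>> would also need to match the chassis IP and handle ARP/ND):
>> 
>>     # Rough sketch: redirect BGP (TCP/179) seen on the provider bridge
>>     # to the host side (the LOCAL port), where the BGP daemon listens.
>>     # Bridge name and priorities are illustrative.
>>     import subprocess
>> 
>>     flows = [
>>         "priority=100,tcp,tp_dst=179,actions=LOCAL",
>>         "priority=100,tcp,tp_src=179,actions=LOCAL",
>>     ]
>>     for flow in flows:
>>         subprocess.run(["ovs-ofctl", "add-flow", "br-ex", flow],
>>                        check=True)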
>> 
>> This way we can also simplify the host's networking configuration, as the 
>> extra routing rules and ARP entries are no longer needed (I think it would 
>> be preferable, from a security perspective, for the hypervisor not to have 
>> direct access to overlay networks, which seems to be the case when you use 
>> rules like that).
>> 
>> --
>>   Krzysztof Klimonda
>>   kklimo...@syntaxhighlighted.com
>> 
>> 
>> 
>> On Tue, Mar 16, 2021, at 13:56, Luis Tomas Bolivar wrote:
>>> Hi Krzysztof,
>>> 
>>> On Tue, Mar 16, 2021 at 12:54 PM Krzysztof Klimonda 
>>> <kklimo...@syntaxhighlighted.com> wrote:
>>>> __
>>>> Hi Luis,
>>>> 
>>>> I haven't yet had time to give it a try in our lab, but after reading your 
>>>> blog posts I have a quick question: how does it work when either DPDK or 
>>>> NIC offload is used for OVN traffic? It seems you are (de-)encapsulating 
>>>> traffic on chassis nodes by routing it through the kernel - is this the 
>>>> current design or just an artifact of the PoC code?
>>> 
>>> You are correct, that is a limitation: since we are using kernel routing 
>>> for N/S traffic, DPDK/NIC offloading cannot be used. That said, E/W 
>>> traffic still uses the OVN overlay and Geneve tunnels.
>>> 
>>> 
>>>> 
>>>> 
>>>> --
>>>>   Krzysztof Klimonda
>>>>   kklimo...@syntaxhighlighted.com
>>>> 
>>>> 
>>>> 
>>>> On Mon, Mar 15, 2021, at 11:29, Luis Tomas Bolivar wrote:
>>>>> Hi Sergey, all,
>>>>> 
>>>>> In fact we are working on a solution based on FRR where a (Python) agent 
>>>>> reads from the OVN SB DB (port binding events) and triggers FRR so that 
>>>>> the needed routes get advertised. It leverages kernel networking to 
>>>>> redirect the traffic to the OVN overlay, and therefore does not require 
>>>>> any modifications to OVN itself (at least for now). The PoC code can be 
>>>>> found here: https://github.com/luis5tb/bgp-agent
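>>>>> 
>>>>> In essence, when a port gets bound to the local chassis the agent does 
>>>>> little more than the following, and FRR (configured to redistribute 
>>>>> those routes) starts advertising the prefix (a simplified sketch; the 
>>>>> device name and address are illustrative):
>>>>> 
>>>>>     # Simplified sketch: expose/withdraw a VM or FIP address by
>>>>>     # adding/removing it on a dummy device that FRR redistributes.
>>>>>     # Assumes the dummy device (here "bgp-nic") already exists.
>>>>>     import subprocess
>>>>> 
>>>>>     def expose_ip(ip, dev="bgp-nic"):
>>>>>         # ip address add <ip>/32 dev <dev>
>>>>>         subprocess.run(["ip", "address", "add", ip + "/32",
>>>>>                         "dev", dev], check=True)
>>>>> 
>>>>>     def withdraw_ip(ip, dev="bgp-nic"):
>>>>>         subprocess.run(["ip", "address", "del", ip + "/32",
>>>>>                         "dev", dev], check=True)
>>>>> 
>>>>>     expose_ip("172.24.4.10")  # hypothetical floating IP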
>>>>> 
>>>>> And there is a series of blog posts related to how to use it on OpenStack 
>>>>> and how it works:
>>>>> - OVN-BGP agent introduction: 
>>>>> https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/
>>>>> - How to set it up in a DevStack environment: 
>>>>> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/
>>>>> - In-depth traffic flow inspection: 
>>>>> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/
>>>>> 
>>>>> We are thinking that possible next steps, if the community is interested, 
>>>>> could be adding multitenancy support (e.g., through EVPN), as well as 
>>>>> defining the best API for deciding what to expose through BGP. It would 
>>>>> be great to get some feedback on it!
>>>>> 
>>>>> Cheers,
>>>>> Luis
>>>>> 
>>>>> On Fri, Mar 12, 2021 at 8:09 PM Dan Sneddon <dsned...@redhat.com> wrote:
>>>>>> 
>>>>>> 
>>>>>> On 3/10/21 2:09 PM, Sergey Chekanov wrote:
>>>>>> > I am looking at GoBGP (a BGP implementation in Go) + go-openvswitch to 
>>>>>> > communicate with the OVN Northbound Database right now, but I'm not 
>>>>>> > sure yet. FRR, I think, will be too heavy for it...
>>>>>> > 
>>>>>> > On 10.03.2021 05:05, Raymond Burkholder wrote:
>>>>>> >> You could look at it from a Free Range Routing perspective. I've used 
>>>>>> >> it in combination with OVS for layer-2 and layer-3 handling.
>>>>>> >>
>>>>>> >> On 3/8/21 3:40 AM, Sergey Chekanov wrote:
>>>>>> >>> Hello!
>>>>>> >>>
>>>>>> >>> Are there any plans to support BGP EVPN for extending virtual 
>>>>>> >>> networks to ToR hardware switches?
>>>>>> >>> Or why is it a bad idea?
>>>>>> >>>
>>>>>> 
>>>>>> FRR is delivered as a set of daemons which perform specific functions. 
>>>>>> If you only need BGP functionality, you can just run bgpd. The zebra 
>>>>>> daemon handles route exchange between bgpd and the kernel. The vtysh 
>>>>>> tool provides a command-line interface for interacting with the FRR 
>>>>>> daemons. There is also a bidirectional forwarding detection (BFD) 
>>>>>> daemon that can be run to detect unidirectional forwarding failures. 
>>>>>> Other daemons provide other services and protocols. For this reason, I 
>>>>>> felt that it was lightweight enough to just run a few daemons in a 
>>>>>> container.
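>>>>>> 
>>>>>> For example, with only bgpd (and zebra) enabled in /etc/frr/daemons, 
>>>>>> driving it from a script is just a matter of shelling out to vtysh (a 
>>>>>> sketch; the ASN and neighbor address are made up):
>>>>>> 
>>>>>>     # Sketch: configure a BGP peering through repeated "vtysh -c"
>>>>>>     # commands. The ASN and neighbor address below are made up.
>>>>>>     import subprocess
>>>>>> 
>>>>>>     cmds = [
>>>>>>         "configure terminal",
>>>>>>         "router bgp 64999",
>>>>>>         "neighbor 192.0.2.1 remote-as 64999",
>>>>>>     ]
>>>>>>     args = ["vtysh"]
>>>>>>     for c in cmds:
>>>>>>         args += ["-c", c]
>>>>>>     subprocess.run(args, check=True)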
>>>>>> 
>>>>>> A secondary concern for my use case was support on Red Hat Enterprise 
>>>>>> Linux, which will be adding FRR to the supported packages shortly.
>>>>>> 
>>>>>> I'm curious to hear any input that anyone has on FRR compared with GoBGP 
>>>>>> and other daemons. Please feel free to respond on-list if it involves 
>>>>>> OVS, or off-list if not. Thanks.
>>>>>> 
>>>>>> -- 
>>>>>> Dan Sneddon         |  Senior Principal Software Engineer
>>>>>> dsned...@redhat.com |  redhat.com/cloud
>>>>>> dsneddon:irc        |  @dxs:twitter
>>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> LUIS TOMÁS BOLÍVAR
>>>>> Principal Software Engineer
>>>>> Red Hat
>>>>> Madrid, Spain
>>>>> ltoma...@redhat.com   
>>>>>  
>>>> 
>>> 
>>> 
>>> -- 
>>> LUIS TOMÁS BOLÍVAR
>>> Principal Software Engineer
>>> Red Hat
>>> Madrid, Spain
>>> ltoma...@redhat.com   
>>>  
>> 
> 
> 
> -- 
> LUIS TOMÁS BOLÍVAR
> Principal Software Engineer
> Red Hat
> Madrid, Spain
> ltoma...@redhat.com   
>  
_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
