Ryan, I have been working with the L3 sub-team in this direction. Progress
has been slow because of other priorities, but we have made some. I have
written a blueprint detailing some changes needed in the code to enable the
flexibility to one day run floating IPs on an L3-routed network [1]. Jaime
has been working on one that integrates Ryu (or other BGP speakers) with
Neutron [2]. DVR was also a step in this direction.
I'd like to invite you to the L3 weekly meeting [3] to discuss further. I'm
very happy to see interest in this area and to have someone new to
collaborate with.

Carl

[1] https://review.openstack.org/#/c/88619/
[2] https://review.openstack.org/#/c/125401/
[3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

On Dec 3, 2014 4:04 PM, "Ryan Clevenger" <ryan.cleven...@rackspace.com>
wrote:
> Hi,
>
> At Rackspace, we have a need to create a higher-level networking service,
> primarily for the purpose of creating a Floating IP solution in our
> environment. The current solutions for Floating IPs, being tied to plugin
> implementations, do not meet our needs at scale for the following reasons:
>
> 1. Limited endpoint H/A, mainly targeting failover only and not
> multi-active endpoints.
> 2. Lack of noisy-neighbor and DDoS mitigation.
> 3. IP fragmentation (with cells, public connectivity is terminated inside
> each cell, leading to fragmentation and IP stranding when cell CPU/memory
> use doesn't line up with allocated IP blocks; abstracting public
> connectivity away from Nova installations allows for much more efficient
> use of those precious IPv4 blocks).
> 4. Diversity in transit (multiple encapsulation and transit types on a
> per-floating-IP basis).
>
> We realize that network infrastructures are often unique and such a
> solution would likely diverge from provider to provider. However, we
> would love to collaborate with the community to see if such a project
> could be built that would meet the needs of providers at scale. We
> believe that, at its core, this solution would boil down to terminating
> north<->south traffic temporarily at a massively horizontally scalable
> centralized core and then encapsulating traffic east<->west to a specific
> host based on the association set up via the current L3 router
> extension's 'floatingips' resource.
>
> Our current idea involves using Open vSwitch for header rewriting and
> tunnel encapsulation, combined with a set of Ryu applications for
> management:
>
> https://i.imgur.com/bivSdcC.png
>
> The Ryu application uses Ryu's BGP support to announce individual
> floating IPs (/32s or /128s) up to the Public Routing layer, where they
> are summarized and announced to the rest of the datacenter. If a
> particular floating IP is experiencing unusually large traffic (DDoS,
> Slashdot effect, etc.), the Ryu application could change the
> announcements up to the Public layer to shift that traffic to dedicated
> hosts set up for that purpose. It also announces a single /32 "Tunnel
> Endpoint" IP downstream to the TunnelNet Routing system, which provides
> transit to and from the cells and their hypervisors. Since traffic from
> either direction can then end up on any of the FLIP hosts, a simple flow
> table that modifies the MAC and IP in either the SRC or DST fields
> (depending on traffic direction) allows the system to be completely
> stateless. We have proven this out (with static routing and flows) to
> work reliably in a small lab setup.
>
> On the hypervisor side, we currently plumb networks into separate OVS
> bridges. Another Ryu application would control the bridge that handles
> overlay networking, selectively diverting traffic destined for the
> default gateway up to the FLIP NAT systems (taking into account any
> configured logical routing) while letting local L2 traffic pass out into
> the existing overlay fabric undisturbed.
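As a concrete starting point for the announcement piece, here is a minimal
sketch using Ryu's BGPSpeaker API. The ASN, router ID, peer address, and
prefix below are placeholders, not anything from your design:

    # Minimal sketch: announce/withdraw per-FLIP host routes from a FLIP
    # host using Ryu's BGP speaker. All numbers/addresses are placeholders.
    from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

    speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.10')

    # Peer with whatever sits at the Public Routing layer.
    speaker.neighbor_add(address='192.0.2.1', remote_as=64512)

    # Announce one floating IP as a /32 host route; the Public layer
    # summarizes these toward the rest of the datacenter.
    speaker.prefix_add(prefix='203.0.113.5/32', next_hop='192.0.2.10')

    # Under DDoS/overload, withdraw here and re-announce the same /32
    # from dedicated mitigation hosts instead.
    speaker.prefix_del(prefix='203.0.113.5/32')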
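The stateless rewrite you describe is essentially one flow per direction. A
rough sketch as a Ryu OpenFlow 1.3 app follows; the MACs, IPs, and port
numbers are made up, and in practice the mapping would be driven by the
'floatingips' associations:

    # Sketch: stateless FLIP NAT as two OpenFlow 1.3 rewrite flows.
    # Addresses, MACs, and port numbers are illustrative only.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    FLIP = '203.0.113.5'          # public floating IP
    FIXED = '10.1.0.5'            # VM's fixed IP behind the overlay
    VM_MAC = 'fa:16:3e:00:00:01'  # placeholder VM MAC
    GW_MAC = 'fa:16:3e:ff:ff:01'  # placeholder upstream gateway MAC
    NORTH_PORT, SOUTH_PORT = 1, 2

    class FlipRewrite(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def install_flows(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser

            def add_flow(match, actions):
                inst = [parser.OFPInstructionActions(
                    ofp.OFPIT_APPLY_ACTIONS, actions)]
                dp.send_msg(parser.OFPFlowMod(
                    datapath=dp, priority=100, match=match,
                    instructions=inst))

            # North -> south: rewrite DST to the VM, send down the tunnel.
            add_flow(
                parser.OFPMatch(eth_type=0x0800, ipv4_dst=FLIP),
                [parser.OFPActionSetField(eth_dst=VM_MAC),
                 parser.OFPActionSetField(ipv4_dst=FIXED),
                 parser.OFPActionOutput(SOUTH_PORT)])

            # South -> north: rewrite SRC back to the floating IP.
            add_flow(
                parser.OFPMatch(eth_type=0x0800, ipv4_src=FIXED),
                [parser.OFPActionSetField(eth_src=GW_MAC),
                 parser.OFPActionSetField(ipv4_src=FLIP),
                 parser.OFPActionOutput(NORTH_PORT)])

Since no per-connection state is kept, any FLIP host can carry either
direction of a flow, which is what makes the multi-active setup work.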
> Adding support for L2VPN EVPN
> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN
> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to
> the Ryu BGP speaker will allow the hypervisor-side Ryu application to
> advertise reachability information up to the FLIP system, taking into
> account VM failover, live migration, and supported encapsulation types.
> We believe that decoupling tunnel endpoint discovery from the control
> plane (Nova/Neutron) will provide for a more robust solution as well as
> allow for use outside of OpenStack if desired.
>
> ________________________________________
>
> Ryan Clevenger
> Manager, Cloud Engineering - US
> m: 678.548.7261
> e: ryan.cleven...@rackspace.com
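For the EVPN piece, assuming a Ryu BGPSpeaker with EVPN support (the
evpn_prefix_add() call shown here exists in later Ryu releases, not in
today's), the hypervisor-side advertisement could look roughly like this.
The route distinguisher, VNI, and all addresses are placeholders:

    # Sketch: advertise a VM's MAC/IP binding as an EVPN type-2 route
    # from the hypervisor, assuming an EVPN-capable Ryu BGPSpeaker.
    # RD, VNI, and all addresses are placeholders.
    from ryu.services.protocols.bgp.bgpspeaker import (
        BGPSpeaker, EVPN_MAC_IP_ADV_ROUTE, TUNNEL_TYPE_VXLAN)

    speaker = BGPSpeaker(as_number=64512, router_id='198.51.100.7')
    speaker.neighbor_add(address='192.0.2.10', remote_as=64512,
                         enable_evpn=True)

    # One MAC/IP advertisement per VM; on live-migrate, withdraw and
    # re-advertise with the new hypervisor as next hop.
    speaker.evpn_prefix_add(
        route_type=EVPN_MAC_IP_ADV_ROUTE,
        route_dist='64512:1000',
        esi=0,
        ethernet_tag_id=0,
        mac_addr='fa:16:3e:00:00:01',
        ip_addr='10.1.0.5',
        next_hop='198.51.100.7',   # this hypervisor's tunnel endpoint
        tunnel_type=TUNNEL_TYPE_VXLAN,
        vni=1000)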
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev