Hi Ramu,
On Tue, Apr 19, 2016 at 8:12 AM, Ramu Ramamurthy <ramu.ramamur...@gmail.com> wrote:
> On Mon, Apr 18, 2016 at 2:55 PM, Aaron Rosen <aaronoro...@gmail.com> wrote:
> > I like this idea as well.
> >
> > The one question I have with it is how we should determine which ip
> > address to select for the 'distributed' port?
>
> Aaron, thanks for your review and feedback.
>
> We can use the dhcp-port (and its IP address) for the distributed-port.
>
> The current native-dhcp proposal
> (http://openvswitch.org/pipermail/dev/2016-April/069787.html)
> assumes that a dhcp-server ip-address "server_id" is defined on the subnet:
>
> action=(dhcp_offer(offerip = 10.0.0.2, router = 10.0.0.1,
>         server_id = 10.0.0.2, mtu = 1300, lease_time = 3600,
>
> For openstack, this means that a DHCP port has been created on that
> subnet in neutron. In the absence of a dhcp-agent, the neutron-ovn-plugin
> would have to auto-create the dhcp-port in neutron upon creation of a
> subnet, and then use that port's IP address as the "server_id" when it
> programs the "dhcp-options" column of the Logical_Switch table.
>
> The pros of the distributed-port approach are that a) HA is not needed,
> b) it runs the existing neutron-metadata-proxy/neutron-metadata-agent
> as-is, and c) in the future, we could also remove the
> neutron-metadata-agent by having nova-compute configure the instance-id
> and tenant-id as external-ids of the VM's ovs interface - hence not
> needing to run any neutron-agents at all.
> The drawbacks include creation of namespaces and metadata-proxy
> processes on each hypervisor.
>
> > Cool, makes sense to me. If we want to avoid creating the network
> > namespaces and running the http proxy on each hypervisor, we could
> > take a similar approach to the one openstack uses for handling
> > dhcp/metadata requests. When a subnet is created, we could have the
> > neutron-ovn-plugin notify a metadata agent, which would create a port
> > on the given subnet for the logical network. Then, to get instances to
> > route their metadata traffic to this logical port, we could have ovn
> > distribute an additional host-route via dhcp (using option 121),
> > similar to what you are proposing.
> >
> > I.e., for example, if someone created a network/subnet:
> >
> > In the ovn plugin we can signal the metadata agent to create a port on
> > that network. Then, for every port that is created on this network, we
> > would distribute a host-route of 169.254.169.254/32 via
> > <metadata-port>. Then, we'd have the metadata agent just run there,
> > answering these metadata requests and routing them back.
> >
> > One downside to this solution is that this metadata agent would need
> > to be made HA in some way. In your current solution, if the metadata
> > agent crashes, the failure is isolated to that hypervisor. That said,
> > this type of HA seems like it can be implemented easily enough, at
> > least as an active-passive solution.
> >
> > Thoughts?
>
> Your proposal is an alternative solution - which involves changes only
> to the neutron components (and no changes in ovn?). Would there be only
> one modified neutron-metadata-agent in an active-passive configuration
> serving all the VMs? If there are multiple agents, would you need
> agent-scheduling to assign networks to agents?
>
> Could you share more details of the approach?

Right, with this approach I don't believe we would need any additional changes in ovn besides the ability to specify host routes via dhcp.
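To make that concrete, here is a minimal sketch (purely illustrative, not part of the proposal) of how such a host route could be encoded as DHCP option 121, i.e. RFC 3442 classless static routes. The metadata-port address 10.0.0.3 is a made-up example:

    import struct

    def encode_option_121(routes):
        # Encode (destination_cidr, next_hop) pairs per RFC 3442.
        data = b''
        for cidr, next_hop in routes:
            dest, prefix_len = cidr.split('/')
            prefix_len = int(prefix_len)
            octets = [int(o) for o in dest.split('.')]
            # Only the significant octets of the destination are sent.
            significant = (prefix_len + 7) // 8
            data += struct.pack('B', prefix_len)
            data += bytes(octets[:significant])
            data += bytes(int(o) for o in next_hop.split('.'))
        return data

    # Route metadata traffic to the (hypothetical) metadata port.
    opt = encode_option_121([('169.254.169.254/32', '10.0.0.3')])
    print(opt.hex())  # 20a9fea9fe0a000003

ovn would presumably emit this option data as part of its dhcp offer, alongside the options shown in the action above.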
For this solution we'd need to modify the neutron-metadata-agent to work in this way. It currently has this functionality, though it's coupled together with the dhcp-agent (which we wouldn't want). To support HA, you would need to run multiple agents, using an agent-scheduling method similar to the one neutron currently uses to map dhcp-agents to networks (a rough sketch of what I mean is at the end of this mail). Also, I believe the HA for this would need to be implemented as active-passive.

Personally, I prefer your solution, as its HA story is more elegant: everything runs on each HV. That said, if having the namespaces on the hypervisor nodes is a deal breaker, then this would be an alternative solution to that.
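To sketch the agent-scheduling idea (made-up code, not existing neutron logic): each network would be mapped to one live agent, with standbys taking over when the active one stops reporting heartbeats. Something like:

    import zlib

    def pick_active_agent(network_id, agents):
        # 'agents' is a list of (agent_id, alive) pairs, e.g. derived
        # from the heartbeat reports neutron already keeps for agents.
        live = sorted(agent_id for agent_id, alive in agents if alive)
        if not live:
            return None
        # Stable hash so every node maps the network to the same agent;
        # when the active agent dies, the network re-maps to a survivor.
        return live[zlib.crc32(network_id.encode()) % len(live)]

    agents = [('agent-1', True), ('agent-2', True), ('agent-3', False)]
    print(pick_active_agent('net-a', agents))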