On Sat, Jun 13, 2015 at 5:12 PM, Salvatore Orlando <salv.orla...@gmail.com>
wrote:

> Hi Russell,
>
> thanks for sharing these thoughts. I was also thinking that we need to
> support this in OVN, as provider networks are a fairly basic neutron
> feature, despite being an "extension".
> I have some comments inline. I apologise in advance for their dumbness as
> I'm still getting up to speed with OVN architecture & internals.
>
> Salvatore
>
> On 10 June 2015 at 20:13, Russell Bryant <rbry...@redhat.com> wrote:
>
> > I've been doing some thinking about OpenStack Neutron's provider
> > networks and how we might be able to support that with OVN as the
> > backend for Neutron.  Here is a start.  I'd love to hear what others
> think.
> >
> >
> > Provider Networks
> > =================
> >
> > OpenStack Neutron currently has a feature referred to as "provider
> > networks".  This is used as a way to define existing physical networks
> > that you would like to integrate into your environment.
> >
> > In the simplest case, it can be used in environments where operators
> > have no interest in tenant networks.  Instead, they want all VMs hooked
> > up directly to a pre-defined network in their environment.  This use
> > case is actually popular for private OpenStack deployments.
> >
> > Neutron's current OVS agent that runs on network nodes and hypervisors
> > has this configuration entry:
> >
> >     bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]
> >
> > This is used to name your physical networks and the bridge used to
> > access that physical network from the local node.
> >
> > Defining a provider network via the Neutron API using the neutron
> > command looks like this:
> >
> >     $ neutron net-create physnet1 --shared \
> >     > --provider:physical_network external \
> >     > --provider:network_type flat
> >
> > A provider network can also be defined with a VLAN id:
> >
> >     $ neutron net-create physnet1-101 --shared \
> >     > --provider:physical_network external \
> >     > --provider:network_type vlan \
> >     > --provider:segmentation_id 101
> >
>
> The only pedantic nit I have to add here is that the 'shared' setting has
> nothing to do with provider networks, but on the other hand it is also true
> that it is required for supporting the "can't be bothered with tenant
> networks" use case.
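>
> (For instance, a plain tenant network can be shared without any provider
> attributes at all; the network name here is illustrative:
>
>     $ neutron net-create mynet --shared
>
> the two concepts just happen to be combined in the commands above.)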
>
> >
> >
> > Provider Networks with OVN
> > --------------------------
> >
> > OVN does not currently support this use case, but it's required for
> > Neutron.  Some use cases for provider networks are potentially better
> > served using OVN gateways.  I think the "simple networking without
> > tenant networks" case is at least one strong use case worth supporting.
> >
>
> the difference between the provider network and the L2 gateway is in my
> opinion that the former is a mapping between a logical network and a
> concrete physical network (I am considering VLANs as 'physical' here for
> simplicity), whereas the L2 gateway is a "service" inserted into the
> logical topology to provide the same functionality. In other words, with
> the provider network abstraction your traffic always goes over your chosen
> physical network, whereas with the l2 gateway abstraction your traffic
> stays in the tenant network, likely an overlay, unless packets are
> directed to an address not in the tenant network, in which case they
> cross the gateway.
>
> In my opinion it's just too difficult to state which abstraction is better
> for given use cases. I'd rather expose both abstractions (the provider
> network for now, and the gateway in the future), and let operators choose
> the one that suits them better.
>
>
+1, I agree with your comments, Salv. I'd also argue that we should likely
do provider networks first, and then follow up with L2 gateway support,
given that the L2 gateway is new in Kilo.


>
> >
> > One possible implementation would be to have a Neutron agent
> > running alongside ovn-controller that handles this.  It would
> > perform a subset of what the current Neutron OVS agent does but would
> > also duplicate a lot of what OVN does.  The other option is to have
> > ovn-controller implement it.
>
>
> > There are significant advantages to implementing this in OVN.  First, it
> > simplifies the deployment.  It saves the added complexity of a parallel
> > control plane for the neutron agents running beside ovn-controller.
> >
>
> This alone would convince me to implement the solution with OVN; with a
> parallel agent, for instance, the L3 agents would then have to be aware
> that interfaces might be plugged into multiple bridges, VIF plugging might
> also differ and therefore more work might be needed on port bindings, and
> finally you'd have provider networks secured in the "neutron way", whereas
> tenant networks would be secured in the "OVN way".
> Nevertheless, this might be a case where ML2 could possibly be leveraged
> (with or without tweaks) to ensure that the OVS driver implements provider
> networks, whereas the OVN driver implements tenant networks.
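>
> As a sketch, such a hybrid deployment might use an ML2 configuration
> along these lines (the "ovn" mechanism driver name is purely
> illustrative; no such ML2 driver exists today):
>
>     [ml2]
>     type_drivers = flat,vlan,gre,vxlan
>     tenant_network_types = vxlan
>     mechanism_drivers = openvswitch,ovn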
>
>
> > Second, many of the flows currently programmed by ovn-controller
> > would also be useful for provider networks.  We still want to implement
> > port security and ACLs (security groups).  The major difference is that
> > unless the packet is dropped by egress port security or ACLs, it should
> > be sent out to the physical network and forwarded by the physical
> > network infrastructure.
> >
> > ovn-controller would need the equivalent of the current OVS agent's
> > bridge_mappings configuration option.  ovn-controller is
> > currently configured by setting values in the local Open_vSwitch
> > ovsdb database, so a similar configuration entry could be provided
> > there:
> >
> >     $ ovs-vsctl set open . \
> >     > external-ids:bridge_mappings=physnet1:br-eth1,physnet2:br-eth2
> >
> > ovn-controller would expect that the environment was pre-configured
> > with these bridges.  It would create ports on br-int that connect to
> > each bridge.
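> >
> > For illustration, the plumbing would be roughly equivalent to creating
> > a patch port pair per mapping by hand (the port names here are
> > hypothetical; ovn-controller would pick its own naming scheme):
> >
> >     $ ovs-vsctl add-port br-int patch-physnet1 \
> >     > -- set Interface patch-physnet1 type=patch \
> >     >        options:peer=patch-br-int-eth1
> >     $ ovs-vsctl add-port br-eth1 patch-br-int-eth1 \
> >     > -- set Interface patch-br-int-eth1 type=patch \
> >     >        options:peer=patch-physnet1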
> >
> > These networks also need to be reflected in the OVN databases so that an
> > OVN logical port can be attached to a provider network.  In
> > OVN_Northbound, we could add a new table called Physical_Switch
> > that a logical port could be attached to instead of Logical_Switch.
> >
>
> The provider network is still a logical network. I am not able to see a
> reason for having to attach a logical port to a physical switch. Can you
> explain?
> It seems that you are trying to describe the physical network the logical
> network maps to. This makes sense, but since in Neutron we also have
> the "multi-provider" extension, which is a generalization of the provider
> network concept, would it make sense to consider some sort of logical
> network bindings? For the time being these bindings might express vlan
> mappings, but in the future they could be used to specify, for instance,
> VTEPs or the encap type the tenant network implements. I know this might be
> nonsense, but at first glance it seems a viable alternative.
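>
> For reference, the multi-provider extension expresses a network as a list
> of segments in the API; a request body looks roughly like this (the
> values are illustrative):
>
>     {
>         "network": {
>             "name": "multinet",
>             "segments": [
>                 {"provider:network_type": "vlan",
>                  "provider:physical_network": "physnet1",
>                  "provider:segmentation_id": 101},
>                 {"provider:network_type": "vxlan",
>                  "provider:segmentation_id": 1001}
>             ]
>         }
>     }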
>
>
> > The ``Physical_Switch`` schema could be the same as Logical_Switch
> > except for the addition of 'type' (flat or vlan) and 'tag' (the VLAN
> > id for type=vlan)::
> >
> >          "Physical_Switch": {
> >              "columns": {
> >                  "name": {"type": "string"},
> >                  "type": {"type": "string"},
> >                  "tag": {
> >                       "type": {"key": {"type": "integer",
> >                                        "minInteger": 0,
> >                                        "maxInteger": 4095},
> >                  "router_port": {"type": {"key": {"type": "uuid",
> >                                                   "refTable":
> > "Logical_Router_Port",
> >                                                   "refType": "strong"},
> >                                           "min": 0, "max": 1}},
> >                  "external_ids": {
> >                      "type": {"key": "string", "value": "string",
> >                                "min": 0, "max": "unlimited"}}}},
>
>
> It seems that this structure replaces Logical_Switch for provider
> networks. While this makes sense from one side, it might not play nicely
> with Neutron integration: the provider info for a neutron network can
> indeed be updated [1], [2], thus allowing a regular tenant network to be
> turned into a provider network. As usual, Neutron is inconsistent in this
> as well: you can transform a regular network into a provider network, but
> you cannot do the opposite.
>
> >
> > Currently, a Logical_Port has a lswitch column.  It would also
> > need a pswitch column for when the port is attached to a
> > Physical_Switch:
> >
> >           "Logical_Port": {
> >               "columns": {
> >                   "lswitch": {"type": {"key": {"type": "uuid",
> >                                                "refTable":
> > "Logical_Switch",
> >                                                "refType": "strong"}}},
> >                   "pswitch": {"type": {"key": {"type": "uuid",
> >                                                "refTable":
> > "Physical_Switch",
> >                                                "refType": "strong"}}},
> >                   ...
> >
>
> If I understand your proposal correctly, pswitch and lswitch are mutually
> exclusive; or can a port be attached to a pswitch and an lswitch at the
> same time?
>
>
> >
> > This would have an impact on the OVN_Southbound database, as well.
> > No schema changes have been identified.  Entries in the Bindings
> > table would be the same, except that the UUID in the
> > logical_datapath column would refer to a Physical_Switch instead
> > of a Logical_Switch.  The contents of the Pipeline table and the
> > flows set up by ovn-controller would need to change.
> >
>
> Would we also need Chassis entries for the bridges implementing the
> mapping to physical networks, or do we consider them to be outside of the
> OVN realm?
>
>
> > In the case of a Physical_Switch instead of a Logical_Switch,
> > one major difference is that output to a non-local port is simply
> > sent out to the physical network bridge instead of a tunnel.
> >
> > Another difference is that packets for an unknown destination should
> > also be sent out to the physical network bridge instead of being
> > dropped.
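> >
> > As a rough illustration (the real pipeline details still need to be
> > worked out; the table and port numbers here are made up), the default
> > action in the L2 lookup stage would flip from "drop" to "output to the
> > patch port leading to the physical bridge":
> >
> >     # tenant network: unknown destinations are dropped
> >     table=32, priority=0, actions=drop
> >     # provider network: unknown destinations go out the physical bridge
> >     table=32, priority=0, actions=output:17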
> >
> >
> > If there is some consensus that supporting something like this makes
> > sense, I'm happy to take on the next steps, which would include a more
> > detailed proposal that includes Pipeline and flow details, as well as
> > the implementation.
> >
> > Thanks,
> >
>
> [1] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/multiprovidernet.py#n74
> [2] http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/providernet.py#n32
>
>
> >
> > --
> > Russell Bryant