I've been doing some thinking about OpenStack Neutron's provider
networks and how we might be able to support that with OVN as the
backend for Neutron.  Here is a start.  I'd love to hear what others think.


Provider Networks
=================

OpenStack Neutron currently has a feature referred to as "provider
networks".  This is used as a way to define existing physical networks
that you would like to integrate into your environment.

In the simplest case, it can be used in environments that have no
interest in tenant networks.  Instead, all VMs are hooked up directly
to a pre-defined network in the environment.  This use case is
actually popular for private OpenStack deployments.

Neutron's current OVS agent that runs on network nodes and hypervisors
has this configuration entry:

    bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]

This is used to name your physical networks and the bridge used to
access that physical network from the local node.
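The format is a comma-separated list of colon-separated pairs.  As a
minimal sketch of how an agent might interpret such an entry (the
parse_bridge_mappings helper below is illustrative, not the agent's
actual code):

```python
def parse_bridge_mappings(value):
    """Parse 'physnet1:br-eth1,physnet2:br-eth2' into a dict
    mapping each physical network name to its local bridge."""
    mappings = {}
    for pair in value.split(","):
        if not pair:
            continue
        physnet, _, bridge = pair.partition(":")
        mappings[physnet.strip()] = bridge.strip()
    return mappings

print(parse_bridge_mappings("physnet1:br-eth1,physnet2:br-eth2"))
# → {'physnet1': 'br-eth1', 'physnet2': 'br-eth2'}
```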

Defining a provider network via the Neutron API using the neutron
command looks like this:

    $ neutron net-create physnet1 --shared \
    > --provider:physical_network external \
    > --provider:network_type flat

A provider network can also be defined with a VLAN id:

    $ neutron net-create physnet1-101 --shared \
    > --provider:physical_network external \
    > --provider:network_type vlan \
    > --provider:segmentation_id 101


Provider Networks with OVN
--------------------------

OVN does not currently support this use case, but it's required for
Neutron.  Some use cases for provider networks are potentially better
served using OVN gateways.  I think the "simple networking without
tenant networks" case is at least one strong use case worth supporting.

One possible implementation would be a Neutron agent that runs
alongside ovn-controller and handles this.  It would perform a subset
of what the current Neutron OVS agent does, but would also duplicate a
lot of what OVN does.  The other option is to have ovn-controller
implement it.

There are significant advantages to implementing this in OVN.  First, it
simplifies the deployment.  It saves the added complexity of a parallel
control plane for the neutron agents running beside ovn-controller.
Second, much of the flows currently programmed by ovn-controller
would be useful for provider networks.  We still want to implement port
security and ACLs (security groups).  The major difference is that
unless the packet is dropped by egress port security or ACLs, it should
be sent out the physical network and forwarded by the physical network
infrastructure.

ovn-controller would need the equivalent of the current OVS agent's
bridge_mappings configuration option.  ovn-controller is
currently configured by setting values in the local Open_vSwitch
ovsdb database, so a similar configuration entry could be provided
there:

    $ ovs-vsctl set open . \
    > external-ids:bridge_mappings=physnet1:br-eth1,physnet2:br-eth2

ovn-controller would expect that the environment was pre-configured
with these bridges.  It would create ports on br-int that connect to
each bridge.
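Concretely, for each entry in bridge_mappings, ovn-controller would
create a pair of OVS patch ports, one on br-int and one on the
provider bridge, peered together.  A sketch of computing those pairs
(the naming convention here is hypothetical, not what ovn-controller
would necessarily use):

```python
def patch_port_names(bridge_mappings):
    """For each provider bridge, compute the pair of patch port
    names connecting br-int to that bridge.  Returns a list of
    (bridge, br-int-side name, provider-bridge-side name) tuples.
    The naming scheme is illustrative only."""
    pairs = []
    for physnet, bridge in sorted(bridge_mappings.items()):
        int_side = "patch-int-to-%s" % physnet
        phys_side = "patch-%s-to-int" % physnet
        pairs.append((bridge, int_side, phys_side))
    return pairs

for bridge, int_side, phys_side in patch_port_names(
        {"physnet1": "br-eth1", "physnet2": "br-eth2"}):
    # Each tuple corresponds to one peered patch-port pair.
    print("br-int:%s <-> %s:%s" % (int_side, bridge, phys_side))
```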

These networks also need to be reflected in the OVN databases so that an
OVN logical port can be attached to a provider network.  In
OVN_Northbound, we could add a new table called Physical_Switch
that a logical port could be attached to instead of Logical_Switch.
The ``Physical_Switch`` schema could be the same as Logical_Switch
except for the addition of 'type' (flat or vlan) and 'tag' (the VLAN
id for type=vlan)::

         "Physical_Switch": {
             "columns": {
                 "name": {"type": "string"},
                 "type": {"type": "string"},
                 "tag": {
                      "type": {"key": {"type": "integer",
                                       "minInteger": 0,
                                       "maxInteger": 4095},
                 "router_port": {"type": {"key": {"type": "uuid",
                                                  "refTable":
"Logical_Router_Port",
                                                  "refType": "strong"},
                                          "min": 0, "max": 1}},
                 "external_ids": {
                     "type": {"key": "string", "value": "string",
                               "min": 0, "max": "unlimited"}}}},

Currently, a Logical_Port has a lswitch column.  It would also
need a pswitch column for when the port is attached to a
Physical_Switch:

          "Logical_Port": {
              "columns": {
                  "lswitch": {"type": {"key": {"type": "uuid",
                                               "refTable": "Logical_Switch",
                                               "refType": "strong"}}},
                  "pswitch": {"type": {"key": {"type": "uuid",
                                               "refTable":
"Physical_Switch",
                                               "refType": "strong"}}},
                  ...
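Since a port would belong to exactly one switch, whatever writes these
rows would presumably set one of the two columns, never both.  A
sketch of that invariant (the helper below is hypothetical, not actual
Neutron or OVN code):

```python
def validate_port_attachment(lswitch, pswitch):
    """A Logical_Port should reference exactly one of a
    Logical_Switch (lswitch) or a Physical_Switch (pswitch),
    passed here as row UUIDs or None."""
    if (lswitch is None) == (pswitch is None):
        raise ValueError("port must attach to exactly one switch")
    return "logical" if lswitch is not None else "physical"
```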

This would have an impact on the OVN_Southbound database, as well,
though no schema changes have been identified.  Entries in the
Bindings table would be the same, except that the UUID in the
logical_datapath column would refer to a Physical_Switch instead
of a Logical_Switch.  The contents of the Pipeline table and the
flows set up by ovn-controller would need to change.

In the case of a Physical_Switch instead of a Logical_Switch,
one major difference is that output to a port that is non-local is just
sent out to the physical network bridge instead of a tunnel.

Another difference is that packets for an unknown destination should
also be sent out to the physical network bridge instead of dropped.
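These two differences amount to a different default action at the end
of the egress pipeline.  A rough sketch of the decision (names and
structure hypothetical, not the actual flow logic):

```python
def output_action(switch_kind, dest_port_location):
    """Decide where a packet goes after it passes egress port
    security and ACLs.

    switch_kind: 'logical' or 'physical'
    dest_port_location: 'local', 'remote', or 'unknown'
    """
    if dest_port_location == "local":
        return "deliver-local"
    if switch_kind == "logical":
        # A logical switch tunnels to the remote chassis and drops
        # traffic for unknown destinations.
        return "tunnel" if dest_port_location == "remote" else "drop"
    # A physical switch hands non-local and unknown traffic to the
    # provider bridge; the physical network does the forwarding.
    return "physical-bridge"
```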


If there is some consensus that supporting something like this makes
sense, I'm happy to take on the next steps, which would include a more
detailed proposal that includes Pipeline and flow details, as well as
the implementation.

Thanks,

-- 
Russell Bryant