Proposal C, VLAN-aware-VMs, aims to integrate VLAN traffic from VMs more 
tightly with Neutron.

It terminates the VLAN at the port connected to the VM and does not carry the 
VLAN concept any further into Neutron. This is done by mapping each VLAN from 
the VM to a Neutron network. After all, VLANs and Neutron networks are very 
much alike.

The modelling reuses the current port structure: there is one port on each 
network, and each port still contains the information relevant to that network.

By doing these things it's possible to reuse the rest of the features in 
Neutron; only features whose implementation sits close to the VM have to be 
reworked when implementing this. Features that have attributes on a VM port 
but are realized remotely work fine, for example DHCP (including 
extra_dhcp_opts) and mechanism drivers that use portbindings to do network 
plumbing on a switch.

After the Icehouse summit where we discussed the L2-gateway solution, I started 
to implement an L2-gateway. The idea was to have a VM with a trunk port 
connected to a trunk network carrying tagged traffic. The network would then be 
connected to an L2-gateway that breaks out a single VLAN and connects it to a 
normal Neutron network. Following are some of the issues I encountered.

Currently a Neutron port/network contains attributes related to one broadcast 
domain. A trunk network requires that many attributes be per broadcast domain. 
This would require a major refactoring of the Neutron port/network model and 
affect all services using ports/networks.
Due to this I dropped the idea of tight integration with trunk networks.

Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect it to normal Neutron networks. One issue is that the L2-gateway 
will bridge the networks, but the services in the network you bridge to are 
unaware of your existence. This is, IMO, OK when bridging a Neutron network to 
some remote network, but if you have a Neutron VM and want to utilize various 
resources in another Neutron network (since the one you sit on does not have 
any resources), things get, let's say, less streamlined.

Another issue with trunk networks is that they put new requirements on the 
infrastructure: it needs to be able to handle VLAN-tagged frames. For a 
VLAN-based network that would mean QinQ.
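To illustrate the QinQ point with a toy model (this is just an illustration of 
tag stacking, not Neutron code): a frame already tagged by the VM that must 
cross a VLAN-based provider network gets a second, outer tag, so the fabric 
has to forward double-tagged frames.

```python
# Toy model of VLAN tag stacking (QinQ). Purely illustrative, not Neutron code.

def push_tag(tags, vlan_id):
    """Push an outer VLAN tag onto a frame's existing tag stack."""
    return [vlan_id] + tags

vm_frame_tags = [100]                         # the VM sends traffic tagged VLAN 100
on_the_wire = push_tag(vm_frame_tags, 2000)   # the provider network adds VLAN 2000
print(on_the_wire)  # [2000, 100] -> the fabric must carry two stacked tags
```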

My requirements were low/no extra cost for VMs using VLAN trunks compared to 
normal ports, and no new bottlenecks or single points of failure. Due to this 
and the previous issues, I implemented the L2 gateway in a distributed fashion, 
and since trunk networks could not actually be realized, I only kept them in 
the model and optimized them away. But the L2-gateway + trunk network has a 
flexible API: what if someone connects two VMs to one trunk network? Well, 
that's hard to optimize away.

Anyway, due to these and other issues, I limited my scope and switched to the 
current trunk port/subport model.

The code that is up for review is functional: you can boot a VM with a trunk 
port + subports (each subport maps to a VLAN). The VM can send/receive VLAN 
traffic. You can add/remove subports on a running VM, specify an IP address 
per subport, use DHCP to retrieve them, etc.
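As a rough sketch of what the trunk port/subport model above looks like as 
data (field names here are invented for clarity, not the actual Neutron 
schema):

```python
# Illustrative sketch of the trunk port / subport model described above.
# Field names and UUIDs are made up; this is not the real Neutron schema.

trunk_port = {
    "id": "parent-port-uuid",
    "network_id": "untagged-net-uuid",   # untagged traffic goes here
    "subports": [
        # Each subport maps one VLAN ID on the trunk to a port on its own
        # Neutron network, so per-network features (DHCP, IPAM, etc.)
        # keep working per VLAN.
        {"port_id": "subport-1-uuid", "segmentation_type": "vlan",
         "segmentation_id": 100},
        {"port_id": "subport-2-uuid", "segmentation_type": "vlan",
         "segmentation_id": 200},
    ],
}

def port_for_vlan(trunk, vlan_id):
    """Find which subport (and hence which network) a VLAN ID maps to."""
    for sp in trunk["subports"]:
        if sp["segmentation_id"] == vlan_id:
            return sp["port_id"]
    return None

print(port_for_vlan(trunk_port, 100))  # subport-1-uuid
```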

Thanks,
Erik



From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
Sent: 24 October 2014 20:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

What scares me a bit about the "let's find a common solution for both external 
devices and VMs" approach is the challenge of reaching an agreement. I remember 
a rather long discussion in the dev lounge in Hong Kong about trunking support 
that ended up going in all kinds of directions.

I work on implementing services in VMs, so my opinion is definitely colored by 
that. Personally, proposal C is the most appealing to me for the following 
reasons: it is "good enough"; a trunk port notion is semantically easy to take 
in (at least to me); by doing it all within the port resource, the Nova 
implications are minimal; it seemingly can handle multiple network types (VLAN, 
GRE, VXLAN, ... they are all mapped to different trunk-port-local VLAN tags); 
DHCP should work for the trunk port and its subports (unless I overlook 
something); the spec already elaborates a lot on details; and there is already 
code available that can be inspected.

Thanks,
Bob

From: Ian Wells <ijw.ubu...@cack.org.uk<mailto:ijw.ubu...@cack.org.uk>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday 23 October 2014 23:58
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

There are two categories of problems:
1. some networks don't pass VLAN tagged traffic, and it's impossible to detect 
this from the API
2. it's not possible to pass traffic from multiple networks to one port on one 
machine as (e.g.) VLAN tagged traffic
(1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else 
addresses this, particularly in the case where one VM is emitting tagged 
packets that another one should receive and OpenStack knows nothing about 
what's going on.

We should get this in, and ideally in quickly and in a simple form where it 
simply tells you if a network is capable of passing tagged traffic.  In 
general, this is possible to calculate but a bit tricky in ML2 - anything using 
the OVS mechanism driver won't pass VLAN traffic, anything using VLANs should 
probably also claim it doesn't pass VLAN traffic (though actually it depends a 
little on the switch), and combinations of L3 tunnels plus Linuxbridge seem to 
pass VLAN traffic just fine.  Beyond that, it's got a backward compatibility 
mode, so it's possible to ensure that any plugin that doesn't implement VLAN 
reporting is still behaving correctly per the specification.
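A sketch of the kind of per-mechanism, per-network-type reporting described 
above (the mapping simply restates the examples in the preceding paragraph; 
it is not how ML2 actually implements this, and the function name is invented):

```python
# Sketch of VLAN-transparency reporting, restating the examples above.
# Not the actual ML2 implementation; names are invented for illustration.

def passes_tagged_traffic(network_type, mechanism="linuxbridge"):
    if mechanism == "openvswitch":
        return False   # anything using the OVS mechanism driver won't pass tags
    if network_type == "vlan":
        return False   # VLAN-based nets would need QinQ; claim no, to be safe
    if network_type in ("gre", "vxlan"):
        return True    # L3 tunnels plus Linuxbridge pass tagged frames fine
    return None        # unknown: plugin doesn't report (backward-compat mode)

print(passes_tagged_traffic("vxlan"))                  # True
print(passes_tagged_traffic("vlan"))                   # False
print(passes_tagged_traffic("vxlan", "openvswitch"))   # False
```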

(2) is addressed by several blueprints, and these have overlapping ideas that 
all solve the problem.  I would summarise the possibilities as follows:
A. Racha's L2 gateway blueprint, 
https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension, which (at 
its simplest, though it's had features added on and is somewhat OVS-specific in 
its detail) acts as a concentrator to multiplex multiple networks onto one as a 
trunk.  This is a very simple approach and doesn't attempt to resolve any of 
the hairier questions like making DHCP work as you might want it to on the 
ports attached to the trunk network.
B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/, 
which is more limited in that it refers only to external connections.
C. Erik's VLAN port blueprint, 
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which tries to 
solve the addressing problem mentioned above by having ports within ports (much 
as, on the VM side, interfaces passing trunk traffic tend to have subinterfaces 
that deal with the traffic streams).
D. Not a blueprint, but an idea I've come across: create a network that is a 
collection of other networks, each 'subnetwork' being a VLAN in the network 
trunk.
E. Kyle's very old blueprint, 
https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api - 
where we attach a port, not a network, to multiple networks.  Probably doesn't 
work with appliances.

I would recommend we try and find a solution that works with both external 
hardware and internal networks.  (B) is only a partial solution.

Considering the others, note that (C) and (D) add significant complexity to the 
data model, independently of the benefits they bring.  (A) adds one new 
functional block to networking (similar to today's routers, or even today's 
Nova instances).
Finally, I suggest we consider the most prominent use case for multiplexing 
networks.  This seems to be condensing traffic from many networks to either a 
service VM or a service appliance.  It's useful, but not essential, to have 
Neutron control the addresses on the trunk port subinterfaces.
So, that said, I personally favour (A) as the simplest way to solve our current 
needs, and I recommend paring (A) right down to its basics: a block that has 
access ports that we tag with a VLAN ID, and one trunk port that has all of the 
access networks multiplexed onto it.  This is a slightly dangerous block, in 
that you can actually set up forwarding blocks with it, and that's a concern; 
but it's a simple service block like a router, it's very, very simple to 
implement, and it solves our immediate problems so that we can make forward 
progress.  It also doesn't affect the other solutions significantly, so someone 
could implement (C) or (D) or (E) in the future.
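A toy model of the block described above (purely illustrative; the class and 
method names are invented for this sketch): access ports, each tagged with a 
VLAN ID, multiplexed onto a single trunk port.

```python
# Toy model of the pared-down L2-gateway block: N access ports, each tagged
# with a VLAN ID, multiplexed onto one trunk port. Purely illustrative.

class L2GatewayBlock:
    def __init__(self):
        self.access_ports = {}   # vlan_id -> access network

    def add_access_port(self, vlan_id, network):
        """Attach an access network, tagged with a VLAN ID on the trunk."""
        if vlan_id in self.access_ports:
            raise ValueError("VLAN %d already in use on this trunk" % vlan_id)
        self.access_ports[vlan_id] = network

    def to_trunk(self, network, frame):
        """Multiplex: tag a frame from an access network onto the trunk."""
        for vlan_id, net in self.access_ports.items():
            if net == network:
                return (vlan_id, frame)
        raise KeyError("network not attached to this block")

    def from_trunk(self, vlan_id, frame):
        """Demultiplex: deliver a tagged trunk frame to its access network."""
        return (self.access_ports[vlan_id], frame)

gw = L2GatewayBlock()
gw.add_access_port(100, "net-a")
gw.add_access_port(200, "net-b")
print(gw.to_trunk("net-a", "payload"))   # (100, 'payload')
print(gw.from_trunk(200, "payload"))     # ('net-b', 'payload')
```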
--
Ian.


On 23 October 2014 02:13, Alan Kavanagh 
<alan.kavan...@ericsson.com<mailto:alan.kavan...@ericsson.com>> wrote:
+1 many thanks to Kyle for putting this as a priority, it's most welcome.
/Alan

-----Original Message-----
From: Erik Moe [mailto:erik....@ericsson.com<mailto:erik....@ericsson.com>]
Sent: October-22-14 5:01 PM
To: Steve Gordon; OpenStack Development Mailing List (not for usage questions)
Cc: iawe...@cisco.com<mailto:iawe...@cisco.com>
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


Hi,

Great that we can have more focus on this. I'll attend the meeting on Monday 
and also attend the summit, looking forward to these discussions.

Thanks,
Erik


-----Original Message-----
From: Steve Gordon [mailto:sgor...@redhat.com<mailto:sgor...@redhat.com>]
Sent: 22 October 2014 16:29
To: OpenStack Development Mailing List (not for usage questions)
Cc: Erik Moe; iawe...@cisco.com<mailto:iawe...@cisco.com>; 
calum.lou...@metaswitch.com<mailto:calum.lou...@metaswitch.com>
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

----- Original Message -----
> From: "Kyle Mestery" <mest...@mestery.com<mailto:mest...@mestery.com>>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
>
> There are currently at least two BPs registered for VLAN trunk support
> to VMs in neutron-specs [1] [2]. This is clearly something that I'd
> like to see us land in Kilo, as it enables a bunch of things for the
> NFV use cases. I'm going to propose that we talk about this at an
> upcoming Neutron meeting [3]. Given the rotating schedule of this
> meeting, and the fact the Summit is fast approaching, I'm going to
> propose we allocate a bit of time in next Monday's meeting to discuss
> this. It's likely we can continue this discussion F2F in Paris as
> well, but getting a head start would be good.
>
> Thanks,
> Kyle
>
> [1] https://review.openstack.org/#/c/94612/
> [2] https://review.openstack.org/#/c/97714
> [3] https://wiki.openstack.org/wiki/Network/Meetings

Hi Kyle,

Thanks for raising this, it would be great to have a converged plan for 
addressing this use case [1] for Kilo. I plan to attend the Neutron meeting and 
I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org<mailto:OpenStack-dev@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
