Yeah, I think that's right. Of course, Open vSwitch will be able to support both eventually. But for large deployments, managing the edge MAC tables, as well as the tunneling and tagging rules (and any other filtering or QoS policy), will almost certainly require a centralized component. Also, as you point out, multicast is a non-trivial deployment hurdle. There are table-sizing concerns and performance issues on group joins in many implementations. But perhaps more significant are the operational risks that have bitten most of us in the past (and are the reason multicast is often shunned in large deployments).
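As a rough sketch of the kind of state such a central component would push to each edge (the bridge, port names, addresses, tag, and key below are only examples):

        # keyed GRE tunnel to a remote hypervisor
        ovs-vsctl add-port br0 gre-hv2 -- set interface gre-hv2 \
            type=gre options:remote_ip=192.0.2.2 options:key=5000

        # place a local VM port on an access VLAN
        ovs-vsctl set port vif1.0 tag=42

Multiply that by the number of hypervisors, tenants, and policies, and it's clear why hand-managing it, or relying on flood-and-learn, doesn't scale.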

.m

Yes, the VXLAN tunnel header is a good proposal, but on the control plane
there is a severe limitation: it depends on physical-network multicast for
MAC learning. In OVS, MAC address propagation controlled through a central
ovsdb is a better choice.
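To illustrate the idea (the MACs and port numbers here are only placeholders), with a central database the edge switch can be populated with exact forwarding entries, so unknown-unicast traffic never has to be flooded over an IP multicast group:

        # remote VM MAC, pushed from the central database rather than learned
        ovs-ofctl add-flow br0 "dl_dst=00:16:3e:11:22:33,actions=output:3"
        # local VM MAC
        ovs-ofctl add-flow br0 "dl_dst=00:16:3e:44:55:66,actions=output:1"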

On Wed, Sep 7, 2011 at 9:09 AM, Justin Pettit <jpet...@nicira.com> wrote:
On Sep 7, 2011, at 9:01 AM, Nicky Fatr wrote:

I don't think that TRILL/802.1aq L2-over-L2 is a good option for large-scale
deployment. L2 over L3 is more scalable, eliminating the complexity of the
physical network.

Maybe we can expect L2 over UDP in some future release, since UDP is more
friendly than GRE in some networking configurations.
You can already do L2-over-L3 with CAPWAP.  It doesn't support a configurable 
context identifier (key), but a patch has been provided by Valient Gough and 
Simon Horman that adds it.  We're also looking at supporting VXLAN, which was 
recently announced:

        http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-00
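For reference, a CAPWAP port is configured much like our other tunnel types (the IP address below is made up); the "key" option shown is what the patch mentioned above adds, so that part isn't in a release yet and the exact option name may differ:

        ovs-vsctl add-port br0 capwap0 -- set interface capwap0 \
            type=capwap options:remote_ip=192.0.2.10 options:key=42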

--Justin



_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss


--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Casado
Nicira Networks, Inc.
www.nicira.com | www.openvswitch.org
cell: 650-776-1457
~~~~~~~~~~~~~~~~~~~~~~~~~~~
