On Wed, Feb 3, 2016 at 11:35 AM, Russell Bryant <russ...@ovn.org> wrote:
>
> On 01/30/2016 11:23 PM, Han Zhou wrote:
> > Before this patch, inter-chassis communication between VIFs of the
> > same lswitch always goes through a tunnel.  Modeling a single physical
> > network then requires many lswitches and pairs of lports, and it adds
> > complexity in a CMS such as OpenStack Neutron, which has to manage all
> > of those lswitches and lports, especially when ACLs are involved.
> >
> > With this patch, inter-chassis communication can go through physical
> > networks via a localnet port, with a 1:1 mapping between lswitches
> > and physical networks.  The original tunneling mechanism is still
> > used if no localnet port is configured on the lswitch.
> >
> > Signed-off-by: Han Zhou <zhou...@gmail.com>
>
> The patches you based this on have been merged, but I can't seem to
> apply this patch.  I tried it from the email and from patchwork, and
> both fail for me.

Yes, I have been waiting :). A git rebase just worked for me, so I will
provide a v2.

>
> The patch came out smaller than I expected, so that's good.  I'm still
> thinking this over, but I'll share some thoughts.
>
> The model is indeed simpler in some ways, but it has its own new
> complexity.  Prior to this patch, OVN had full control over a logical
> network.  With this patch, an OVN logical switch has a different meaning
> and packets go through the logical pipeline differently.
>
> Previously, every packet did the following:
>
>   1) Enter a logical switch
>   2) Execute logical ingress pipeline
>   3) Execute logical egress pipeline (possibly on remote hypervisor)
>   4) Leave logical switch
>
> In this localnet mode, a packet destined for a logical port on a remote
> hypervisor has a new kind of path:
>
>   1) Enter a logical switch
>   2) Execute logical ingress pipeline
>   3) Execute logical ingress pipeline again on remote hypervisor
>   4) Execute logical egress pipeline
>   5) Leave logical switch
>
Thanks for the review! Yes, with this patch the lswitch datapath changes
slightly when a localnet port is involved; I should document that clearly
somewhere. But overall I think the flow is still clear and natural. It is
in fact:

1) Enter a logical switch
2) Execute logical ingress pipeline (local)
3) Execute logical egress pipeline (local)
4) Leave logical switch (local)
5) Enter the logical switch again (remote hypervisor)
6) Execute logical ingress pipeline again (remote hypervisor)
7) Execute logical egress pipeline again (remote hypervisor)
8) Leave logical switch (remote hypervisor)

In short, the steps are symmetrical: ingress local -> egress local ->
ingress remote -> egress remote. The extra part is just the hop from
egress local to ingress remote. A packet takes the same path as in the
old model, but the new model implements that path automatically, without
explicitly maintaining many extra lswitches and lport pairs. I think this
is a big benefit for CMSes, and it also avoids a lot of redundant data in
the NB/SB DBs.
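
To make this concrete, here is roughly what the configuration looks like
(a sketch only; the bridge, network, and port names are made up):

  # On each chassis, tell ovn-controller which local bridge connects to
  # the physical network (this assumes a bridge br-eth1 with a NIC on
  # that network):
  ovs-vsctl set Open_vSwitch . \
      external-ids:ovn-bridge-mappings=physnet1:br-eth1

  # In the NB DB, create one lswitch per physical network, with a single
  # localnet port representing the physical network itself:
  ovn-nbctl lswitch-add sw0
  ovn-nbctl lport-add sw0 ln-physnet1
  ovn-nbctl lport-set-type ln-physnet1 localnet
  ovn-nbctl lport-set-options ln-physnet1 network_name=physnet1
  ovn-nbctl lport-set-addresses ln-physnet1 unknown

VIFs added to sw0 on different chassis then reach each other through
physnet1 instead of through tunnels.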

And there is not much change required in the code thanks to the clear
pipeline design :)

> I believe this model would also introduce some additional restrictions
> on the ACL syntax.  We cannot necessarily uniquely identify the source
> logical port when the packet reaches the remote hypervisor, so the use
> of "inport" in "to-lport" ACLs won't work, at least.

This is true, but it is the same situation as with the current solution,
where "inport" cannot be used because it doesn't work across lswitches.
But I think I need to document this restriction clearly to avoid
confusion.
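
For example (hypothetical names; lp1 and lp2 are VIFs on sw0), an ACL
like the one below works while lp1 and lp2 talk over a tunnel, but once
a packet re-enters the lswitch from a localnet port on the remote
hypervisor, the source lport is no longer known, so the "inport" part of
the match cannot be evaluated there:

  # Drop IP traffic from lp1 to lp2 at the destination side of the
  # pipeline -- this relies on knowing "inport" in the egress stage.
  ovn-nbctl acl-add sw0 to-lport 1000 \
      'inport == "lp1" && outport == "lp2" && ip' drop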

>
> I'm not necessarily suggesting you should go try to fix these things,
> because I don't know that you can.  I'm just trying to think through the
> impact of these changes on the semantics of an OVN logical switch.
>
> --
> Russell Bryant

--
Best regards,
Han