On 19 May 2016 at 03:57, Salvatore Orlando <salv.orla...@gmail.com> wrote:

> [Accidentally sent message before completing, resuming here]
>
> Hello,
>
> I have been working for a while on integrating Kubernetes with OVN via a
> CNI plugin.
> The work in [1] is a fork of Guru's repository of the same name [2].
>
> Most consumers of the CNI interface expect that, when the plugin returns,
> the container interface is fully configured and ready to send/receive data.
> For the OVN case this means that both VIF plugging and logical
> configuration (lport and ACLs) must be completed before the CNI returns.
>
> However, in the current implementation logical port management was moved
> out of the plugin [3] in order to avoid calling into the OVN NB database.
> This means that in order to fulfil the "network ready" expectation, the
> CNI plugin needs a synchronisation point (i.e. a blocking call waiting
> for a "ready" event).
>
> At this stage I am wondering whether it might be actually better to revert
> to the previous state - where logical port creation was performed in the
> plugin [4].
>
The disadvantage of moving it back would be that we would then need to
provide a way for the plugins to talk to the OVN NB database securely. This
in itself is not a big deal in "overlay" mode for a single tenant, as we
have SSL support. But it becomes a problem for multi-tenancy, since a
container breakout would then give essentially full control over your
network.
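
For concreteness, this is roughly what "plugins talk to OVN NB" means in
practice: each plugin invocation would run ovn-nbctl against the NB database
directly, so every node would need NB credentials. (Untested sketch; the
switch/port names are made up, and the exact ovn-nbctl subcommand names may
differ between OVN versions.)

```python
import subprocess

def lport_commands(nb_db, switch, port, mac, ip):
    """Build the ovn-nbctl invocations a plugin would need if logical
    port creation moved back into the plugin (names illustrative)."""
    base = ["ovn-nbctl", "--db=%s" % nb_db]
    return [
        base + ["lsp-add", switch, port],
        base + ["lsp-set-addresses", port, "%s %s" % (mac, ip)],
    ]

def create_lport(nb_db, switch, port, mac, ip):
    """Run the commands; this is the part that requires NB access from
    every node, which is the multi-tenancy concern above."""
    for cmd in lport_commands(nb_db, switch, port, mac, ip):
        subprocess.check_call(cmd)
```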

So ideally your current model is better. The plugins already have access to
the k8s API server itself, so one way to synchronize would be to pass the
ready event via a pod annotation. You probably have better ideas.
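
On the plugin side that could look something like this (untested sketch;
the annotation key and helper names are made up, and the actual fetch from
the k8s API server is left abstract as a callable):

```python
import time

# Hypothetical annotation key; the real key would be whatever the OVN
# watcher sets once lport creation and ACL setup have completed.
READY_ANNOTATION = "ovn.kubernetes.io/pod-ready"

def annotations_ready(annotations):
    """Return True once the watcher has marked the pod's networking ready."""
    return annotations.get(READY_ANNOTATION) == "true"

def wait_for_ready(fetch_annotations, timeout=30.0, interval=0.5):
    """Block the CNI plugin until the ready annotation appears.

    fetch_annotations: callable returning the pod's current annotations
    (e.g. from a GET on the k8s API server).  Raises RuntimeError if the
    annotation does not show up before the timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if annotations_ready(fetch_annotations()):
            return True
        time.sleep(interval)
    raise RuntimeError("timed out waiting for OVN ready annotation")
```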



> I am wondering whether it might be fair to expect ovn-northd to be
> accessible from the control plane. What is your opinion?
>
> Should that be the way to go, the CNI plugin should also take care of
> implementing ACLs before returning (not just applying a drop-all rule as
> it does now [5]), since otherwise the networking configuration would not
> be complete (especially with a drop-all rule!).
>
> Salvatore
>
> [1] https://github.com/salv-orlando/ovn-kubernetes
> [2] https://github.com/shettyg/ovn-kubernetes
> [3] https://github.com/salv-orlando/ovn-kubernetes/commit/b951079fe3100160478f0cbc0eaf6729c088a4af
> [4] https://github.com/salv-orlando/ovn-kubernetes/blob/78c7b39715894cfa64066b294a80f55d2c01e356/bin/ovn_cni.py#L191
> [5] https://github.com/salv-orlando/ovn-kubernetes/blob/master/ovn_k8s/conn_processor.py#L59
>
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev
>