Hi Joe,

I think we are on the same page: your experience with OAM in the
X-Bone system seems to match my view of how it should be used in NVO3.
If that is the case, then we need a control protocol between the
VTEPs; there we can add different extensions while keeping the overlay
streams from becoming cumbersome for the hardware
architects/designers to implement.

Continuing to think outside the box: could we make the ECMP underlay
network less opaque to the VTEPs and provide more visibility into the
paths within the underlay network through this overlay tunnel control
plane protocol (OTCP)?
Say you have an underlay network with four spine switches and we have
this OTCP defined. Since it is a control plane solution, the current
underlay routers and hardware VTEPs could be upgraded with a software
release supporting OTCP (no hardware upgrade required). If the
underlay network is hashing on the transport protocol's source port,
then perhaps the VTEPs could discover the four different paths in a
two-tier leaf-spine network with four spines by sending OTCP
traceroutes through the network with different UDP source ports. Once
the four paths are discovered, OAM can be started over them, if so
desired.

If some CoS needs to be provided for the VTEPs in the network - e.g.
some end-system streams have subscribed to less overbooked paths in
the underlay while others have subscribed to best effort - then
perhaps OTCP could carry affinity fields, set by the spine routers on
their ports, to distinguish which paths in the underlay network are
for premium and which for best-effort end systems. If there is
congestion on one of the paths, the VTEPs would discover it through
the OAM mechanism and could then change the UDP source port so that
the traffic switches to another path. There are surely more use cases;
whether this is doable or needed at all remains to be seen - these are
early ideas, to be explored if the group finds them interesting.
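
The source-port probing idea above can be sketched roughly as follows.
The hash function, the spine names, and the port range are purely
illustrative assumptions; real routers use vendor-specific ECMP hashes
and the probes would of course be actual packets, not a local
computation:

```python
import hashlib

SPINES = ["spine1", "spine2", "spine3", "spine4"]  # assumed two-tier fabric

def ecmp_pick(src_ip, dst_ip, src_port, dst_port, proto=17):
    """Toy ECMP hash over the 5-tuple (stand-in for a router's hash)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return SPINES[digest % len(SPINES)]

def discover_paths(src_ip, dst_ip, dst_port=4789, max_probes=64):
    """Vary the UDP source port, as an OTCP traceroute would, until every
    spine has been seen, keeping one representative port per path."""
    port_for_path = {}
    for src_port in range(49152, 49152 + max_probes):  # ephemeral range
        spine = ecmp_pick(src_ip, dst_ip, src_port, dst_port)
        port_for_path.setdefault(spine, src_port)
        if len(port_for_path) == len(SPINES):
            break
    return port_for_path

paths = discover_paths("10.0.0.1", "10.0.1.1")
print(paths)
```

Steering a stream onto a less congested path then amounts to reusing
the source port that was recorded for the desired spine.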

It seems it would be a good idea to separate out the streaming overlay
tunnel encapsulation, optimized for hardware offloading, so that the
streams can stay in the data plane (NICs, ASICs), and to support
several encapsulation schemes so we do not close the innovation door
on forthcoming encapsulation schemes either. OAM needs to live in the
control plane of the VTEPs and should be generic across all
encapsulation schemes; here we could also add extensions and support
other VTEP control plane innovations.
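
One way to keep such a control-plane channel generic across
encapsulation schemes is a plain TLV container. The type codes and
layout below are hypothetical, just to illustrate how mandatory and
optional extensions could ride in the same message regardless of which
encapsulation the data plane uses:

```python
import struct

# Hypothetical type codes; an actual OTCP would register these.
TLV_LIVENESS = 1
TLV_PATH_AFFINITY = 2

def pack_tlv(tlv_type, value: bytes) -> bytes:
    """Type (1 byte) | Length (2 bytes, network order) | Value."""
    return struct.pack("!BH", tlv_type, len(value)) + value

def unpack_tlvs(buf: bytes):
    """Walk the buffer and return the (type, value) pairs in order."""
    tlvs = []
    off = 0
    while off < len(buf):
        t, length = struct.unpack_from("!BH", buf, off)
        off += 3
        tlvs.append((t, buf[off:off + length]))
        off += length
    return tlvs

msg = pack_tlv(TLV_LIVENESS, b"\x01") + pack_tlv(TLV_PATH_AFFINITY, b"premium")
print(unpack_tlvs(msg))  # [(1, b'\x01'), (2, b'premium')]
```

A receiver that does not understand a type can skip it by its length,
which is what keeps the channel open for later extensions.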

Regards,
Patrick

On Thu, Aug 11, 2016 at 2:28 AM, Joe Touch <[email protected]> wrote:
>
>
> On 8/10/2016 10:47 AM, Patrick Frejborg wrote:
>> Hi Joe,
>>
>> I have not stated that we should use RTCP/SIP as such, as you said
>> "using SIP as anything but general inspiration is useful here" is
>> totally correct.
> My misunderstanding; glad we're on the same page.
>
>
>> The VTEPs will more and more be deployed inside
>> hypervisors, hence running only this RTCP stylish monitoring protocol
>> (or other underlay monitoring solutions) only inside the underlay is
>> not good enough - it has to be deployed between VTEPs.
>
> I'm not clear on how it matters where you run any of this - the network
> should never care.
>
> I'm not suggesting you run the monitoring only in the underlay; I'm
> suggesting that whatever monitoring you run in the overlay can be
> basically the same as what you would run in an underlay. I.e., I'm not
> clear that overlays need new kinds of overlay-specific monitoring -
> except for endpoint liveness, which the control protocol ought to be
> doing anyway (FWIW, this is how we solved this issue in the X-Bone
> system - the control protocol had heartbeats for liveness and all other
> monitoring was done on the overlay itself).
>> If you
>> integrate that inside overlay tunnels and there are a lot of tunnels
>> between the VTEPs, then you most likely have a lot of OAMs going
>> between the VTEPs, doing the duplicate job for the same path because
>> they are integrated into the streams.
> Yes, you'd have lots of OAMs between the endpoints, but each could be
> for a different tunnel. Each tunnel might be treated differently by the
> underlay, so that kind of per-tunnel OAM is required IMO anyway. And the
> overhead of per-tunnel OAM ought to be small compared to the tunnel
> traffic (by design) anyway.
>
>> That OAM has to be managed by
>> the VTEPs - I have no idea how much that will burden the VTEP's
>> resources. Would it be more resource efficient to figure out a
>> mechanism so that you can setup OAM sessions between the hypervisor
>> VTEPs, covering all possibly ECMP paths through the underlay and no
>> more, regardless of how many overlay tunnels exists between the VTEPs?
> You can't do that unless you know how to control ECMP path
> differentiation in the underlay, which is rarely known.
>
>> And if that can be done, then you could add mandatory and optional
>> TLVs for other functions that can not be handled by the overlay
>> controller to be sent between the VTEPs - all this shouldn't interfere
>> hardware offloading for the end system streams between the VTEPs.
> OAM is control plane and thus typically in the software path anyway. All
> you need is a way to know whether to pass the traffic to the controller
> or not at the endpoints, and the encapsulation tells you that (you use
> the same mechanism that would apply in the underlay to know the
> difference between OAM traffic and user traffic).
>
> --
>
> Overall, IMO if you can tell you're on an overlay from within the
> overlay, you did it wrong (because someone WILL eventually want to run
> an overlay over the overlay too) - our system was capable of doing so
> over 15 levels deep. That's why we didn't treat OAM as integral to the
> tunneling mechanism.
>
> Joe

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3