Hi Jeff,

Imagine two scenarios which were already highlighted as justification for
this work:

*Scenario 1 -* IGP with nodes interconnected with ECMP links

*Scenario 2 -* IGP nodes interconnected with L2 emulated circuits which in
turn ride on a telco IP network with ECMPs or LAGs.

*Questions on Scenario 1 -*

Is the idea in those cases to use a vendor's "ECMP-Aware BFD for LDP LSPs"
feature to detect MTU issues on any of the L3 paths? Is there a feature
extension to accomplish the same without LDP, i.e., when simply using ECMP
with OSPF?

How do you solve this when L2 LAG hashing across N links is enabled?
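
To illustrate why this is hard (a minimal sketch; the hashed fields and
the hash itself are my illustrative assumptions, not any vendor's actual
algorithm): the LAG pins each flow to one member link, so a BFD session
with its fixed 5-tuple only ever exercises one of the N links:

    import hashlib

    def lag_member(src_ip, dst_ip, src_port, dst_port, proto, n_links):
        """Pick a LAG member link by hashing the flow 5-tuple."""
        key = "%s|%s|%d|%d|%d" % (src_ip, dst_ip, src_port, dst_port, proto)
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4],
                              "big") % n_links

    # Single-hop BFD uses UDP destination port 3784 and a fixed source
    # port, so every control packet of a session takes the same member
    # link - the other N-1 links are never tested:
    print(lag_member("192.0.2.1", "192.0.2.2", 49152, 3784, 17, 4))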

*Question on Scenario 2 -*

How do you detect an MTU problem if your L2 circuit provider maps the BFD
flows to one underlay path while some encapsulated data packets are hashed
to traverse the other path(s)? Clearly, running multiple BFD sessions is
not going to help much in this scenario. For example, if someone is using
the IPv6 flow label, it may be copied directly into the outer service
header.
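
To make that concrete (again a hedged sketch - the encapsulation and the
hash are hypothetical, not any particular provider's data plane): if the
ingress device copies the inner flow label into the outer header and the
underlay hashes on it, the BFD session's single label pins its probes to
one underlay path while data flows land on paths BFD never sees:

    import hashlib

    def underlay_path(outer_src, outer_dst, flow_label, n_paths):
        """Underlay ECMP choice keyed on the outer addresses plus the
        copied inner IPv6 flow label."""
        key = "%s|%s|%05x" % (outer_src, outer_dst, flow_label)
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4],
                              "big") % n_paths

    bfd_label = 0x00b4d  # the one flow label the BFD session carries
    print("BFD probes path", underlay_path("198.51.100.1",
                                           "198.51.100.2", bfd_label, 8))
    for fl in (0x1a2b3, 0x04c5d, 0x0e6f7):  # sample data-flow labels
        print("data flow %05x -> path %d"
              % (fl, underlay_path("198.51.100.1", "198.51.100.2", fl, 8)))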

Many thx,
Robert.


On Thu, Sep 26, 2019 at 9:46 PM Jeffrey Haas <jh...@pfrc.org> wrote:

> Les,
>
> On Tue, Sep 24, 2019 at 10:48:51PM +0000, Les Ginsberg (ginsberg) wrote:
> > A few more thoughts - maybe these are more helpful than my previous
> > comments - maybe not. I am sure you will let me know.
> >
> > Protocol extensions allowing negotiation and/or advertisement of support
> > for larger PDUs may well be useful - but let's agree that it is desirable
> > to deploy this without protocol extensions just to keep the
> > interoperability bar low.
> >
> > My primary angst is with the following paragraph in Section 3:
> >
> > "It is also worthy of note that even if an implementation can function
> >    with larger transport PDUs, that additional packet size may have
> >    impact on BFD scaling.  Such systems may support a lower transmission
> >    interval (bfd.DesiredMinTxInterval) when operating in large packet
> >    mode.  This interval may depend on the size of the transport PDU."
> >
> > Given long experience that CPU use correlates more highly with number of
> > packets than with number of bytes, the first sentence would seem to be
> > weakly supported.
> > Given the previously mentioned concerns about detection time, the second
> > sentence seems to compromise the value of the extension.
>
> My experience is largely identical to yours.
>
> The motivation for mentioning anything at all here is TANSTAAFL[1], and
> we've already had people ask about possible impacts.  And, as we discussed
> previously in the thread, we shall inevitably get asked about it during
> TSV review in the IESG.
>
> The primary reason this is a "may" in the non-RFC 2119 sense is that our
> experience also suggests that when the scaling impacts are primarily pps
> rather than bps, this feature will likely have no major impact on
> implementations beyond your valid concerns about exercising bugs.
>
> I suspect had this not been mentioned at all, you would have been happier.
> But you're not the target audience for this weak caveat.
>
> > What might be better?
> >
> > 1) Some statement that MTU isn't necessarily a consistent value for all
> > systems connected to an interface - which can impact the results when
> > large BFD packets are used. Implementations might then want to consider
> > supporting "bfd-mtu" configuration and/or iterating across a range of
> > packet sizes to determine what works and what doesn't.
>
> I'm not clear what you intend by this statement.
>
> Are you asking that we emphasize the use case in a different way?  The
> Introduction currently states:
>   "However,
>    some applications may require that the Path MTU [RFC1191] between
>    those two systems meets a certain minimum criteria.  When the Path
>    MTU decreases below the minimum threshold, those applications may
>    wish to consider the path unusable."
>
> I'm also unclear what "Implementations" may refer to here.  BFD?  An
> arbitrary user application?  If the latter, the application may not have
> strict control over the generation of a given PDU size; e.g. TCP
> applications.
>
> > 2) Use of both padded and unpadded packets in combination with
> > draft-ietf-bfd-stability to determine whether a BFD failure is due to
> > padding or a generic forwarding failure.
> >
> > Both of these suggestions are really "diagnostic modes" which may help
> > diagnose a problem but aren't meant to be used continuously as part of
> > fast failure detection.
>
> We could certainly add a paragraph or two as an application note about
> using this for BFD stability purposes as well.
>
> -- Jeff
>
>
