Jeff - Inline.
> -----Original Message-----
> From: Jeffrey Haas <jh...@pfrc.org>
> Sent: Thursday, September 26, 2019 12:50 PM
> To: Les Ginsberg (ginsberg) <ginsb...@cisco.com>
> Cc: Ketan Talaulikar (ketant) <ket...@cisco.com>; Reshad Rahman
> (rrahman) <rrah...@cisco.com>; rtg-bfd@ietf.org
> Subject: Re: WGLC for draft-ietf-bfd-large-packets
>
> Les,
>
> On Tue, Sep 24, 2019 at 10:48:51PM +0000, Les Ginsberg (ginsberg) wrote:
> > A few more thoughts - maybe these are more helpful than my previous
> > comments - maybe not. I am sure you will let me know.
> >
> > Protocol extensions allowing negotiation and/or advertisement of
> > support for larger PDUs may well be useful - but let's agree that it
> > is desirable to deploy this without protocol extensions just to keep
> > the interoperability bar low.
> >
> > My primary angst is with the following paragraph in Section 3:
> >
> > "It is also worthy of note that even if an implementation can function
> > with larger transport PDUs, that additional packet size may have
> > impact on BFD scaling. Such systems may support a lower transmission
> > interval (bfd.DesiredMinTxInterval) when operating in large packet
> > mode. This interval may depend on the size of the transport PDU."
> >
> > Given long experience that CPU use correlates more highly with number
> > of packets than with number of bytes, the first sentence would seem
> > to be weakly supported.
> > Given the previously mentioned concerns about detection time, the
> > second sentence seems to compromise the value of the extension.
>
> My experience is largely identical to yours.
>
> The motivation for mentioning anything at all here is TANSTAAFL[1], and
> we've already had people ask about possible impacts. And, as we
> discussed previously in the thread, we shall inevitably get asked about
> it during TSV review in IESG.
>
> The primary reason this is a "may" in the non-RFC 2119 sense is that
> our experience also suggests that when the scaling impacts are
> primarily pps rather than bps, this feature will likely have no major
> impact on implementations beyond your valid concerns about exercising
> bugs.
>
> I suspect had this not been mentioned at all, you would have been
> happier. But you're not the target audience for this weak caveat.

[Les:] I am not opposed to a discussion of potential issues in the
draft - rather, I am encouraging it. But the current text isn't really
on the mark as far as potential issues - and we seem to agree on that.
It also suggests lengthening detection time to compensate - which I
think is not at all what you want to suggest, as it diminishes the
value of the extension. It also isn't likely to address a real problem.

For me, the potential issues are:

a) Some BFD implementations might not be able to handle MTU-sized BFD
packets - not because of performance, but because they did not expect
BFD packets to be full size and therefore might have issues passing a
large packet through the local processing engine.

b) Accepted MTU is impacted by encapsulations and what layer is being
considered (L2 or L3). And oftentimes link MTUs do not match on both
ends ("shudder"), so you might end up with unidirectional connectivity.
I appreciate that this is exactly the problem that the extensions are
designed to detect.

I am just asking that these issues be discussed more explicitly as an
aid to the implementor. If that also makes Transport ADs happier, that
is a side benefit - but that's not my motivation.

> > What might be better?
> >
> > 1) Some statement that MTU isn't necessarily a consistent value for
> > all systems connected to an interface - which can impact the results
> > when large BFD packets are used.
> > Implementations might then want to consider supporting "bfd-mtu"
> > configuration and/or iterating across a range of packet sizes to
> > determine what works and what doesn't.
>
> I'm not clear what you intend by this statement.
>
> Are you asking that we emphasize the use case in a different way? The
> Introduction currently states:
>    "However,
>    some applications may require that the Path MTU [RFC1191] between
>    those two systems meets a certain minimum criteria. When the Path
>    MTU decreases below the minimum threshold, those applications may
>    wish to consider the path unusable."
>
> I'm also unclear what "Implementations" may refer to here. BFD? An
> arbitrary user application? If the latter, the application may not
> have strict control over the generation of a given PDU size; e.g. TCP
> applications.

[Les:] I am talking about BFD implementations. I suppose one can
imagine each BFD client requesting a certain MTU value - but that
wouldn't be my choice. I would think the value we want is really the
maximum L3 payload that the link is intended to support - which should
be independent of the BFD client. This might be larger than any client
actually uses - but that seems like a good thing.

   Les

> > 2) Use of both padded and unpadded packets in combination with
> > draft-ietf-bfd-stability to determine whether a BFD failure is due
> > to padding or a generic forwarding failure.
> >
> > Either of these suggestions is really a "diagnostic mode" which may
> > help diagnose a problem but isn't meant to be used continuously as
> > part of fast failure detection.
>
> We could certainly add a paragraph or two as an application note
> about using this for BFD stability purposes as well.
>
> -- Jeff
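[Les, appended:] For what it's worth, the "iterate across a range of
packet sizes" idea in point 1) above amounts to a search for the
largest padded PDU that still traverses the path. A rough sketch,
purely illustrative and not from the draft - the `probe` callback and
the size bounds are hypothetical; a real BFD implementation would
transmit a padded BFD packet and observe whether the session stays Up:

```python
def max_working_size(probe, lo=64, hi=9000):
    """Binary-search [lo, hi] for the largest packet size for which
    probe(size) is True; return None if even the smallest size fails.
    probe is an assumed callback: True if a padded PDU of that size
    made it across the path, False otherwise."""
    if not probe(lo):
        return None          # path won't even pass minimal packets
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe(mid):
            best = mid
            lo = mid + 1     # mid worked; try something larger
        else:
            hi = mid - 1     # mid failed; try something smaller
    return best

# Example with a simulated path whose effective MTU is 1500 bytes:
print(max_working_size(lambda size: size <= 1500))  # -> 1500
```

This converges in O(log n) probes rather than walking every size,
which matters if each probe costs a detection interval.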