Matt Mathis <matt.mat...@gmail.com> wrote:
>      - There is not a conformance specification for jumbo, and
> interoperability is not guaranteed
> ...
> - Alarming discovery when PLPMTUD [RFC 4821] was almost done
>      - we encountered a device (gbic)  that ran error free at 1500B, but
> showed 1% loss at 4kB
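
A quick sanity check (my own arithmetic, not from the slides) shows why
that observation is so odd: if the 4 kB losses were ordinary independent
bit errors, they should also have been visible at 1500 B.

```python
# Back-of-envelope check: if the 1% frame loss at 4 kB came from
# independent random bit errors, what loss rate would the same link
# show at 1500 bytes?  ("4kB" is assumed to mean 4096-byte frames.)
frame_bits_4k = 4096 * 8
frame_bits_1500 = 1500 * 8

# Solve 1 - (1 - p)**frame_bits_4k == 0.01 for the per-bit error rate p.
p = 1 - (1 - 0.01) ** (1 / frame_bits_4k)

loss_1500 = 1 - (1 - p) ** frame_bits_1500
print(f"implied bit error rate: {p:.2e}")              # ~3e-7
print(f"predicted 1500B frame loss: {loss_1500:.2%}")  # ~0.37%
```

A uniform-BER link that loses 1% of 4 kB frames should lose roughly 0.4%
of 1500 B frames too, which would be easy to see.  Running "error free"
at 1500 B points to a genuinely length-dependent failure mode.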

If IEEE is unresponsive, there is another place to go to improve a
specification.

GBIC, SFP, SFP+, SFP28, QSFP, etc. are not defined by IEEE.  They are
defined in "Multi-Source Agreements" published by the "Small Form Factor
Committee", which started as an ad-hoc group and was absorbed by the
Storage Networking Industry Association in mid-2016.

The point of a multi-source agreement is to allow customers to buy from
any participating source, because all of those sources have agreed to
meet the specifications in the agreement.

  https://en.wikipedia.org/wiki/Small_Form-factor_Pluggable
  https://members.snia.org/document/dl/26184

The SFP specification at that last URL has many "length" fields, but
they all refer to how long the cable or fiber can be from transmitter to
receiver.  There are no specs in there about how long each *packet* can
be; those seem to have been punted to the relevant IEEE Ethernet
specifications (e.g. for 1000BASE-T or 1000BASE-LX; each SFP specifies
which subset of those specs it is compatible with).
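
To make the distinction concrete, here is a sketch of decoding those
link-"length" fields from an SFP's Serial ID block (the 96-byte table at
I2C address 0xA0).  The byte offsets and units below follow my reading
of SFF-8472 and should be double-checked against the revision a given
module actually implements.

```python
# Decode the advertised link reach from a Serial ID (A0h) image.
# Offsets/units assume the SFF-8472 layout; verify before relying on them.
def decode_link_lengths(a0: bytes) -> dict:
    """Return the advertised link reach in meters, keyed by medium."""
    return {
        "smf":     a0[14] * 1000,  # single-mode fiber, units of 1 km
        "smf_alt": a0[15] * 100,   # single-mode fiber, units of 100 m
        "om2":     a0[16] * 10,    # 50 um multimode, units of 10 m
        "om1":     a0[17] * 10,    # 62.5 um multimode, units of 10 m
        "copper":  a0[18],         # copper cable, units of 1 m
    }

# Example with a fabricated EEPROM image: a 10 km single-mode module.
eeprom = bytearray(96)
eeprom[14] = 10
print(decode_link_lengths(bytes(eeprom))["smf"])  # -> 10000
```

Note that every one of these fields describes the reach of the physical
medium; none of them says anything about how long a frame may be.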

A future evolution of these ad-hoc standards COULD be written to specify
a maximum frame length that the SFP supports at the upstream standard's
specified error rate.  And it would be prudent to add information fields
to the SFP's "Serial ID" data block, which, if set, would specify the
device's error rate at each supported and tested packet length (by
default, just at 1500 bytes).  Then the equipment that that SFP plugs
into could determine whether it was willing to accept (or e.g. try to
correct) that error rate for packets of a given size, versus reporting a
failure upstream without ever trying to send a packet of a particular
size through that SFP.  It could also log the issue, so the equipment
owner can consider upgrading an SFP that's only spec'd for short frames,
to a compatible one whose manufacturer fully tested its support for
larger frames.
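
As an entirely invented sketch of that idea (no such fields exist in any
current MSA; the names, layout, and threshold below are all hypothetical),
a host could read a table of manufacturer-tested (frame length, error
rate) pairs and pick the largest frame size it is willing to accept:

```python
# Hypothetical: a future MSA reserves a table of tested frame lengths and
# their measured error rates in the module ID data.  Everything here is
# illustrative only -- nothing like it exists in SFF-8472 today.
from dataclasses import dataclass

@dataclass
class TestedLength:
    frame_bytes: int         # frame size the manufacturer tested
    frame_error_rate: float  # measured frame loss at that size

def max_safe_mtu(tested: list[TestedLength],
                 acceptable_fer: float = 1e-6) -> int:
    """Largest tested frame size whose error rate the host will accept."""
    ok = [t.frame_bytes for t in tested if t.frame_error_rate <= acceptable_fer]
    return max(ok, default=0)

# A module tested clean at 1500 B but lossy at 4 kB (as in the GBIC
# anecdote) would be limited to 1500 B frames by the host:
module = [TestedLength(1500, 1e-9), TestedLength(4096, 1e-2)]
print(max_safe_mtu(module))  # -> 1500
```

The host could then clamp its interface MTU, attempt correction, or log
the limitation for the operator, rather than silently losing jumbo frames.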

(This is a more flexible approach than trying to mandate that the SFP,
or the device that it plugs into, should deterministically FAIL every
time a frame of a particular size is presented to it.  It also requires
no active circuitry, merely reporting upstream the results of
manufacturer testing of that batch of interface chips or modules.  And
furthermore, writing a new IETF spec requiring deterministic failure, as
was proposed in the slides, would not eliminate old pre-spec equipment
from the Internet, and thus would not solve the problem.  So the question
is more about how to make incremental improvements as equipment is
upgraded, or as a jumbo support requirement is imposed on an existing
network.)

The SFP spec says:

  Documents created by the SFF Committee are expected to be submitted to
  bodies such as EIA (Electronic Industries Association) or an ASC
  (Accredited Standards Committee). They may be accepted for separate
  standards, or incorporated into other standards activities.

The SFF committee's current home is:

  https://www.snia.org/sff

For example, they published SFF-8679, the QSFP+ 4X spec, on 2023-09-07,
and SFF-8665, the QSFP28 spec, for implementing pluggable 100 Gb
Ethernet.  (Those specs also say nothing about error rates or frame
lengths.)

The device that failed 1% of jumbo packets was a GBIC?  The GBIC standard
is defined by SFF-8053i, rev 5.5 of September 2000, which also has no
packet-size specification:

  https://members.snia.org/document/dl/26895

But, was the failure in the GBIC's receive circuitry?  Or in the
circuitry in the Ethernet chip's deserializer on the board that hosted
the GBIC?  Or was it a failure to produce error-free signals in the
transmitter that was on the other end of the cable, sending to the GBIC?
The slide says "the device was under beta test and had to be returned",
but what year was this that a GBIC or something that it plugs into was
being "beta" tested?  Are 20-year-old devices implementing the GBIC
interface not considered obsolete nowadays?  If the packet length
problem is not actually widespread in more modern equipment, should it
derail efforts to standardize jumbo packets?

I wonder if you might want to explore broader testing, as well as
contacting the SFF committee to discuss this packet-length versus
error-rate topic there.

You can read the ugly commentary in the 2006 IEEE "Higher Speed Ethernet
Study Group" (40Gb and 100Gb) via the link below.  The architects on
that committee had enough trouble just increasing speeds, and basically
wanted the frame size issue to go away.  So they decided that even
parties who want to offer jumbo frame support in their own networks
will have no IEEE spec to point to for it.  Even an honest effort to
merely say "PHYs and physical layer specifications shall not be done in
any way that precludes transmission of frames up to XX Kbytes in length"
was rejected.  See:

  https://www.ieee802.org/3/ba/index.html
  
  https://odysseus.ieee.org/query.html?col=stds&qp=url%3A%2F3%2Fba%2Fpublic&qp=url%3A%2F3%2Fhssg%2Femail&qp=url%3A%2F3%2Fhssg%2Fpublic&qt=jumbo&qs=&qc=stds&ws=0&qm=0&st=1&nh=25&lk=1&rf=0&oq=&rq=0

        John
        

_______________________________________________
Int-area mailing list -- int-area@ietf.org