> > It’s not a matter of *what* the program is reading, but *where* it's 
> > reading in the buffer. This makes it usable for *all* programs reading this 
> > file format, not just Wireshark. Prefixing it with zero padding (even a 
> > nibble) would achieve that.
> As would changing the spec to indicate that the preamble may reflect the 
> length of the preamble as received, and thus that it's from 1 to 7 octets.

Formally, 802.3br requires all receivers to accept an arbitrary preamble 
length. I would argue that a dissector should aim to act like a receiver, so 
there is no point in limiting it to 7 octets.
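Accepting any preamble length at dissection time is straightforward: scan the 
leading 0x55 preamble octets until the 0xD5 SFD. A minimal sketch in Python 
(hypothetical helper, not actual Wireshark dissector code, which would be in C):

```python
def split_preamble(capture: bytes):
    """Split a captured frame that still carries its preamble.

    Accepts any number of leading 0x55 preamble octets (0 to 7 in
    practice), followed by the 0xD5 start-of-frame delimiter (SFD).
    Returns (preamble_octets, mac_frame).
    """
    i = 0
    while i < len(capture) and capture[i] == 0x55:
        i += 1
    if i == len(capture) or capture[i] != 0xD5:
        raise ValueError("no SFD found after preamble")
    return i, capture[i + 1:]
```

The same loop handles a full 7-octet preamble and a chopped one alike, which 
is exactly the "act like a receiver" behaviour argued for above.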

> I infer from what Timmy Brolin said, and from IEEE Std 802.3-2018, that 
> there's no guarantee that the receiver will see all the preamble bits sent 
> by the MAC layer, so I don't see this as indicating how long the packet was 
> on the wire.  
> At least as I read section 22.2.3.2.2 "Receive case" of 802.3-2018, in the 
> clause talking about 100 Mbit Ethernet, all or none of the preamble may be 
> received over the MII - "Table 22–4 depicts the case where no preamble 
> nibbles are conveyed across the MII, and Table 22–5 depicts the case where 
> the entire preamble is conveyed across the MII." - (and I suspect any value 
> between "all" and "none" may be received).

> At least as I read Figure 24-11 "Receive state diagram, part a", on the 
> wire/fibre, the preamble begins with two special 5-bit code-groups J and K, 
> in order, indicating the beginning of a bit stream.  After that come more 
> 5-bit code-groups which encode the nibbles of the preamble, SFD, and data 
> (including the MAC header, payload, and FCS).  I infer from that diagram that 
> the preamble isn't used for synchronization on the wire; it may be used for 
> synchronization between the MII and the PHY.

It works a bit differently for the different Ethernet standards.
* On 10 Mbit Ethernet, the preamble IS used for synchronization on the wire. 
In this case, preamble bits can in some sense be "lost on the wire", because 
the receiver may need an extra bit transition or two to sync up.
* On 100 Mbit Ethernet, the PCS replaces the first two nibbles of the preamble 
with J and K. On the receiving end, it replaces J and K with two preamble 
nibbles again.
* On Gigabit Ethernet, a 16-bit code group replaces the first bytes of the 
preamble. It is similar in spirit to J and K, but 16 bits long.


> So it sounds as if a short preamble could be received because:
>       the transmitting station didn't send the entire preamble down its MII 
> (which means the transmitter is cheating, given that 22.2.3.2.1 "Transmit 
> case" says 7 octets are sent down the MII), and thus it wasn't put on the 
> wire after the J/K;

There are specialized examples where this is done intentionally to improve 
performance: the full 7-byte preamble is essentially wasted bandwidth on 
100 Mbit and Gigabit Ethernet.
There are also cases where the implementation of the MAC layer cheats and 
drops some preamble to simplify its implementation. Not very common, but such 
implementations exist.

>       the transmitting station did send the entire preamble down its MII, but 
> the receiving station's Reconciliation Sublayer (RS) didn't manage to sync up 
> with its Physical Coding Sublayer (PCS) because it didn't sync up with the 
> PCS immediately - it needed to see a few bit transitions;

This can happen on 10 Mbit Ethernet.
But it is also quite common for 100 Mbit and Gigabit PHYs, which chop off some 
preamble to save latency or to simplify implementation.

>       the PCS Just Didn't Bother Sending The Full Preamble.

Very common practice today, unfortunately. It is done to reduce latency or to 
simplify implementation.

> I don't think bits of the preamble would be lost *over the wire*, as I infer 
> that the receipt of the J/K starts the reception process, and if it's not in 
> sync with the transmitter at that point, no frame is going to be received.

J/K only exists on 100 Mbit Ethernet. 10 Mbit Ethernet is different.

You can also have network equipment on the wire that chops off some preamble. 
This can happen in some copper/fiber media converters, for example.

> I also *suspect* that "the PCS Just Didn't Bother Sending The Full Preamble" 
> isn't likely to be the cause of a short preamble.
> A capture at the RS layer can't, as far as I know, distinguish between the 
> first two cases.

Indeed.

> The first case indicates that the transmitting station is trying to reduce 
> its use of network bandwidth and/or reduce the latency for the packet.
> The second case indicates - something?
> Other PHYs may behave differently.
> The reasons for *not* padding a short preamble that I can see would be
>       1) extra stuff for the receiver to do if it receives a short preamble;
>       2) loss of an indication that the preamble is short - that indication 
> is presumably of interest to people reading the capture for purposes of 
> diagnosing low-level Ethernet issues (meaning "probably of interest to people 
> capturing on a network *NOT* using 802.3br" - as far as I know, there's 
> nothing about 802.3br that makes the length of the preamble more relevant), 
> otherwise nobody'd be asking for dissectors to handle short preambles.

Preamble length becomes interesting in certain real-time communication 
scenarios, for example.
Shortening the preamble changes the length of the packet, and hence it changes 
some aspects of timing on the network. Therefore it is of interest to have an 
accurate view of this when dissecting packets.
802.3br is often used together with other parts of the "TSN" umbrella of 
standards in real-time communication protocols.
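To put a number on the timing effect, here is a rough back-of-the-envelope 
sketch (my own illustration, not taken from any standard): the on-wire 
duration of a packet includes the preamble, so chopping it shortens the 
packet's occupancy of the medium.

```python
def wire_time_ns(frame_octets: int, preamble_octets: int = 7,
                 rate_bps: int = 100_000_000) -> float:
    """Approximate on-wire duration of one packet, in nanoseconds.

    Counts preamble + 1-octet SFD + MAC frame + 12-octet inter-frame
    gap. Illustrative only; real PHYs add line-coding overhead that is
    not modeled here.
    """
    octets = preamble_octets + 1 + frame_octets + 12
    return octets * 8 * 1e9 / rate_bps

# A preamble chopped from 7 to 3 octets shaves 32 bit times off each
# packet: 320 ns at 100 Mbit/s.
```

Small per-packet, but on a tightly scheduled TSN network that shift is visible 
in the timing analysis, which is why an accurate preamble length in the 
capture matters.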


In extreme cases, you can have enough preamble-chopping devices on the wire to 
actually threaten the integrity of your packets. That is also something which 
would be useful to monitor for troubleshooting purposes.
For example, if you are unlucky and the MAC and PHY on both the transmitting 
and the receiving end each chop one byte, you are down to 3 bytes of preamble. 
Now add two media converters which by chance also happen to be of the 
preamble-chopping type. Now you are down to 1 byte, which means your packet is 
corrupted, because Gigabit Ethernet replaces two preamble bytes with a 16-bit 
codeword to signal start of frame.
This is rare and extreme, but possible. And certainly useful to be able to 
troubleshoot.
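The budget in that worst case is just a running subtraction; a trivial sketch 
(my own illustration) makes the arithmetic explicit:

```python
def remaining_preamble(chops, sent_octets=7):
    """Preamble octets left after each device on the path chops some."""
    return max(sent_octets - sum(chops), 0)

# Worst case from the example: TX MAC, TX PHY, RX PHY and RX MAC each
# chop one octet, plus two preamble-chopping media converters.
left = remaining_preamble([1, 1, 1, 1, 1, 1])
# 1 octet left: below the 2 octets that Gigabit Ethernet's 16-bit start
# codeword consumes, so the frame can no longer be delivered intact.
```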

Regards,
Timmy Brolin

___________________________________________________________________________
Sent via:    Wireshark-dev mailing list <wireshark-dev@wireshark.org>
Archives:    https://www.wireshark.org/lists/wireshark-dev
Unsubscribe: https://www.wireshark.org/mailman/options/wireshark-dev
             mailto:wireshark-dev-requ...@wireshark.org?subject=unsubscribe
___________________________________________________________________________