To create a test suite that is driver agnostic, I would need to widen
the boundaries at which each individual test case is conducted. Right
now, I test jumbo frame behavior at the MTU boundary and at one byte
below and above it. But packets in TestPMD are dropped or forwarded
based on the MTU size plus additional Ethernet overhead, so we would
either need to agree on a universal expected value for Ethernet
overhead across all devices or come up with another solution.
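
As a rough sketch of what I mean (a Python sketch, not actual DTS
code; the 18-byte ETHER_OVERHEAD value below is only an assumption and
is exactly the number we would need to agree on):

    # Hypothetical helper: derive the frame sizes for each boundary case,
    # given the configured MTU and an agreed-upon Ethernet overhead value.
    ETHER_OVERHEAD = 18  # assumed: 14-byte Ethernet header + 4 bytes of L2 overhead

    def boundary_frame_sizes(mtu: int) -> dict[str, int]:
        """Frame sizes (L2 header included) around the MTU boundary."""
        max_frame = mtu + ETHER_OVERHEAD
        return {
            "below": max_frame - 1,  # should be forwarded
            "at": max_frame,         # should be forwarded
            "above": max_frame + 1,  # should be dropped
        }

    # e.g. boundary_frame_sizes(9000) -> {'below': 9017, 'at': 9018, 'above': 9019}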

I have created a Bugzilla ticket to discuss this further; it might
make sense to continue the discussion outside of this thread.

https://bugs.dpdk.org/show_bug.cgi?id=1476
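
For reference, the numbers reported further down in this thread, laid
out as arithmetic (these are observed values, not a specification):

    --max-pkt-len 1518 (the documented default):
      Mellanox:        reported MTU = 1518 - 18 = 1500  (14-byte Ethernet header + Dot1Q)
      Intel/Broadcom:  reported MTU = 1518 - 26 = 1492  (26 - 18 = 8 bytes unaccounted for)
      with MTU 1492, the largest frame the SUT accepts is 1514 bytes, i.e. MTU + 22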




On Tue, Jun 25, 2024 at 5:57 PM Thomas Monjalon <tho...@monjalon.net> wrote:
>
> Nicholas, you are writing a test for the API.
> You should not adapt to the driver behaviour.
> If the driver does not report what we can expect from the API definition,
> it is a bug.
>
> Ferruh, please can you explain what is the problem with MTU sizes?
>
>
> 25/06/2024 21:57, Nicholas Pratte:
> > The previous comments led me to investigate how MTUs are allocated
> > in testpmd. --max-pkt-len, which the documentation states defaults
> > to 1518, shaves off 18 bytes to account for the 14-byte Ethernet
> > header and the Dot1Q tag. This is the case when using Mellanox, but
> > for both Intel and Broadcom, explicitly setting --max-pkt-len to
> > 1518 results in an MTU of 1492 in 'show port info'. So 26 bytes are
> > being shaved off, leaving 8 unknown bytes. Does anyone know what
> > these extra 8 bytes are? I wondered whether they might be VXLAN, FCS
> > or something else, but it seems easier to ask; I can't find anything
> > about it in the documentation.
> >
> > As for how this relates to the test suite at hand, the
> > send_packet_and_capture() method will need to be reworked to
> > compensate for the extra 4 Dot1Q header bytes, but I'm still curious
> > about the extra 8 bytes on the Intel and Broadcom devices I've
> > tested on; again, these bytes are not present on Mellanox, which is
> > simply bound to its vendor-specific kernel driver. When I manually
> > set --max-pkt-len to 1518, which results in an MTU of 1492 bytes,
> > the largest packet the SUT accepts is 1514 bytes, including the
> > frame header. So I'm seeing the DPDK MTU output plus 22, even when
> > using 'port config mtu 0 1500', for instance, and I'm not sure why
> > 26 bytes are subtracted here.
> >
> > On Fri, Jun 21, 2024 at 7:55 PM Honnappa Nagarahalli
> > <honnappa.nagaraha...@arm.com> wrote:
> > >
> > >
> > >
> > > > On Jun 21, 2024, at 5:18 PM, Stephen Hemminger <step...@networkplumber.org> wrote:
> > > >
> > > > On Fri, 21 Jun 2024 17:19:21 -0400
> > > > Nicholas Pratte <npra...@iol.unh.edu> wrote:
> > > >
> > > >> +The test suite ensures the consistency of jumbo frames transmission within
> > > >> +Poll Mode Drivers using a series of individual test cases. If a Poll Mode
> > > >> +Driver receives a packet that is greater than its assigned MTU length, then
> > > >> +that packet will be dropped, and thus not received. Likewise, if a Poll Mode Driver
> > > >> +receives a packet that is less than or equal to its designated MTU length, then the
> > > >> +packet should be transmitted by the Poll Mode Driver, completing a cycle within the
> > > >> +testbed and getting received by the traffic generator. Thus, the following test suite
> > > >> +evaluates the behavior within all possible edge cases, ensuring that a test Poll
> > > >> +Mode Driver strictly abides by the above implications.
> > > >
> > > > There are some weird drivers where MRU and MTU are not the same thing.
> > > > I believe the e1000 HW only allowed setting buffer size to a power of 2.
> > > > At least on Linux, that meant that with 1500 byte MTU it would receive an up to 2K packet.
> > > > This never caused any problem for upper layer protocols, just some picky conformance tests.
> > > The test cases should not concern themselves with individual PMD behaviors. They should be based on the API definition.
> > >
> >
