On 14/04/2023 10:30, Rob Kossler wrote:


    One of the things that puzzles me is that 12.5 Msps just isn't that
    high a streaming rate; in fact, it's fully supported over a *1* GBit
    interface.

    At 12.5 Msps, that buffer fills (drains) in about 2.5 ms. There's
    plenty of buffering on the host to absorb application scheduling
    issues, so I don't know where these underruns would be coming from.
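(For scale: 2.5 ms at 12.5 Msps works out to roughly 31,000 samples, or
about 125 kB of 4-byte sc16 samples on the wire.)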

I don't really know what the OS does in terms of "transmit" buffering (I'm slightly more confident about its behavior for received packets). I can say that avoiding "U" has always been harder for me than avoiding "O". My concern is that the OS does little (perhaps no) buffering on the Tx side, so that if things pause for the 2.5 ms you mentioned, a "U" occurs.
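As a side note on where those "U"s come from: UHD reports TX underflows as
async messages on the TX streamer, and the host can poll for them explicitly
rather than just watching stderr. A minimal sketch using the standard UHD C++
API; the device address, rate, and loop count are placeholders, not anything
from this thread:

    // Stream zeros to TX and watch for underflow async messages.
    #include <uhd/usrp/multi_usrp.hpp>
    #include <complex>
    #include <iostream>
    #include <vector>

    int main()
    {
        auto usrp = uhd::usrp::multi_usrp::make("addr=192.168.10.2"); // placeholder
        usrp->set_tx_rate(12.5e6);

        uhd::stream_args_t stream_args("fc32", "sc16");
        auto tx_stream = usrp->get_tx_stream(stream_args);

        std::vector<std::complex<float>> buff(tx_stream->get_max_num_samps());
        uhd::tx_metadata_t md;
        md.start_of_burst = true;

        for (int i = 0; i < 100000; i++) {
            tx_stream->send(&buff.front(), buff.size(), md);
            md.start_of_burst = false;

            // Each "U" printed by UHD corresponds to one of these events.
            uhd::async_metadata_t async_md;
            while (tx_stream->recv_async_msg(async_md, 0.0)) {
                if (async_md.event_code == uhd::async_metadata_t::EVENT_CODE_UNDERFLOW)
                    std::cout << "underflow reported by device" << std::endl;
            }
        }
        md.end_of_burst = true;
        tx_stream->send("", 0, md); // signal end of burst
        return 0;
    }

Polling the async channel with a zero timeout keeps the send loop itself from
blocking.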

But, one more comment about incorporating the DRAM fifo: I noticed that Ettus has a BIST image that uses this FIFO for the N310 (see here <https://github.com/EttusResearch/uhd/blob/master/fpga/usrp3/top/n3xx/n310_bist_image_core.yml>). So, this would be a great example to use for creating a custom image.
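(For anyone who wants to try that route: assuming a UHD 4.x source tree, that
.yml file is an input to the rfnoc_image_builder tool, so a custom image would
be built with something along the lines of

    rfnoc_image_builder -y my_n310_image_core.yml -t N310_HG

where my_n310_image_core.yml is a hypothetical copy of the BIST file trimmed
to the blocks you actually want, and the -t target has to match your unit's
SFP configuration.)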
Rob
The OS networking stack has buffering at various layers, and most NIC cards themselves have buffering in both directions.

In Linux, some of this is controlled through sysctl:

https://files.ettus.com/manual/page_usrp_x3x0_config.html#x3x0cfg_hostpc_netcfg_sockbuff
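
The knobs on that page are the kernel's maximum socket-buffer sizes. A minimal
sketch of what goes into /etc/sysctl.conf (the values below are only
illustrative; take the recommended ones from the manual page above):

    # maximum receive / send socket buffer sizes, in bytes
    net.core.rmem_max=33554432
    net.core.wmem_max=33554432

UHD then asks for large per-stream buffers through the recv_buff_size /
send_buff_size device arguments, which the kernel will only honor up to these
limits.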

Last time I worked on a 1 GBit Ethernet hardware driver (10+ years ago), there was considerable buffering in either direction. If an application presents data at a rate much higher than the hardware can actually "move those bits", the kernel places the application in a WAIT state not when the hardware says "can't do it right now", but when the buffers become full.
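
To make "when the buffers become full" concrete: for a UDP stream the relevant
buffer is the socket send buffer, which the application can inspect or enlarge
itself. A generic POSIX sketch, nothing UHD-specific, and the 8 MB figure is
only illustrative:

    // Query and enlarge the kernel send buffer on an already-open socket.
    // Linux caps the request at net.core.wmem_max (see the sysctl link above).
    #include <sys/socket.h>
    #include <cstdio>

    void show_and_grow_sndbuf(int fd)
    {
        int size = 0;
        socklen_t len = sizeof(size);
        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len);
        std::printf("current SO_SNDBUF: %d bytes\n", size);

        int want = 8 * 1024 * 1024; // illustrative value only
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &want, sizeof(want));
    }

Once that buffer is full, a blocking send() is exactly the WAIT state
described above.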

When the NIC driver's TXRDY interrupt (I'm paraphrasing here; it has been 10+ years) fires, the driver code pulls the next buffer off the FIFO and hands it to the hardware.

So, yes, there's buffering in both directions, which you would entirely expect to "smooth over" small, non-deterministic latencies in application scheduling.

