Setting the buffer size to `MTU` in the YAML file causes the generated FIFO depth to be 16 rather than the MTU-sized 1024; the `MTU` parameter is 10 (the log2 of the packet size), and a requested depth of 10 presumably gets rounded up to the next power of two. I don't think that's what was intended. But the value used for the input/output buffers of the DDC/DUC isn't really very important, which is why it hasn't caused any problems. There are other buffers in the data path to make up for that.
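For reference, the corrected entry in the block-definition YAML would look roughly like this. This is a sketch patterned on the DDC's port descriptors; the exact field names vary between UHD versions, so check the ddc.yml shipped with your release:

inputs:
  in_0:
    item_width: 32
    nipc: 1
    context_fifo_depth: 2
    # Depth in payload words: 2**MTU = 2**10 = 1024 words, i.e. one
    # full packet. A bare 'MTU' requests a depth of 10, which the
    # tools round up to a 16-deep FIFO.
    payload_fifo_depth: 2**MTU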
An MTU size of 1024 in the YAML means 1024 CHDR words, which is 2048 samples or 8192 bytes. I think the intent was to buffer one full packet, which is where the 1024 figure comes from. But it's not strictly necessary, because of the buffers in the stream endpoints and radios. You want blocks to be able to output data at a sustained rate that meets or exceeds the target sample rate. If a block is bursty, then you need buffers to compensate for that. Stream endpoints are very bursty, so they have the largest buffers. The radio is not bursty at all, but it has strict real-time requirements, so it has buffers to guard against stalls and burstiness elsewhere in the data stream.

Wade

On Mon, Feb 28, 2022 at 4:38 PM Rob Kossler <rkoss...@nd.edu> wrote:

Thanks Wade,
Regarding the typo, are you saying that the Ettus-generated images are using the wrong value for the DDC/DUC buffers, or are you saying that the generated images are OK and it's just the YAML that has a typo (one that doesn't affect the generated images)?

And, second question: what is the principle guiding Ettus' decision to want an input buffer of 1024 at the DDC? If the concept is to buffer one full packet, shouldn't it be twice that value (assuming 1 sample per clock)?
Rob

On Mon, Feb 28, 2022 at 3:36 PM Wade Fife <wade.f...@ettus.com> wrote:

I looked at the generated DDC/DUC code again, and that looks like a typo in the YAML. It should really be `2**MTU`, which is 1024, instead of just `MTU` (which is 10). So if you want a larger buffer on your block, try `2**MTU` or some other power of 2.

Wade

On Sat, Feb 26, 2022 at 5:01 PM Wade Fife <wade.f...@ettus.com> wrote:

Regarding the overflows, that's the kind of thing I would simulate to understand what's happening. My guess is that the zero insertion is blocking the flow of data, and the radio overflows because of that. Like you said, a bigger buffer should help.

Hmm, I was just glancing at the code generated by the RFNoC image builder, and the way it's setting the buffers doesn't look right, so perhaps you're not getting the buffering you expect. Let me look into that and get back to you.

As for fitting, I would start by removing everything you don't need from the YAML description. Do you need all 4 radio channels? Do you need RX and TX? Do you need 4 channels of replay? Do you need the DDC and DUC (if you only want to run at the master clock rate, then you don't)? Strip out all the blocks you don't need, along with their stream endpoints. If it still doesn't fit, then I would look at reducing the endpoint buffer sizes. The default images usually make them as large as we could fit, but you might be able to get away with smaller buffers, which really only affects TX streaming performance.

Wade
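To illustrate that last suggestion, each stream endpoint's ingress buffer is sized in the image-core YAML, roughly as below. This is a sketch: the endpoint name is arbitrary, 32768 is a typical default from stock N310 images, and whether buff_size counts CHDR words or bytes depends on the UHD version:

stream_endpoints:
  ep0:
    ctrl: True        # endpoint carries control traffic
    data: True        # endpoint carries data traffic
    buff_size: 32768  # ingress buffer; shrinking it frees block RAM and
                      # slices at the cost of TX streaming margin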
On Fri, Feb 25, 2022 at 4:19 PM Rob Kossler <rkoss...@nd.edu> wrote:

I was able to build successfully with the 'dram' clock as the 'ce' clock for my RFNoC block. But I didn't get the performance I was expecting. With my RFNoC graph of "Radio->DDC->custom-zero-padded-fft-block", the Radio had overflows when running at 125e6 but worked well when running at 62.5e6.

My current thought is that maybe I don't have enough input buffering in my custom RFNoC block. I initially had my payload input and output buffer sizes (defined in the block-definition YAML) set to 'MTU', which is how the DDC block does it. But when my build failed (attempting to add 4 of my custom blocks), I changed this from 'MTU' to '32'. It turns out that this didn't help my build succeed, but I did get a successful build after removing all Replay blocks / SEPs. So I am now trying to re-build with the 'MTU' setting, in the hope that the increased buffering will allow me to run at the 125e6 sample rate.

But, apart from more buffering, is there perhaps a different explanation why my custom FFT block, clocked at 300 MHz (with 50% insertion of zeros), is not keeping up?

On a semi-related topic, I'm wondering if anyone has suggestions regarding my build failures. The build error indicates that I needed more slices than are available (out of 69350 total, 47879 are available, but I needed 49414). If I look at the build report for the default Ettus N310 XG image (see snippet below), there is not much headroom for extra RFNoC blocks (96.44% utilization), and in my experience this is where my builds usually fail. I am wondering what I can do in the design of my custom blocks (or in the build parameters of the N310) to achieve successful builds, specifically with respect to slice utilization. Any suggestions welcome.
Thanks.
Rob

// From build report of default Ettus N310 XG image
2. Slice Logic Distribution
---------------------------

+-----------+-------+-----------+-------+
| Site Type | Used  | Available | Util% |
+-----------+-------+-----------+-------+
| Slice     | 66878 |     69350 | 96.44 |
|   SLICEL  | 40816 |           |       |
|   SLICEM  | 26062 |           |       |
+-----------+-------+-----------+-------+

On Thu, Feb 24, 2022 at 9:25 PM Rob Kossler <rkoss...@nd.edu> wrote:

Thanks for the suggestions Wade. I will first try the low-hanging fruit of using the 300 MHz DRAM clock. Fingers crossed!
Rob

On Thu, Feb 24, 2022 at 6:43 PM Wade Fife <wade.f...@ettus.com> wrote:

Hi Rob,

RFNoC doesn't support generating user clocks for you yet (the range value is not currently used). You could use the `dram` clock on the N310 and connect it to the `ce` inputs of your blocks. That should be about 300 MHz. The `rfnoc_chdr` clock is 200 MHz on the N310.

If it won't close timing with the dram clock and you want something slower, then you can modify the HDL to add the clock you want. Take a look at n3xx_clocking.v. You could probably modify the misc_clock_gen IP block to add a clock closer to 260 MHz. You'd then have to route that clock into n3xx_core, then into rfnoc_image_core, and add the new clock to n310_bsp.yml so the rfnoc_image_builder can generate code to use it. Adding custom clocks is a pretty manual process at the moment.

Wade
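For reference, connecting the dram clock to a block's ce input is a one-line addition to the clk_domains section of the image-core YAML, roughly as below. This is a sketch: the instance name fft0 is hypothetical, and a stock N310 image-core YAML will show the exact syntax for your UHD version:

clk_domains:
  - { srcblk: _device_, srcport: radio, dstblk: radio0, dstport: radio }
  # Drive the custom block's compute engine from the ~300 MHz DRAM clock:
  - { srcblk: _device_, srcport: dram, dstblk: fft0, dstport: ce }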
On Wed, Feb 23, 2022 at 10:15 PM Rob Kossler <rkoss...@nd.edu> wrote:

Hi,
I have a signal processing block that includes a zero-padded FFT (50% zeros) that I built for the N310. Because of the throttling that occurs during insertion of zeros, I expect that my FFT will need to be clocked at a bit more than twice the max sample rate. So, since I want to operate the N310 at its highest sample rate of 125 MS/s, it seems that my FFT will need to be clocked at >= 260 MHz. I'm wondering how to do that.

I've looked at the RFNoC specification, and my block is already set up to use the "CE" clock for both control & data. The RFNoC spec mentions that I can enter a "range" for my clock in the block-definition YAML. But I also see that, in the end, the top-level N310 YAML will require me to map a device clock to my block's CE clock port.

It's not clear to me how this works. Does it help to provide a range in the block-definition YAML? Or is it even necessary? How do I specify in the top-level N310 YAML which device clock maps to my block's CE clock port? It seems to me that I am missing a step (defining a clock somewhere?).

I am pretty much a novice, so I expect that this is the cause of my confusion. I am even struggling to figure out what the current clock rates are (rfnoc_ctrl, rfnoc_chdr, ce, etc.) and where they are defined. Any help would be appreciated.
Rob
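As a footnote on the clock declaration Rob mentions: in the block-definition YAML it looks roughly like the sketch below. As Wade notes above, the frequency range is informational only; the tools do not currently use it:

clocks:
  - name: rfnoc_chdr
    freq: "[]"
  - name: ce
    freq: "[]"   # a range could be listed here, but the image builder
                 # does not currently act on it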