On 04/29/2019 03:28 PM, Jason Matusiak via USRP-users wrote:
I was debugging a problem with a flowgraph when I realized that I
wasn't getting the number of samples I expected out of the USRP
source block. If I set a sample rate too low, it tells me it has to
coerce the sample rate to 0.125 MSps. Currently I have a single stream
from my source block, a 30 MHz master clock rate, and a 500 kHz sample rate.
If I run for 20 seconds streaming the data to a file (with Unbuffered
set to Off) as complex samples, I would expect to see 20 s * 8 B * 500 kHz
= 80 MB of data in the file.
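The expected-size arithmetic above can be sanity-checked with a short sketch (not from the original post; the variable names are mine). A GNU Radio gr_complex sample is a complex float32, i.e. 8 bytes:

```python
duration_s = 20
sample_rate = 500e3   # requested rate, 500 kSps
bytes_per_sample = 8  # complex64: 4-byte I + 4-byte Q

expected_bytes = duration_s * sample_rate * bytes_per_sample
print(expected_bytes)  # 80000000.0 bytes = 80 MB
```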
Instead, running it empirically (so the numbers are ballpark rather
than exact), I see a file size of 116,153,944 bytes. If I assume the
sample rate really was 500 kHz, that means it ran for 29.03 s, roughly
45% longer than expected. Equivalently, if 20 s of data really was
collected, that means I had an actual sample rate of about 726 kSps,
roughly 1.45x the requested rate.
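Working backwards from the observed file size (a quick check of my own, using the 20-second run length stated above):

```python
observed_bytes = 116_153_944  # file size reported in the post
bytes_per_sample = 8          # complex float32
duration_s = 20               # nominal run length

samples = observed_bytes / bytes_per_sample
actual_rate = samples / duration_s
print(actual_rate)  # 725962.15, i.e. ~726 kSps vs. the requested 500 kSps
```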
If I run these tests at the minimum 125 kHz sample rate, the file size
is off by about a factor of two from what I would expect.
Moving the sample rate up toward 1 MSps gives results closer to what I
expect, but of course I can't write files that fast without getting
'O' (overflow) indications on the console. Ultimately I need to use
two receivers, so I don't believe I can push the clock rate above 30.72 MHz.
I am running UHD-3_14 with RFNoC enabled (though I am not using RFNoC
in this particular flowgraph). What am I missing here?
Have it write to /dev/null, and time how long it takes to gather some
large number of samples, and go from there.
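One way to do that timing test from Python rather than eyeballing file sizes; this is a minimal sketch of my own, where `read_chunk` is a hypothetical stand-in for whatever call pulls samples from the source (e.g. a UHD receive call):

```python
import time

def measure_rate(read_chunk, total_samples, chunk_samples=65536):
    """Time how long it takes to pull total_samples from a sample source.

    read_chunk(n) is any callable returning n samples; the delivered
    sample rate in Sps is received samples divided by elapsed time.
    """
    received = 0
    t0 = time.monotonic()
    while received < total_samples:
        received += len(read_chunk(chunk_samples))
    elapsed = time.monotonic() - t0
    return received / elapsed
```

Comparing the returned rate against the requested one shows immediately whether the coerced rate differs from what was asked for.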
If your delivered sample rate is 500 kSps, I don't see why you need a
master clock rate as high as 30 MHz, but perhaps you have
your reasons.
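For context on why requested rates get coerced: the DDC can only decimate the master clock by an integer factor, so the achievable rates are master_clock / N. A rough sketch (my own simplification; real devices impose further constraints, such as halfband filter stages preferring even decimations):

```python
def coerced_rate(master_clock_hz, requested_hz, max_decim=1024):
    """Return the closest achievable rate, assuming any integer decimation."""
    decims = range(1, max_decim + 1)
    best = min(decims, key=lambda d: abs(master_clock_hz / d - requested_hz))
    return master_clock_hz / best

print(coerced_rate(30e6, 500e3))  # 500000.0 (30 MHz / 60, exactly achievable)
```

With a 30 MHz clock, both 500 kSps (decim 60) and 125 kSps (decim 240) divide evenly, so the rate itself should not be the source of the discrepancy.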
_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com