On 04/30/2019 11:43 AM, Luke Whittlesey wrote:
Just my 2 cents, but I think that sometimes running at a higher ADC
sample rate and then filtering digitally may be desirable. I can't say
anything specific about this case on the E312 because I'm not
familiar with the pre-selection filters or the analog filters in the
AD9361, but a higher sample rate generally gives the analog filter
more room to roll off before the ADC aliases that energy back in.
Yes, and you end up with higher dynamic range as well--but there are
diminishing returns.
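A minimal GNU Radio sketch of the "sample fast, then filter and decimate
digitally" idea (all the rates, filter numbers, frequency, and gain below
are placeholders I picked for illustration, not E312 recommendations):

#!/usr/bin/env python3
# Sketch: run the radio at a higher rate, do the final channel
# selection with a digital low-pass, and decimate down to the rate
# the rest of the flowgraph wants.  All numbers are placeholders.
from gnuradio import blocks, filter, gr, uhd
from gnuradio.filter import firdes

class oversample_then_decimate(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        hw_rate = 4e6        # rate the radio actually delivers
        out_rate = 500e3     # rate the downstream blocks want
        decim = int(hw_rate / out_rate)   # 8

        src = uhd.usrp_source(
            ",".join(("", "")),
            uhd.stream_args(cpu_format="fc32", channels=[0]),
        )
        src.set_samp_rate(hw_rate)
        src.set_center_freq(915e6, 0)   # placeholder frequency
        src.set_gain(30, 0)             # placeholder gain

        # Digital low-pass does the final selectivity before decimation
        taps = firdes.low_pass(1.0, hw_rate, 0.4 * out_rate, 0.1 * out_rate)
        lpf = filter.fir_filter_ccf(decim, taps)

        head = blocks.head(gr.sizeof_gr_complex, int(out_rate * 10))  # ~10 s
        sink = blocks.file_sink(gr.sizeof_gr_complex, "/tmp/decimated.dat")
        self.connect(src, lpf, head, sink)

if __name__ == "__main__":
    oversample_then_decimate().run()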
On Tue, Apr 30, 2019 at 10:53 AM Marcus D. Leech via USRP-users
<[email protected]> wrote:
On 04/30/2019 09:15 AM, Jason Matusiak wrote:
I guess I would need a block to count samples if I'm going to a
null sink? Otherwise I'm not sure how to gauge how many samples
have passed.
I was just thinking you could look at runtime--for a large enough
sample count, the initial startup overhead would be a small
fraction of the total runtime.
You could use the "benchmark_rate" tool to do this as well.
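In a flowgraph, something like this rough sketch does it (the device args,
rates, and sample count are placeholders): let a Head block pass a fixed
number of samples into a Null Sink, time the run, and divide.

#!/usr/bin/env python3
# Sketch: measure the *effective* sample rate by timing how long it
# takes to collect a fixed number of samples.  All values are placeholders.
import time
from gnuradio import blocks, gr, uhd

SAMP_RATE = 500e3
NUM_SAMPS = int(50e6)   # large enough that startup overhead is negligible

class rate_check(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        src = uhd.usrp_source(
            ",".join(("", "")),   # add master_clock_rate=... here if desired
            uhd.stream_args(cpu_format="fc32", channels=[0]),
        )
        src.set_samp_rate(SAMP_RATE)
        print("Requested %.0f sps, device reports %.0f sps"
              % (SAMP_RATE, src.get_samp_rate()))
        head = blocks.head(gr.sizeof_gr_complex, NUM_SAMPS)
        sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(src, head, sink)

if __name__ == "__main__":
    tb = rate_check()
    t0 = time.time()
    tb.run()              # returns once the Head block has passed NUM_SAMPS
    elapsed = time.time() - t0
    print("Got %d samples in %.2f s -> %.1f sps effective"
          % (NUM_SAMPS, elapsed, NUM_SAMPS / elapsed))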
Well, this is probably ignorant of me, but I assumed a higher
master clock rate would give me some sort of speed benefit
somewhere. I guess I can't say what, since it has nothing to do
with the Linux CPU speed... What is the benefit of running at a
slower rate?
No, master_clock_rate has *nothing* to do with CPU speed. It just
controls the rate that the ADC/DSP/DDC chain runs at in the radio
section.
There's nothing inherently *wrong* with running at a very high
decimation ratio, it's just that it isn't *necessary*.
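For concreteness, the overall decimation from the radio clock down to the
delivered rate is just master_clock_rate divided by the sample rate. A small
sketch using the numbers from this thread (assuming the usual gr-uhd calls;
adjust the device args for your setup):

# Sketch: overall decimation = master_clock_rate / sample_rate.
# 30e6 and 500e3 are the numbers from this thread.
from gnuradio import uhd

src = uhd.usrp_source(
    "master_clock_rate=30e6",
    uhd.stream_args(cpu_format="fc32", channels=[0]),
)
src.set_samp_rate(500e3)

mcr = src.get_clock_rate()      # radio-side clock actually in use
fs = src.get_samp_rate()        # delivered rate actually in use
print("master clock : %.3f MHz" % (mcr / 1e6))
print("sample rate  : %.3f kHz" % (fs / 1e3))
print("decimation   : %.1f" % (mcr / fs))   # 30e6 / 500e3 = 60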
------------------------------------------------------------------------
*From:* USRP-users <[email protected]> on behalf of Marcus D. Leech via USRP-users <[email protected]>
*Sent:* Monday, April 29, 2019 8:33 PM
*To:* [email protected]
*Subject:* Re: [USRP-users] E312 wrong sample rate
On 04/29/2019 03:28 PM, Jason Matusiak via USRP-users wrote:
I was debugging a problem with a flowgraph when I realized that
I wasn't getting the number of samples I expected out of the
USRP source block. If I set the sample rate too low, it tells
me it has to set the sample rate to 0.125MSps. Currently I have
a single stream from my source block, 30MHz clock rate, 500kHz
sample rate.
If I run for 20 seconds streaming the data to a file (with
Unbuffered set to Off) as complex samples, I would expect to see
20s * 8B * 500kHz = 80MB of data in the file.
Instead, running it empirically (so the numbers will be ballpark
and not exact), I see a file size of 116153944 bytes. If I
assume that the sample rate was really 500kHz, that means it ran
for 29.03s, which is obviously off by roughly 50%. If I instead
assume that 10s of data was really collected, that works out to
an actual sample rate of 1.451924MSps.
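Here's the quick back-of-envelope check I'm doing (plain Python, just the
arithmetic on the numbers above):

bytes_per_sample = 8          # complex float32 = 2 * 4 bytes
samp_rate = 500e3             # requested sample rate
run_time = 20                 # seconds streamed

print(run_time * bytes_per_sample * samp_rate / 1e6)    # 80.0 MB expected

observed_bytes = 116153944
print(observed_bytes / bytes_per_sample / samp_rate)     # ~29.04 s if the rate really was 500 kS/s
print(observed_bytes / bytes_per_sample / 10.0)          # ~1.452 MS/s if only 10 s were really collected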
If I run these tests at the minimum 125kHz sample rate, I see
things off by about double what I would expect.
Moving my sample rate to around 1MSps seems to give results
closer to what I expect, but of course I can't write files that
fast without getting 'O's on the screen. Ultimately I need to
use two receivers, so I don't believe I can push the clock
rate above 30.72MHz.
I am running UHD-3_14 with RFNoC enabled (though I am not using
RFNoC in this particular flowgraph). What am I missing here?
Have it write to /dev/null, and time how long it takes to gather
some large number of samples, and go from there.
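The same measurement can be done without GNU Radio, assuming your UHD
build includes the Python API (the frequency, gain, and sample count below
are placeholders):

#!/usr/bin/env python3
# Sketch: grab a fixed number of samples through the UHD Python API,
# time it, and compute the effective rate.  Startup/tuning time is
# included, so use a large sample count.  All numbers are placeholders.
import time
import uhd

NUM_SAMPS = int(10e6)
RATE = 500e3
FREQ = 915e6   # placeholder centre frequency

usrp = uhd.usrp.MultiUSRP("")        # add device args here if needed
t0 = time.time()
samps = usrp.recv_num_samps(NUM_SAMPS, FREQ, RATE, [0], 30)
elapsed = time.time() - t0
print("Received %d samples in %.2f s -> %.1f sps effective"
      % (samps.shape[-1], elapsed, samps.shape[-1] / elapsed))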
If your delivered sample rate is 500ksps, I don't see why you
need a master clock rate as high as 30Msps, but perhaps you have
your reasons.
_______________________________________________
USRP-users mailing list
[email protected]
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com