Hi Ahmad,
I think both the sink and the source start consuming/producing samples as
fast as they can. If the buffer feeding a hardware sink runs empty, you will
see "U"s printed on the command line, for under-run. If the buffer after a
hardware source fills up, you will see "O"s printed, for over-run.
If nothing is printed, no samples are lost and each block runs as
fast as it can.
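To illustrate, a minimal version of such a flow graph in Python could look
roughly like the sketch below (an illustration only, assuming gr-uhd and the
GNU Radio 3.10 Python API; the file name, device address, and sample rate are
placeholders):

from gnuradio import blocks, gr, uhd

class file_to_hw_sink(gr.top_block):
    """File source -> UHD hardware sink (placeholder parameters)."""

    def __init__(self):
        gr.top_block.__init__(self, "file_to_hw_sink")

        # The file source produces samples as fast as downstream buffer
        # space allows; it has no rate limiting of its own.
        self.src = blocks.file_source(
            gr.sizeof_gr_complex, "samples.fc32", repeat=True)

        # The UHD sink consumes samples at the hardware rate; if its input
        # buffer runs empty, UHD prints "U" (under-run) on the console.
        self.sink = uhd.usrp_sink(
            "",  # device address (placeholder: first device found)
            uhd.stream_args(cpu_format="fc32", channels=[0]),
            "",  # no length tag
        )
        self.sink.set_samp_rate(1e6)  # placeholder sample rate

        self.connect(self.src, self.sink)

if __name__ == "__main__":
    tb = file_to_hw_sink()
    tb.start()  # all blocks start running as soon as the scheduler does
    input("Press Enter to stop...")
    tb.stop()
    tb.wait()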
Also check out this tutorial on sample rates, which explains this in
more detail:
https://wiki.gnuradio.org/index.php?title=Sample_Rate_Tutorial
Best,
Fabian
On 25.04.23 at 08:23, Ahmad Oweis wrote:
Hi all,
I'm investigating the factors behind the latency in my simple GRC flow
graph. I have a theory, and I'd be grateful if someone could confirm or
refute it.
Say I have a simple flow graph consisting of a file source connected to
a hardware sink.
My understanding: when I run the flow graph, the source starts producing
samples and storing them in the buffer. In the meantime, the hardware
sink is initializing (loading FPGA, etc.). Once the hardware is ready to
transmit samples, it starts consuming from the buffer.
This initialization delay adds to the overall system latency. Is this
correct, or does the source only start producing samples after the
hardware is initialized?
If my understanding is correct, how can we avoid this delay? Is there a
way to ask the file source to wait until the hardware is ready and then
start sending samples?
Thank you
--
Ahmad Oweis