Hi Phelps,

I've been dealing with latency issues of my own, and this seems to be the best 
solution so far (big thanks to Josh!):

http://lists.gnu.org/archive/html/discuss-gnuradio/2013-04/msg00211.html

The nice thing about this method is that it allows you to manage buffering at a 
system level as opposed to per-block.  I've been able to limit my latency to 
about 50 ms, but you can probably do better depending on how much processing 
you're doing.  Hope this works for you!
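
If it helps, the trick (as I understand it) is to cap noutput_items for the
whole flowgraph when you start the top block, rather than tuning individual
blocks.  A minimal sketch, with the cap value purely illustrative:

    from gnuradio import gr

    tb = gr.top_block()
    # ... connect your blocks here ...

    # Cap noutput_items for every block in the flowgraph; smaller caps
    # mean less data buffered in flight, and therefore lower latency.
    tb.start(max_noutput_items=512)
    tb.wait()

Smaller caps cost more scheduler overhead per sample, so there's a floor
below which throughput starts to suffer.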

Jordan


On Apr 22, 2013, at 10:59 PM, Phelps Williams wrote:

> I created a source block whose work() sits in select() on a UDP socket with 
> a timeout of 10 msec.  If select() times out rather than signaling a read, a 
> fill pattern of noutput_items samples is written.  If data arrives on the 
> UDP socket, the datagram is copied into the output items vector.  This 
> flowgraph ends with a UHD USRP sink.
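> 
> A stripped-down sketch of the block (Python; the names are simplified, and 
> the socket setup, fill pattern, and partial-datagram handling are elided):
> 
>     import select
>     import numpy
>     from gnuradio import gr
> 
>     class udp_pattern_source(gr.sync_block):
>         def __init__(self, sock, fill_pattern):
>             gr.sync_block.__init__(self, name="udp_pattern_source",
>                                    in_sig=None, out_sig=[numpy.complex64])
>             self.sock = sock          # bound UDP socket
>             self.fill = fill_pattern  # numpy array written on timeout
> 
>         def work(self, input_items, output_items):
>             out = output_items[0]
>             ready, _, _ = select.select([self.sock], [], [], 0.010)
>             if not ready:
>                 # select() timed out: emit the fill pattern instead.
>                 n = min(len(out), len(self.fill))
>                 out[:n] = self.fill[:n]
>                 return n
>             # A datagram is waiting: copy it into the output buffer.
>             dgram = self.sock.recv(len(out) * out.itemsize)
>             samples = numpy.frombuffer(dgram, dtype=out.dtype)
>             n = min(len(out), len(samples))
>             out[:n] = samples[:n]
>             return n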
> 
> I am consistently observing latency through my flowgraph of around 2 to 
> 2.5 sec.  The total latency doesn't appear to depend on the sample rate 
> (I've tested 1 Msps and 2 Msps).  When I intentionally underrun the 
> flowgraph by setting set_max_noutput_items to less than 10 msec worth of 
> samples, the latency through the graph drops to between 80 msec and 
> 200 msec.  Intentionally underrunning UHD isn't an option for my 
> application, though, because an intermittent modulated signal from the USRP 
> causes issues.
> 
> I am attempting to use the global (via gr.top_block.start()) and 
> block-specific max_noutput_items settings to limit the total latency in my 
> flowgraph, but these settings don't appear to control the primary source of 
> the latency.
> 
> My suspicion is that UHD or the USRP has a transmit buffer which is the 
> source of my problem.  I tried adjusting the send_buff_size and 
> recv_buff_size transport parameters described here: 
> http://files.ettus.com/uhd_docs/manual/html/transport.html, but neither 
> seemed to have any impact.
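> 
> Concretely, I was passing them as device arguments, roughly like this (the 
> address and buffer sizes here are just placeholders):
> 
>     from gnuradio import uhd
> 
>     sink = uhd.usrp_sink(
>         device_addr="addr=192.168.10.2,send_buff_size=8192",
>         stream_args=uhd.stream_args(cpu_format="fc32"))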
> 
> I played with setting set_max_noutput_items on the source block to the 
> exact number of samples expected during the 10 msec select timeout in my 
> custom source block.  This works for a while but occasionally underruns.  
> Also, as UDP traffic moves through the block, the total latency slowly 
> increases.  It also seems wrong to throttle the flowgraph at both the source 
> and the sink, much as it is problematic to use a throttle block with a UHD 
> source or sink.  set_max_noutput_items doesn't seem like the right solution 
> for this problem.
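> 
> For concreteness, that exact number is just sample rate times timeout, so 
> for example:
> 
>     samp_rate = 1e6                 # 2e6 in the other test
>     n = int(samp_rate * 0.010)      # 10 msec -> 10,000 samples
>     src.set_max_noutput_items(n)    # per-block cap on my custom source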
> 
> Decreasing GR_FIXED_BUFFER_SIZE in gr_flat_flowgraph makes a significant 
> improvement, but it isn't a viable option for my application because it is a 
> universal setting for all GNU Radio based radios running on my system, some 
> of which require the default buffer size to operate efficiently.
> 
> There has been a little discussion of this topic on the mailing list in the 
> past, but I haven't found examples that include UHD / USRP with a flowgraph 
> that isn't intentionally underrunning (i.e., with a typical UDP source 
> block).
> 
> I have also experimented with the maximum socket buffer sizes allowed by 
> Linux.  The UHD-recommended configuration and much smaller variations don't 
> appear to have any impact on this latency:
> $ sudo sysctl net.core.rmem_max
> net.core.rmem_max = 50000000
> $ sudo sysctl net.core.wmem_max
> net.core.wmem_max = 1048576
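> 
> For anyone reproducing this, the values above were set at runtime with, 
> e.g.:
> 
> $ sudo sysctl -w net.core.rmem_max=50000000
> $ sudo sysctl -w net.core.wmem_max=1048576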
> 
> For reference, I am running:
> UHD 003.005.002
> GNU Radio 3.6.4.1
> 
> tl;dr
> My expectation is that UHD maintains a transmit buffer and that, when this 
> buffer gets low, it triggers upstream work() calls in the flowgraph to 
> refill it.  Is this not the case?  How can I manipulate the size of this 
> buffer?  Is there any way to confirm its size?
> 
> Thanks!
> 
> -Phelps

_______________________________________________
Discuss-gnuradio mailing list
Discuss-gnuradio@gnu.org
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio
